Psychosocial and emotional morbidities after a diagnosis of cancer: Qualitative evidence from healthcare professional cancer patients
INTRODUCTION
It is well known that 'diseases don't read books', and healthcare professionals (HCPs), like everyone else, receive cancer diagnoses and become vulnerable to the emotional and psychological morbidity associated with them. However, there is little research evidence on the emotional and psychological experiences and needs of HCPs who become ill with cancer. The little evidence that exists is disproportionately concentrated in high-resourced countries, meaning the situation in low-resourced countries remains unknown.
BACKGROUND
Cancer is common worldwide, and the cancer burden is disproportionately concentrated in low- and middle-income countries (LMICs). Globally, in 2012 alone, 65% of all cancer deaths occurred in LMICs (International Agency for Research on Cancer, ), and this is projected to increase to 75% by 2030 (The Lancet, ). Cancer also remains prevalent and is among the leading causes of mortality and morbidity in high-income countries (HICs) with strong healthcare systems (Agur et al., ; Australian Institute of Health and Welfare, ). In many LMICs, sub-Saharan Africa included, timely and effective cancer care is hampered by the limited availability of, and access to, quality cancer care, including skilled health personnel (surgeons, oncologists, pathologists) and infrastructure to screen, diagnose and treat cancer; limited or absent public health insurance schemes and the associated high out-of-pocket spending on health care by individuals; limited, expensive and frequently out-of-stock essential cancer therapies (surgery, radiotherapy, chemotherapy, palliative care, etc.); and limited research (Dare et al., ; Nakisige et al., ; World Bank, ). For example, in Uganda there is only one radiotherapy treatment centre, based at Mulago national referral hospital, while chemotherapy is available in only three of the 16 regional referral hospitals. It is therefore unsurprising that cancer incidence, prevalence, mortality and morbidity rates are significantly higher and continue to rise in LMICs, whereas mortality rates in HICs are either decreasing or stable (Torre et al., ). The net effect is that cancer treatment outcomes are worse in LMICs than in HICs (Mallath et al., ). Receiving a diagnosis of cancer has been shown to be accompanied by significant psychosocial and emotional disruptions such as shock, anxiety, depression and stigma (Brinkman et al., ; Ljungman et al., ). In a recent study of the experiences of HCP cancer patients in Australia, Lagad et al. found that participants experienced unanticipated shock, anxiety, worry, frequent self-questioning, a sense of unfairness and difficulty accepting their diagnosis. Anecdotal evidence shows the transition from a professional to a patient role identity is a huge challenge for cancer patients with a healthcare professional (HCP) background. In her book 'The Other Side', a story of her lived experience with metastatic sarcoma, Granger, a medical doctor, wrote that handing over her professional identity and assuming the patient role was not only difficult but also undesirable. A similar lived experience is described by Dr Henry Marsh, a neurosurgeon, who, in his latest book, 'And Finally: Matters of Life and Death', narrates how he struggled with identity challenges when he became a prostate cancer patient, in particular wearing two identities: the doctor and the anxious patient.
During his first consultation with his oncologist, Marsh recalls telling him 'Please talk to me as a doctor', to which the oncologist replied, 'That's not how we do things here'. He further shares the thoughts that flooded his mind as he left the hospital after the consultation: 'I have crossed to the other side. I have become just another patient, another old man with prostate cancer, and I knew I had no right to claim that I deserved otherwise' (Marsh, ). Furthermore, in their study, Lagad et al. reported that some HCP cancer patients expressed the desire to be treated as professionals, while others wanted to be treated both as a professional and as a patient. In his book 'When Doctors Become Patients', Klitzman wrote that HCPs have an 'illness happens to them (patients) over there, not us' attitude and are used to being in charge in the clinical setting; this notion is reflected in a Lancet editorial which reported that doctors tend to view themselves as people who treat the sick and do not get sick themselves. However, when they become patients, this perceived control is surrendered, and the resultant feelings of erosion and disempowerment of their role identity lead to heightened vulnerability (Lagad et al., ; Tuffrey-Wijne, ). In this patient role, they find it hard to apply their knowledge to themselves and need support from their professional colleagues (Campbell, ). It can be argued that the support needs of HCPs who become patients differ from those of lay patients. According to Tuffrey-Wijne, an associate professor of nursing who was diagnosed with grade-2 breast cancer, HCPs who are patients experience immersion in the healthcare system from a myriad of perspectives: (1) a patient; (2) a critical analyst, studying themselves being a patient; (3) an observant HCP, assessing how other HCPs do their work; and (4) a researcher, processing and analysing healthcare structures and procedures. Other psychosocial challenges reported among HCPs who become patients include subjective limitations in accessing health care, such as reluctance to consult colleagues on emotional and treatment-related concerns because of self-disclosure issues, fear of ramifications upon return to work, the stigma of being identified as a cancer patient, and a heightened feeling of intrusion (Lagad et al., ; Tuffrey-Wijne, ). In their lived experiences of cancer, Campbell and Granger found that many HCP patients shun routine healthcare settings such as patient waiting areas and opt to be seen in separate rooms or out of hours, while for some, the daily reminder of cancer makes it hard to cope. The paucity of research evidence means that, perhaps, the nature and quality of care given to, and received by, HCP cancer patients in health settings is based on the assumption that their experience of cancer, and therefore their needs, are similar to those of non-HCP cancer patients. This study sought to add empirical evidence in this understudied area, in particular to (1) examine the psychosocial and emotional sequelae of a cancer diagnosis on HCPs in Uganda and (2) generate evidence to inform clinical practice with regard to the needs of HCPs with cancer.
METHODS
3.1 Study design and setting
A cross-sectional qualitative design employing descriptive phenomenology was used to gain in-depth accounts of the psychosocial and emotional morbidities following the diagnosis (Ellis, ; Sydney et al., ).
The study was conducted in Uganda among HCPs with lived experience of a cancer diagnosis, recruited from both private and public settings. The private, not-for-profit settings were mainly hospice and palliative care providers: Hospice Africa Kampala (HAKLA), Mobile Hospice Mbarara (MHM) and the Palliative Care Association of Uganda (PCAU), mainly through members who knew some of the eligible participants at an individual level. The public setting was the Mbarara Regional Referral Hospital (MRRH) oncology unit. All the study settings were urban.
3.2 Sampling and eligibility selection
Healthcare professionals with lived experience of cancer were recruited using purposive sampling. Inclusion criteria were (1) a current or previous diagnosis of cancer confirmed histologically or by imaging, (2) completed and/or active cancer treatment and/or hospice and palliative care (PC), (3) an HCP background and (4) ability to speak English. Participants were recruited from cancer and palliative/hospice care settings. Data saturation, the point at which no new data emerge despite further interviews, was used to determine when data collection was suspended (Ellis, ).
3.3 Data collection
Data were collected by GN, who has experience in conducting qualitative interviews with cancer and palliative care patients, using a structured demographic datasheet and an open-ended topic guide based on the aims of the research. The topic guide was first piloted on two non-HCP cancer patients. The themes in the topic guide covered participants' experiences in the following areas: pre-diagnosis, during diagnosis and after diagnosis, including impact on one's professional identity, social interactions and cancer treatment experiences. Interviews were conducted on the day, at the time and in the place preferred by each participant. Five of the interviews were conducted face-to-face and three by telephone. Open-ended and probing questions such as 'Would you please share with me how you received the news of your cancer diagnosis?', 'How did it affect you?' and 'How has undergoing cancer treatment been for you?' were used to gain a fuller understanding of the psychosocial and emotional impact of cancer on the HCPs. Interviews were audio-recorded (using a reliable mobile smartphone) and saved using anonymized codes. The interviews lasted between 24 and 58 min. Data saturation was reached at the 8th interview: emergent themes became repetitive from the 6th to the 8th interview, hence data collection was suspended after the 8th interview. GN kept a diary for reflexive purposes.
3.4 Data analysis and reporting
Pseudonymization, that is, the use of identity-concealing codes, was used in reporting findings to ensure participants' anonymity. The audio recordings of the interviews were transcribed verbatim. Colaizzi's seven-step framework of qualitative data analysis was then applied to the transcripts: (1) initial reading of all transcripts, (2) extraction of significant statements/themes, (3) formulation of meanings, (4) clustering of themes, (5) exhaustive description, (6) fundamental structure formation and (7) validation of findings (Edward & Welch, ). Initial analysis was done by GN, who read and re-read the verbatim transcripts to gain fuller familiarization with the data. Significant themes were then recorded in the margins of each interview. These initial themes were then sorted based on their thematic similarities and abstracted into broad-based clusters of meaningful themes.
The transcripts, initial themes and broad-based theme clusters were then cross-checked and discussed with PE, EN and WSA, who have experience in qualitative data analysis, and agreement was reached on the themes and thematic clusters. An exhaustive description of the theme clusters was then written to provide clearer descriptions of participants' lived narratives. Findings were validated through member checking: transcripts were returned to five of the interviewees, who confirmed that the information was a true reflection of what they had shared. The reporting of study findings was guided by the consolidated criteria for reporting qualitative studies (COREQ) 32-item checklist (Tong et al., ). The checklist has three broad domains: research team and reflexivity (8 items), study design (15 items), and analysis and findings (9 items), which are used to appraise research findings to ensure credibility.
3.5 Ethical statement
Ethical approval was obtained from the Hospice Africa Uganda Research Ethics Committee, approved protocol number HAUREC-079-20. Institutional approval was obtained from managers of the oncology clinics and hospice settings where the patients were enrolled. The clinic leads contacted potential participants and, with the patients' agreement, shared their contact details with GN, who contacted them and provided study information for informed consent. The information included the voluntariness of participation and a guarantee of anonymity and confidentiality during the conduct of the study and the reporting of findings. Participants were informed beforehand about the possibility of experiencing emotional breakdown or discomfort during or after interviews and were asked to approach GN, or the research team, for emotional support. Two HCPs declined participation in the study. Study information and consent forms, written in English, were emailed to individual participants. Participants interviewed by telephone consented virtually (by email/text), while those interviewed face-to-face consented by signature. Two HCP participants experienced emotional breakdown during interviews (one during a face-to-face interview and one during a telephone interview) and were given emotional support. A follow-up phone call to each was made a day after the interview and found the participants were doing well, with no emotional distress.
RESULTS
Three major themes emerged from the interviews (Figure ): (1) from a health provider to a patient, (2) socioeconomic challenges and (3) coping and support strategies (Table ).
4.1 Theme 1: From a healthcare provider to a patient
4.1.1 Psychological and emotional disruptions
Receiving the news of a cancer diagnosis was described as a unique and disturbing moment. Six of the participants were immersed in great psychological and emotional suffering comprising shock, anxiety, hopelessness, worries about the cost of treatment, and fear of the unknown, cancer recurrence, disability, job loss, disempowerment of professional and gender role identity, and/or death: I felt my life was coming to an end. I was upset. Actually, the handover that I had, I didn't wait for any other formalities… my mind was saying I think I'm dying (#2, RN) …I decided to go for a skin biopsy, and it confirmed chronic lymphoblastic leukaemia (CLL). I felt sorry, I was stressed (#4, RN) Findings showed that other HCPs do not know how to handle their colleagues who become cancer patients. Insensitive delivery of the unwelcome news was a common emergent theme; for example, some were told the news of a cancer diagnosis over the phone when they were not prepared, and the results were not explained, which was overwhelming for them: … I felt so scared, sad, when they told me its cancer. You know cancer is incurable. They sent the results on my phone. (#7, Midwife) They think that any patient can take it as mycosis fungoides, it's a normal thing…! A nurse is a human being, a nurse can feel, and a nurse can react. But they say mycosis fungoides! I had to go and look on internet [google] and I found it was cancer! I was shocked… (#2, RN) Participants also reported being given inadequate information when referred to higher centres for further specialist attention and during the investigations prior to the cancer diagnosis, and this resulted in emotional suffering.
One participant with skin cancer broke down as she recounted how, at a regional referral hospital, laboratory personnel told her that her biopsy ('taken off without anaesthesia') had been thrown away because she had not paid for processing it, yet no one had told her about the payment. Participants who were given the unwelcome news insensitively reported more psychological and emotional morbidities than their counterparts. One, a young, newly graduated and married clinical officer, narrated how insensitively he was given the news of his cancer diagnosis by a consultant surgeon: He held the envelope and told me; you've to wait for Doctor… to give you the results. Of course, it came into my mind! If it was good news, he would have just given the results to me. So, telling me to wait for the Doctor., as a health worker, I suspected cancer. I started seeing myself as someone who's not having more than 10 years to live; losing hope, seeing what I have been planning (…). I became speechless… After the Doctor coming, he called me, and he was in a hurry. He gave me the results when he was hurrying, and he told me "I have to give you the results how they are. You have a cancer…" (#3, Clinical Officer) Receiving contradictory information about their state of health and the stage of their cancer was also reported, as were occasions of a conspiracy of silence, where the attending health provider 'blanketed' the news of a cancer diagnosis from the HCP cancer patient. A noteworthy observation is that emotional suffering heightened as cancer advanced to incurable stages; for example, one participant, a widow bedbound with metastatic cancer of the cervix and two colostomies, exhibited more severe emotional suffering than colleagues with early-stage, curable disease: He (the doctor) told me "your cancer is still in early stages and the uterus we already removed it; you'll be okay". What hurt me most is the doctor to tell me he forgot to remove the lymph node (inguinal). Even in Kampala, people told me if they had removed the lymph node, I'd have cured. Then to tell me that the uterus got cured, that now the cancer is on the cervix, it has gone to the intestines, I do not understand, and it confuses me (#7, Midwife) Healthcare professionals were not spared the challenges of the health system, including inadequate supplies of medications, too few health workers to review them, insensitive and uncaring health workers, and expensive and oftentimes unaffordable cancer treatment and investigations. These exacerbated their psychological symptoms to the extent that one abandoned treatment and developed suicidal ideation: When I reached oncology clinic, because of delaying us, it came almost to noon when health workers had not seen any patient and remember I'd come the day before, I slept there, that stress exacerbated the other stress (suicidal thought he had had the previous night). I decided to leave the chemo. I rode my motorcycle… I wanted to commit suicide… (#3, Clinical Officer) Psychological morbidities were fewer and less severe among those who expected a cancer diagnosis, especially if they were older: I knew it would be cancer. I was not shocked at all. As you get older, you get used to the fact that you can die. Death is part of life. (#6, MD) …I think I was not surprised because I knew the diagnosis would be cancer.
(#1, MD) For some HCPs, psychological and emotional morbidities were influenced by their professional knowledge of cancer, to the extent that even when they were declared cancer free, they continued to worry about the possibility of recurrence, with associated or eventual deterioration in health status (for example, disability and becoming dependent), as well as the financial strain of expensive investigations and treatment. The reminders of the 'ugly and painful' cancer experience they went through were enough to cause them constant worry and fear: You get scared and you're like if they say it [cancer] is coming back, how will it be? You look at the cost, who will be there to support you? Then also if the sickness gets you bedridden, it also brings those memories and it's a scare (#8, RN)
4.1.2 Living two identities: as a patient and a health professional
Participants described the professional-to-patient transition and how it impacted them, both positively and negatively. First, they grieved the loss of their professional identity. Second, they discussed the gains realized from the patient identity and how, by being patients, they developed more compassion for patients than before: The two experiences were really touching: from an in-charge to a patient on the bed. That was really humbling for me. (…) when I got the experience, it moved my heart so much. It moved my ego, self-esteem, and pride. I really came down to the level of a patient. I realized that nurse, doctor, anyone can come down to the level of a patient. You sleep on that bed, the stretcher on which you have been wheeling other people. It humbled me, and it made me so close to patients than anything else. That I will never neglect any single patient! It totally increased my compassion (#2, RN) I now don't want to see any patient, especially in pain and I leave them, because I know how pain feels. (#5, RN) The cancer patienthood experience further increased their awareness of the gaps existing in the healthcare system. They discussed challenges in cancer diagnosis, the expense of treatment and investigations, and delays, and made recommendations for improvement: The first contact with the patient, that's where the problem is; the healthcare system is full of so many delays… To have good outcome on cancer, emphasis should be on early detection, including primary prevention, screening programs, early diagnosis and treatment. We should also try to share information about the cancers that we have; most things are about ignorance. (#1, MD) For me its experience, experience is the best teacher. Once policy makers see they are healthy, it's okay. But once they have the experience, then they come to learn. So that wherever there are appeals for palliative care, cancer patients, or appeals for healthcare system, they should respond seriously. When they put a policy, let it be there. When they put a machine in the hospital, let someone take care of it. Let them follow up to see; are the machines working? Are they benefiting people? If they're student doctors, let there be someone; a senior with them, not to throw them in the hands of patients when they are not yet competent (#2, RN)
4.2 Theme 2: Socioeconomic challenges
4.2.1 Work and family-social disruptions
All eight participants discussed the various disruptions caused by cancer and the toxicities of cancer treatment; six of the participants experienced excruciating pain: Feeling pain everywhere… When you are taking those drugs (chemotherapy), you don't need to do anything.
Sometimes I feel terrible pain here (epigastrium), even though you're taking omeprazole (#4, RN) …loss of libido and erectile dysfunction started immediately after starting chemo. Then reduced appetite and losing my beautiful hair, it's disturbing me. My fingernails have changed; the palms have become too dark. It's now easy to get mouth ulcers whenever I feed on hard foods…Even people are saying I could be taking ARVs (#3, Clinical Officer) One participant, a widow completely disabled by metastatic cervical cancer, recounted how she suffered while receiving radiotherapy at the Uganda Cancer Institute: I had to foot every morning from Kawempe to get radiotherapy (about 10 km) and after treatment walk back to my rented room. The stomach is very empty, you're surviving on watermelon, you reach in the room you've no energy to prepare a meal. Even the neighbours no one cares about you. That life was not good at all (#7, Midwife) All of the participants had significant disruptions to their bodily abilities, and this affected their activities of daily living. Some had to retire early, while others had to navigate working with distressing symptoms: I work from Monday up to Friday from 8 am to 5 pm. I normally come on Wednesday to do investigations and get chemo on Thursday. When I go back, my boss tells me I have to pay back the days I didn't work while in the hospital. So, instead of resting, on Saturday and Sunday I've to work to compensate… (#3, Clinical Officer) You can't perform effectively. You're getting treatment the patients are also here waiting. When you are on chemo, you're sick, you have headache, you are what (…)! You cannot perform accordingly. That is a problem to my patients. Sometimes, when I get treatment, I get sick for a week… (#4, RN) Some professionals were battling social and psychological stigma resulting from the cancer itself or the toxicities of its treatment: The whole skin became dark, the face… People started seeing me, now I had long skirts (…). I was kind of stigmatized. Everyone would say, are you (her name)? What has happened? Of course that made me to feel bad about myself. Sometimes I would be forced to go out of Eucharistic mass before anyone sees me. It was very hard for me (#2, RN) Some sought traditional and herbal cures due to influence from their social networks, including family and friends, and after modern medicine seemed unrewarding and very slow: My family was upset, they said this is not the right treatment, come and we take you to Nairobi, others said India. Then I said no, you people, I'm under a superior and I must obey her. Others advised herbs, that they treat and cure cancer. I accessed it for two weeks, but it was very expensive, A Sacket was 250,000/=. At first, I thought I was getting better, and the side effects of methotrexate were disappearing, but after the first week, they came back. I said this doesn't work… I had paid spent 1.5 Ugandan million ($417) … (#2, RN)
4.2.2 Economic challenges
Participants grieved the financial problems arising from huge out-of-pocket expenditure on cancer investigations and treatment, and these exacerbated their psychological suffering. Those whose treatments were covered by insurance reported less financial hardship: Every day I ask myself; people get sick and are able to walk around, but me I fell sick once and got disabled. I can't walk or get myself up… I was very enterprising; rearing chicken, pigs, that's no more. I had a private clinic, but I closed it. You can't receive patients when you're like this!
I need to eat this; I can't afford it. All the money got finished on the pain and the disease. Even now I'm on loans. I have sold off almost everything I had. I feel so bad. My children (two) were at the university but now they are seated home. I have real suffering (#7, Midwife) Everything has gone down. I was a bread winner for my family. I was working at other places where I was getting some upkeep, that one I stopped. It's challenging and I would not want to go deep in those things (#4, RN)
4.3 Theme 3: Coping and support strategies
Finally, participants narrated how they had to 'relocate' to and accept the patienthood identity and learn to cope with cancer, and described the support networks that assisted them on this journey, that is, family, friends and work colleagues: After getting a diagnosis of cancer, it became so hard for me to counsel myself. I'm a health worker, but it became difficult for me to accept that I have a cancer and accept to start treatment. It became very, very difficult for me…I thought of committing suicide… (#3, Clinical Officer) If it was not that my aunt loves me, I would have done a bad thing to myself (#7, Midwife) Those who had a history of working in cancer and PC settings reported easier coping, as they were supported by their colleagues: My friends were visiting me, and being in the PC circle, people were coming from all over to visit me; from the ministry, from hospitals… People would really encourage me, and I felt supported. My daughter was there for me; my colleagues within PC were all there for me. (#8, RN) Difficulty coping was also observed among those with poor and/or inadequate social support from their loved ones, especially their family. A 29-year-old participant shared how marital relationship issues with his wife, who he felt did not support him enough, affected his coping: My appeal is to home care givers of cancer patients, to give a conducive environment to the cancer patients. When you get stress from your family, stress from the cancer (…). Cancer patients should not get any other external stress. In a conducive environment, you might even forget that you have cancer. (#3, Clinical Officer)
DISCUSSION This study presents findings on an understudied topic, with particular emphasis on how the psychosocial and emotional well‐being of HCPs is affected when they are confronted with a cancer diagnosis and become cancer patients. Findings show that HCP cancer patients, just like other cancer patients, experience considerable, or even greater, emotional and psychosocial morbidity than their non‐HCP counterparts.
The HCPs experienced varying degrees of emotional suffering including shock, anxiety, hopelessness, worries about the costs of treatment, fear of the unknown, and fear of loss of control and death. These findings confirm the available anecdotal evidence from the lived experiences of other HCPs who became cancer patients (Campbell, ; Granger, ) or suffered other chronic and mental health illnesses such as depression and bipolar affective disorder (Klitzman, ; Singh, ). Some participants' knowledge of cancer and its prognosis also influenced its psychological impact; for example, even after being declared cancer free, they continued to worry about recurrence. For some, the emotional and psychological disturbances started even before they received the diagnosis and worsened after the news was broken. Similar experiences of how having a professional background exacerbates individual emotional reactions have been reported in previous studies of HCP cancer patients (Fox et al., ; Prenkert et al., ). This finding corroborates that of an earlier study in which 71% of doctors reported feeling embarrassed about becoming a patient (Davidson & Schattner, ). Emotional and psychological suffering was most prevalent and severe in younger and middle‐aged HCPs. This was due to concerns about ‘unfinished business’ and premature losses, including loss of employment and income, as well as physical disabilities and deformities. On top of the emotional disturbances following diagnosis, inadequate and/or contradictory information about the illness from HCPs exacerbated the emotional distress of some participants. Disabling psychological symptoms have been reported in studies with non‐professional cancer patients (Natuhwera et al., ; Van Beek et al., ). The study revealed that many HCPs lack the knowledge and skills to handle their colleagues who become cancer patients. Poor or inappropriate communication, especially health providers giving the news of a cancer diagnosis insensitively, was common. HCPs who were insensitively given their diagnosis reported greater emotional and psychological suffering. This phenomenon could underscore a training gap, and hence the need to prioritize training in communication skills for HCPs working in cancer care, as demonstrated by Moore et al.  and Uitterhoeue et al. . Participants narrated the challenging experience associated with the transition from being an HCP to being a patient; loss of professional role identity was a common challenge. This finding supports those of previous studies which report how adopting a patient identity is difficult for the HCP patient (Fox et al., ; Kay et al., , ; Kenny et al., ; Lagad et al., ; Marsh, ), and undesirable (Granger, ; Klitzman, , ; Tuffrey‐Wijne, ), and heightens feelings of vulnerability among HCP patients. The findings further point to the fact that, after becoming a patient, an HCP becomes vulnerable and needs care and support from other professional colleagues, contrary to the assumed belief that they can help themselves because of their background knowledge and clinical practice. Reporting on his experience of testicular cancer, Campbell , a physician, mentioned how he was no longer able to apply his professional knowledge to help himself, but needed the support of fellow HCPs. Most participants reported significant disruptions to their work performance and to their family‐social domain. Physical symptoms related to cancer and its aggressive treatments, for example nausea and fatigue, were predominant contributors to such disruption.
Previous medical literature has reported how health professionals' ill health can adversely affect their performance and functioning, including the ability to provide compassionate care to their patients (Davies, , Oxtoby, ). Some reported fear and stigma of being known and identified as patients. All narrated how getting a cancer diagnosis adversely impacted their psychological and social well‐being, and ‘metastasized’ to their loved ones (family, friends, work colleagues), who also suffered psychological distress, shock, worry, etc. Stigma related to cancer and non‐cancer illnesses has been reported in other medical literature and in studies conducted with health professional patient populations, and it is a risk factor for, and exacerbates, psychological morbidity (Fox et al., ; Kay et al., , ; Marsh, ; Natuhwera et al., ). There is thus an urgent need to identify strategies to destigmatize cancer, as stigma could negatively impact healthcare‐seeking behaviours and lead to poor cancer treatment outcomes. Barriers to cancer care access, including prolonged delays before accessing care, were reported by most participants. These barriers included limited availability of specialist services, including palliative care (personnel and equipment) for investigations and treatment, and the unaffordability of cancer care, both investigations and treatments. These barriers have been reported in other studies in the context of LMICs (Arbyn, ; IAEA, ; Sullivan et al., ; WHO, ). Participants who had a history of working in palliative care narrated how, in some instances, they were disappointed as they would not be receiving palliative care for their symptoms, or if they did it was late, and this caused them unnecessary suffering; “I knew what I needed for that pain; I needed morphine, but they would not give me morphine”, a surprising finding reported by a participant who, at some point, received care in the UK, an HIC. Delays and challenges receiving palliative care for distressing symptoms were also reported by other participants who received care in Uganda. Financial suffering was commonest and greatest in those not covered by insurance, deepening their psychological suffering. While this finding is unsurprising, it is important to understand it in a broader sense based on the socioeconomic context in Uganda, where the financial hardship is not just about paying for treatment but also about travel, subsistence and not having a job. For example, agonizingly, one of the participants (a clinical officer) decided to abandon chemotherapy treatment and developed suicidal thoughts due to multicomponent stress exacerbated by delays while receiving treatment, financial strain and social and emotional disruptions resulting from cancer. This finding corroborates evidence from previous studies which report increased susceptibility to mental health issues, including an increased risk of suicide, among doctors (Gerada, ; Kay et al., ; Learner, ; Schlicht et al., ). The study has shown that a diagnosis of cancer has far‐reaching financial ramifications for patients and their entire social systems, and financial suffering was a common emergent theme. Some were forced to sell houses and land to finance their treatment and meet other obligations. This pushed them into extreme poverty, further heightening their psychological distress. This study identified how six of the participants forced themselves to continue working, despite pain and other significant distressing cancer symptoms.
One participant reported how unbearable pain made her quit work without even informing her employer, while another, who could no longer withstand the demands of work, retired early. A widow who was bedridden with advanced metastatic cancer broke down as she narrated how cancer and constant spending on its treatments (multiple surgeries, radiotherapy and chemotherapy) drove her into heavy debt and the loss of all her possessions. Her two children also had to drop out of university. These findings are not surprising, given that the lack of a national insurance scheme in Uganda dictates that individuals have to bear out‐of‐pocket spending on their health care, and HCP cancer patients in this study, in particular those who lacked insurance cover, were not spared. Anderson et al. , in their study, found that 46% of cancer patients attending a regional referral hospital in Uganda met the World Bank's definition of extreme poverty (living on US$1.90/person/day). Out‐of‐pocket spending on health care is prevalent at 48.1%, 33.3% and 13.7% in LMICs, Upper‐Middle Income Countries and HICs respectively (World Bank, ). It should be noted that Uganda is one of the poorest countries in the world. According to the Multidimensional Poverty Index (MPI) survey report 2022 by the Uganda Bureau of Statistics (UBoS)  and World Bank data, 42% of Ugandans live in multidimensional poverty, while 14.7% (of 45 million Ugandans) are deprived of an income and are extremely poor (UBoS, ). Finally, participants narrated how they had to relocate and accept the cancer patient‐hood identity. Support received from friends, work colleagues and family members was utilized in coping with cancer patient‐hood. Spiritual and religious structures and systems were also utilized and provided a further level of support. These findings emphasize the role of holistic care in the management of the complex needs of cancer patients. Participants who were attending, or working in, PC reported better support and easier coping than their colleagues, in addition to being able to advocate for themselves to be given palliative care for control of their symptoms, including but not limited to pain. This finding supports the active role of palliative care in cancer care, as a key component of Universal Health Coverage. Participants who had completed treatments reported their hopes for cure were restored after they were assessed and told the cancer was no longer there. However, some of them continued to experience post‐traumatic stress disorder‐like symptoms, such as intrusive thoughts and fears of disease recurrence when they encountered reminders of the disease, a phenomenon reported elsewhere (Campbell, ; Granger, ; Lagad et al., ; Tuffrey‐Wijne, ). 5.1 Strengths and limitations No known documented study has been conducted in Africa to examine the psychosocial and emotional morbidities caused by cancer as lived by HCPs. This study, therefore, provides novel insights into this under‐researched area and evidence critical for informing clinical practice, or even policy to some extent. The COVID‐19 pandemic and lockdowns presented practical transport difficulties, meaning some participants were interviewed by telephone, which could have limited elicitation of some of their experiences and the capture of non‐verbal cues, so limiting the richness of the data.
This also meant it was not possible to observe emotional distress and offer face‐to‐face emotional support to participants who could have suffered emotional distress and preferred not to disclose it to the interviewer during interviews. Hence, the interviewer had to rely on participant‐disclosed emotional distress, and that which was audibly evident during the telephone interview. Similarly, challenges in accessing health care, as well as socioeconomic and psychological disruptions due to the pandemic, could also have exacerbated some of the participants' experiences. Limited research on the topic also presented a unique limitation, as the researchers did not have adequate available evidence against which to compare the results of the current study. CONCLUSION Many HCPs are now diagnosed with cancer every year. Findings from this study indicate that getting a cancer diagnosis and the transition from being an HCP and care provider to being a patient and recipient of care is a unique experience. The patient role identity is associated with significant multidimensional disruption and suffering, including feelings of disempowerment, for example, grieving threatened and/or actual loss of professional role identity, fear of the consequences of disease and recurrence, shock, worry, anxiety and depression. Constant reminders of cancer, financial hardships and challenges in accessing care, pain and other distressing symptoms and toxicities of treatment further exacerbate psychosocial and emotional suffering. RELEVANCE TO CLINICAL PRACTICE The study findings suggest an urgent need for action to assuage the suffering associated with cancer, including the need to: (1) improve access to care; (2) increase the communication training of cancer care specialists and other care providers; (3) intensify cancer awareness campaigns; (4) prioritize and introduce a public health insurance scheme to eliminate out‐of‐pocket costs for cancer treatment; and (5) develop guidelines for the management of HCP cancer patients. These will lessen the psychosocial, financial and emotional suffering of both HCP, and non‐HCP, cancer patients and survivors.
GN: Conceptualized the study, reviewed literature, collected data, transcribed interviews, analysis and writing up the paper, prepared and submitted manuscript for journal publication. PE: Reviewed literature, transcripts, analysed the data, expert review of the paper, final proofreading and formatting of the manuscript prior to submission to the journal. SWA: Analysis, review of the paper, prepared and proofread manuscript for journal publication. EN: Analysis, proofread and prepared manuscript for journal publication. The authors received no financial support for the study. The authors declare no competing interests. The HCP cancer patients and survivors are participants in the study but did not contribute to the design or undertaking of the study.
Effectiveness and content components of nursing counselling interventions on self‐ and symptom management of patients in oncology rehabilitation—A systematic review
30ef07bc-f412-42df-8d96-e8ebad6514ed
10077385
Internal Medicine[mh]
INTRODUCTION Living with an oncological disease includes dealing with the psychological, physical, social and existential consequences of the disease itself and the side effects of treatments. These burdens and consequences are often treated nowadays in oncology rehabilitation (Strasser, ). As these symptoms and burdens often persist long term (Wu & Harden, ), they have important implications for the self‐management of patients (Foster et al., ). Supporting the self‐management of cancer survivors is essential in light of the extended periods that patients often live with their disease and its consequences (Campling & Calman, ; McCorkle et al., ). Nurses support oncology‐rehabilitation patients in their self‐management by using evidence‐based knowledge (Gutenbrunner et al., ; Suter‐Riederer et al., ). Often, such support is delivered in the form of counselling and educational interventions, as these forms are important elements of self‐management support programs (Federal Office of Public Health [FOPH], Swiss Health Leagues Conferences, ). However, little is known about the content of such consultations. For other chronic conditions, interventions in self‐management were shown to reduce symptoms, hospital admissions and unscheduled visits to doctors in people with asthma (Gibson et al., ) and to potentially improve the quality of life of rehabilitators with coronary heart disease (Anderson et al., ). Little is known about self‐management interventions in the cancer population. The Global Partners on Self‐Management in Cancer, which has shown that self‐management measures for cancer are neglected in everyday life compared to other chronic diseases (Howell et al., ). One study demonstrated that self‐management interventions statistically significantly improved sleep disturbances, stress‐related problems and the depression of cancer survivors (Risendal et al., ). Further, a systematic review in cancer‐care highlights the effectiveness of self‐management interventions in improving cancer‐related symptoms such as fatigue, depression and anxiety (Howell et al., ). However, these interventions were delivered by various health care professionals (HCP), not exclusively by rehabilitation nurses. Therefore, this study aims to summarize the evidence related to the effectiveness and content components of nurse‐led counselling interventions on the self‐ and symptom management of patients in oncology rehabilitation. BACKGROUND According to the World Health Organization (WHO), rehabilitation is ‘a set of interventions designed to optimize functioning and reduce disability in individuals with health conditions in interaction with their environment’ (World Health Organization [WHO], ). Oncology‐rehabilitation interventions are designed to enable patients to be as independent as possible and to support their participation in work, education and social roles (WHO, ). As cancer‐related challenges arise with the diagnosis, along the whole disease trajectory and even in the aftermath, rehabilitation can be implemented in various stages of the cancer‐care continuum (National Cancer Institute, Division of Cancer Control,, & Population Sciences, ) that is before, during and after cancer treatment (Silver et al., ). A scoping review of the role of nurses in oncology rehabilitation was recently conducted by our team (Mayrhofer et al., ). Nurses' role can be attributed to three features: First, they are mediators between patients, relatives and other members of the multidisciplinary rehabilitation team. 
Second, nurses provide emotional and psychological support and deliver interventions to promote the patients' coping skills and self‐management. Third, they are coordinators who inform, train and counsel people with cancer undergoing rehabilitation to strengthen their independence (Mayrhofer et al., ). Thus, providing patients with support about their self‐management is a crucial skill for rehabilitation nurses. Self‐management in the oncological context is seen as an approach an individual uses to manage disease symptoms and consequences and to optimize living with cancer (Foster et al., ; Howell et al., ). The Global Partners on Self‐Management in Cancer recommend self‐management interventions to improve patients' core skills (i.e. problem‐solving, setting goals, planning actions) and to strengthen patients' self‐efficacy (Howell et al., ). Self‐efficacy is an individual's belief in their capabilities to behave in a required way that is to take action in self‐management, self‐care or coping to achieve the desired outcome. Evidence suggests that self‐efficacy improves self‐management behaviour (Foster & Fenlon, ; Lorig & Holman, ; White et al., ). Self‐management and symptom management are linked. Lorig and Holman  demonstrated that, in the past, outcomes of self‐management interventions were often operationalized in terms of improved health status. Also, the absence of symptoms or the presence of better symptom management was regarded as an improved health status. This improved health status was the result of a change in behaviour due to the self‐management intervention. Also, the goal of a self‐management intervention cannot only be the absence of symptoms or an increase in symptom management, especially for enduringly ill people. Equally important was a greater confidence in controlling the disease (self‐efficacy) and thus in the long‐term relief of symptoms. (Lorig & Holman, ). Counselling provided by nurses that targets the improvement of patients' self‐management and self‐efficacy plays an important role in oncology rehabilitation. Counselling describes a professional relationship that empowers individuals to reach wellbeing and health through a problem‐based consideration of psychosocial and performance issues (Tracy & O’Grady, ). Counselling interventions are suitable for working with patients on their self‐management and coping skills. Previous reviews found limited evidence that counselling interventions positively affect cancer symptoms such as fatigue (Bennett et al., ; Goedendorp et al., ). The aforementioned information on counselling, self‐management and the role of nurses demonstrates the need for clarity in the topic. This article provides a systematic review of nurse‐led self‐management interventions that have already been evaluated for their effectiveness and their content components. METHODS A systematic literature review was conducted and registered at the International Prospective Register of Systematic Reviews (PROSPERO) (Booth et al., ) (registration number CRD42021239437, available from https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=239437 ). We followed the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) (Page et al., ) and the RefHunter Manual (Nordhausen & Hirt, ). 3.1 Eligibility criteria We included all the studies that investigated counselling interventions led by nurses for patients in oncology‐rehabilitation settings. 
Furthermore, eligibility criteria and the developed search strategy, corresponded to the following population, intervention, comparing condition and outcomes (PICO) statement: 3.1.1 Population Adults aged 18 years or older, suffering from any type of cancer, who attended oncology‐rehabilitation therapies. 3.1.2 Intervention Quantitative studies probing all types of consulting, counselling and educational interventions carried out by nursing professionals. Eligible interventions were designed to support patients in their self‐ and symptom management. Studies that considered interventions of psychologists, social workers or any other HCP other than nurses were excluded. 3.1.3 Comparing condition Studies compared groups that received a nursing intervention with groups that received usual care or interventions carried out by other HCP. 3.1.4 Outcomes Outcomes of interest were changes in patients' self‐ or symptom management after participation in a nursing counselling intervention. Outcomes of eligible studies could be patient reported or clinically measured. 3.1.5 Study design We included studies with a randomized controlled or a quasi‐experimental design (one‐group‐pre‐test–post‐test design). Qualitative studies were excluded and for mixed‐methods studies, only the quantitative part was considered. 3.2 Search strategy Electronic databases (MEDLINE via PubMed, CINAHL, Cochrane Library) were searched for studies published using all available records between the 8th of March and the 26th of March 2021. The search string is shown in Appendix . No time or language restrictions were applied at this review stage. Additionally, one author (V. W. B.) conducted a hand search via Google Scholar with different search terms screening the first hundred hits. The website opengrey.eu was searched for grey literature. 3.3 Study selection and data extraction After duplicate removal, two authors (V. W. B., C. L.) independently screened the titles and abstracts of all identified articles based on the selection criteria using the app Rayyan (Ouzzani et al., ). The same authors independently assessed the eligibility of articles included for full‐text screening. Any disagreements were discussed with the last author (M. K.). Full texts in languages other than English or German were excluded. Reference lists of all eligible studies were screened for further publications. Two authors (V. W. B., C. L.) independently extracted data from each study that met the inclusion criteria using a standardized Excel spreadsheet. The following information was collected: design, sample, objective, intervention, measurement, control group, primary and secondary outcomes and limitations. We only extracted outcome data relevant to the review question (i.e. quantitative data on the effectiveness of counselling interventions, but no data on patients' satisfaction was collected). We resolved any disagreements through discussion and by recourse to the last author (M. K.). 3.4 Methodological quality assessment The methodological quality of the included studies was appraised by two authors (V. W. B., C. L.) independently and following the Joanna Briggs Institute (JBI) checklists for randomized controlled trials and quasi‐experimental studies (Joanna Briggs Institute [JBI], , ). Disagreements were discussed with the last author (M. K.) until consensus was reached. As the instrument from JBI did not fulfil the purpose of detecting the bias enough, the Cochrane risk‐of‐bias tool was also used (Higgins & Green, ). 
We did not exclude any study due to qualitative limitations. 3.5 Data synthesis The extracted results were ordered, synthesized in tabular form (Table ) and summarized narratively. For the data synthesis, studies were grouped according to their interventions' components. Further, we clustered the studies according to their outcomes into the two clusters of self‐ and symptom management. We assessed clinical heterogeneity by sorting and contrasting the included studies concerning their interventions and outcomes. Due to the expected low number of involved studies and the expected heterogeneity in the population, settings, interventions and outcomes, we refrained from conducting a meta‐analysis. 3.6 Ethics Research Ethics Committee approval was not required.
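For readers who want to see the bookkeeping behind the screening workflow described above, the sketch below shows a minimal, generic approach to duplicate removal and dual‐reviewer reconciliation. It is not the authors' pipeline: title and abstract screening in this review was done in Rayyan and quality appraisal with the JBI checklists and the Cochrane tool. The record fields, helper names and example decisions in the code are hypothetical placeholders.

```python
# Illustrative sketch only: duplicate removal and dual-screening bookkeeping for a
# systematic review. The records and decisions below are invented placeholders and
# not data from this review (title/abstract screening was actually done in Rayyan).
import re


def norm_title(title: str) -> str:
    """Normalize a title so the same article exported from two databases matches."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def deduplicate(records):
    """Drop a record if its DOI or its normalized title has already been seen."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi, title = rec.get("doi"), norm_title(rec["title"])
        if (doi and doi in seen_dois) or title in seen_titles:
            continue  # duplicate of an earlier export
        if doi:
            seen_dois.add(doi)
        seen_titles.add(title)
        unique.append(rec)
    return unique


def reconcile(decision_a: str, decision_b: str) -> str:
    """Two reviewers screen independently; disagreements go to a third reviewer."""
    return decision_a if decision_a == decision_b else "discuss with third reviewer"


if __name__ == "__main__":
    records = [  # hypothetical exports from two databases
        {"title": "Nurse-led counselling in oncology rehabilitation", "doi": "10.1000/x1"},
        {"title": "Nurse-Led Counselling in Oncology Rehabilitation.", "doi": None},
        {"title": "Self-management support after cancer treatment", "doi": "10.1000/x2"},
    ]
    unique = deduplicate(records)
    print(f"{len(records)} records retrieved, {len(unique)} left after duplicate removal")
    print(reconcile("include", "exclude"))  # -> discuss with third reviewer
```

Keying on the DOI where available and on a normalized title otherwise is one common reason a retrieved count (here 314) shrinks slightly (to 304) once exports from several databases are merged.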
RESULTS 4.1 Study selection Through a systematic search, 314 relevant articles were identified. After removing duplicates, 304 articles were screened by the titles and abstracts, of which 21 (7%) were eligible for full‐text screening. After further exclusion of 14 articles, seven studies met the inclusion criteria and were finally included in the review (see Figure ). 4.2 Study characteristics Five randomized controlled trials (Chambers et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ) and two quasi‐experimental studies (Chan et al., ; Reb et al., ) were included.
Four studies were conducted in Australia (Chambers et al., ; Taylor et al., ; Turner et al., ; Yates et al., ), two in China (Chan et al., ; Zhang et al., ) and one in the United States of America (Reb et al., ). With one exception (Yates et al., ), all studies were published after 2014. Supplementary information on the study characteristics, the sample and the measurement instruments of the outcomes were provided in Table . 4.3 Quality of included studies The results of the methodological quality assessment are listed in Tables and according to JBI checklists (JBI, , ). The risk of bias in a randomized controlled trial is additionally presented by the Cochrane risk‐of‐bias tool for randomized trials in Table (Higgins & Green, ). In all randomized controlled trials, outcomes were measured in the same way for treatment and control groups (Chambers et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). In two randomized controlled trials, reasons for not determining a sample size were missing (Yates et al., ; Zhang et al., ). Turner et al.  used a sample size calculation but were not able to recruit enough patients. Only two studies recruited enough patients according to their sample size calculations (Chambers et al., ; Chan et al., ). The incomplete follow‐up was adequately described and analysed in one RCT (Turner et al., ). 4.4 Study results listed according to PICO 4.4.1 Population In total, 829 people with cancer (550 female, 66%) with a range of 20 to 354 patients per study were included. Patients suffered from lung (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), colorectal (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), breast (Chambers et al., ; Zhang et al., ), head and neck (Turner et al., ; Yates et al., ), prostate (Chambers et al., ), haematological (Chambers et al., ), (Non‐) Hodgkin cancer (Taylor et al., ). All studies were conducted in an ambulatory setting. In one study the intervention started in the inpatient setting and was continued after discharge (Zhang et al., ). 4.4.2 Interventions Four studies (Chan et al., ; Reb et al., ; Turner et al., ; Yates et al., ) based their intervention on frameworks that is the PRECEDE model of health behaviour (Yates et al., ) (Green et al., ), the chronic care self‐management model (CCM) (Reb et al., ), the hope theory (Chan et al., ) and principles of chronic disease and self‐management (Reb et al., ; Turner et al., ). In all studies, nurses used techniques to support patients in self‐ and symptom management, such as motivational interviewing (Taylor et al., ) or cognitive and instructional behavioural strategies (Reb et al., ). The persons who delivered the interventions were nurses in all studies. Nurses had to be well experienced in the clinical field (Reb et al., ; Taylor et al., ; Yates et al., ), were educated as a survivorship cancer nurse coordinator (Taylor et al., ), as an oncology nurse (Chambers et al., ) or in a Master's degree program (Chan et al., ; Reb et al., ). In one study the involved nurses received special training in communication techniques and the principles of self‐management (Turner et al., ). The interventions differed in the methods of performance, duration and frequency. A single session was conducted face‐to‐face by Reb et al.  and on the telephone by Chambers et al. . One study used a combination of face‐to‐face and telephone sessions (Yates et al., ). 
Two or more face‐to‐face and telephone session combinations were provided in three studies (Chan et al., ; Taylor et al., ; Zhang et al., ). The duration ranged from 10 minutes (follow‐up telephone counselling) (Zhang et al., ) to 60 min (face‐to‐face) (Chan et al., ; Taylor et al., ). 4.4.3 Control groups Two studies had no control group (Chan et al., ; Reb et al., ). The usual care as a control group was performed in three studies (Taylor et al., ; Turner et al., ; Zhang et al., ). Chambers et al.  conducted a control group with a psychologist‐led intervention, whereas Yates et al.  prepared a general‐patient‐education intervention as a control group. Receiving standardized information material in the control group was conducted by one study (Turner et al., ). 4.4.4 Outcomes All the studies had the outcome elements of self‐management. The most frequently named outcomes were self‐efficacy (Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ), followed by depression and anxiety (Chan et al., ; Reb et al., ; Turner et al., ). A range of instruments was used to assess the outcomes. In one study self‐efficacy was measured by a self‐developed instrument (Yates et al., ). All instruments were self‐reported. Details of the instruments are listed in Table . Outcome measurement points ranged from 1 week after the intervention (Yates et al., ) to 12 months (Chambers et al., ). The most common measurement time points were at baseline, three and 6 months after intervention (Chambers et al., ; Taylor et al., ; Turner et al., ). 4.5 Synthesis of the results Results were synthesized into the intervention content components and then clustered into the outcome groups of self‐ and symptom management. 4.5.1 Content components A total of seven components of the studied care‐counselling interventions were identified and are listed in Table . In two studies the goals in the interventions were evaluated (Chan et al., ; Yates et al., ). 4.5.2 Self‐management outcomes Self‐efficacy was measured in four randomized controlled trials (Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ) and in one quasi‐experimental study (Reb et al., ). Compared to the control group, the self‐efficacy rose statistically significantly in the intervention group 3 months after an intervention (M = 31.83 (SD = 4.46) vs. M = 28.52 (SD = 6.60), p = .049) (Zhang et al., ). Taylor et al.  identified self‐empowerment as one of the components. Self‐empowerment was defined, inter alia, as the willingness of patients to change their lifestyle in order to apply self‐management strategies in their daily lives. Six months after an intervention, the intervention group had a statistically significantly higher score in self‐empowerment compared to the control group (M = 49.50 (SD = 5.63) vs. M = 45.79 (SD = 5.85), p = .016). Improved self‐empowerment also affected patients' willingness to make lifestyle changes to apply self‐management strategies in their daily lives (Taylor et al., ). After 9 months, the intervention group demonstrated a higher willingness to make lifestyle changes (M = 3.34 vs. M = 2.83, p = .022) than the control group. Similarly, the intervention group also indicated that they had statistically significantly more information to cope with their illness compared to the control group (M = 3.47 vs. M = 3.03, p = .008). No SDs were reported (Taylor et al., ). 
No statistically significant difference in self‐efficacy between the control group and the intervention group was measured in one randomized controlled study, three and 6 months after intervention (Turner et al., ). Further, no statistically significant difference in self‐efficacy was measured in a single intervention study (Yates et al., ). A statistically significant increase in self‐efficacy was measured 2 months after the single intervention compared to pre‐intervention (M = 3.5 [SD = 0.54] vs. M = 3.7 [SD = 0.35], p = .05) (Reb et al., ). 4.5.3 Symptom management outcomes All involved studies reported symptom‐management outcomes (Chambers et al., ; Chan et al., ; Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). One week after an intervention, the intervention group demonstrated, compared to the control group, statistically significantly more self‐reported pain knowledge (M = 12.1 (SD = 5.4) vs. M = 13.1 (SD = 4.4), p < .01), fewer side effects of the pain medications (M = 30.0 (SD = 6.0) vs. M = 28.6 (SD = 5.4), p < .05) and a statistically significant increase in feelings of control over their pain (M = 33.5 (SD = 5.2) vs. M = 34.9 (SD = 5.5), p < .05). Eight weeks after the intervention, patients in the intervention group reported a statistically significant reduction in concerns about addiction to pain medication compared to the control group (M = 28.5 (SD = 8.4) vs. M = 23.0 (SD = 7.7), p < .01) and more willingness to tolerate the pain (M = 20.1 (SD = 4.9) vs. M = 18.1 (SD = 5.0), p < .01) (Yates et al., ). Depression and anxiety were measured in seven studies (Chambers et al., ; Chan et al., ; Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). Anxiety was statistically significantly reduced in the intervention group 8 weeks after intervention compared to the control group (Estimated marginal mean (EMM) = −1.8 (Standard error of the mean [S.E.M.] = 0.4) vs. EMM = −0.6 (S.E.M. = 0.4), p > .05) (Yates et al., ). Moreover, one intervention statistically significantly decreased anxiety in the intervention group compared to the control group after the intervention (M = 3.50 (SD = 1.82) vs. M = 5.26 (SD = 2.65), p = .011) (Zhang et al., ). However, no statistically significant differences in anxiety and depression were measured in three studies (Chambers et al., ; Taylor et al., ; Turner et al., ). Even though there was no statistical significance 9 months after the intervention, the control group in one study reported being less prepared to deal with feelings of depression (M = 1.24 vs. M = 0.62, p = .047) (Taylor et al., ). No statistically significant change in depression was measured 4 weeks after the intervention compared to the pre‐intervention baseline (Chan et al., ). Two months after a single management intervention, patients reported a statistically significant decrease in depression (M = 9.9 (SD = 6.1) vs. M = 6.4 (SD = 5.2), p = .001) and anxiety (M = 33.5 (SD = 10.9) vs. M = 28.8 (SD = 9.1), p = .02) compared to pre‐intervention (Reb et al., ).
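The group comparisons above are reported as means, SDs and p values; no effect sizes are given in the included studies or pooled in this review. As a purely illustrative aid, the sketch below shows how a standardized mean difference (Cohen's d) could be derived from such summary statistics. Group sizes are needed for the pooled SD and are not reported here, so the n = 30 per arm used in the example is an assumed placeholder; the printed value is not a finding of this review.

```python
# Illustrative only: a standardized mean difference (Cohen's d) computed from group
# means and SDs like those reported above. The group sizes are assumed placeholders,
# not data from the included studies, so the result is illustrative, not a finding.
from math import sqrt


def cohens_d(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    """Cohen's d using a pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd


if __name__ == "__main__":
    # Self-efficacy at 3 months, intervention vs. control (means and SDs as reported
    # above); n = 30 per arm is an assumed, purely illustrative group size.
    d = cohens_d(31.83, 4.46, 30, 28.52, 6.60, 30)
    print(f"approximate Cohen's d = {d:.2f}")  # about 0.59 under these assumptions
```

Under these assumptions the difference corresponds to a moderate standardized effect, but the figure depends entirely on the assumed group sizes and is shown only to make the reported summary statistics easier to interpret.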
Study characteristics Five randomized controlled trials (Chambers et al., ; Taylor et al., ; Turner et al., ;Yates et al., ; Zhang et al., ) and two quasi‐experimental studies (Chan et al., ; Reb et al., ) were included. Four studies were conducted in Australia (Chambers et al., ; Taylor et al., ; Turner et al., ; Yates et al., ), two in China (Chan et al., ; Zhang et al., ) and one in the United States of America (Reb et al., ). With one exception (Yates et al., ), all studies were published after 2014. Supplementary information on the study characteristics, the sample and the measurement instruments of the outcomes were provided in Table . Quality of included studies The results of the methodological quality assessment are listed in Tables and according to JBI checklists (JBI, , ). The risk of bias in a randomized controlled trial is additionally presented by the Cochrane risk‐of‐bias tool for randomized trials in Table (Higgins & Green, ). In all randomized controlled trials, outcomes were measured in the same way for treatment and control groups (Chambers et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). In two randomized controlled trials, reasons for not determining a sample size were missing (Yates et al., ; Zhang et al., ). Turner et al.  used a sample size calculation but were not able to recruit enough patients. Only two studies recruited enough patients according to their sample size calculations (Chambers et al., ; Chan et al., ). The incomplete follow‐up was adequately described and analysed in one RCT (Turner et al., ). Study results listed according to PICO 4.4.1 Population In total, 829 people with cancer (550 female, 66%) with a range of 20 to 354 patients per study were included. Patients suffered from lung (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), colorectal (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), breast (Chambers et al., ; Zhang et al., ), head and neck (Turner et al., ; Yates et al., ), prostate (Chambers et al., ), haematological (Chambers et al., ), (Non‐) Hodgkin cancer (Taylor et al., ). All studies were conducted in an ambulatory setting. In one study the intervention started in the inpatient setting and was continued after discharge (Zhang et al., ). 4.4.2 Interventions Four studies (Chan et al., ; Reb et al., ; Turner et al., ; Yates et al., ) based their intervention on frameworks that is the PRECEDE model of health behaviour (Yates et al., ) (Green et al., ), the chronic care self‐management model (CCM) (Reb et al., ), the hope theory (Chan et al., ) and principles of chronic disease and self‐management (Reb et al., ; Turner et al., ). In all studies, nurses used techniques to support patients in self‐ and symptom management, such as motivational interviewing (Taylor et al., ) or cognitive and instructional behavioural strategies (Reb et al., ). The persons who delivered the interventions were nurses in all studies. Nurses had to be well experienced in the clinical field (Reb et al., ; Taylor et al., ; Yates et al., ), were educated as a survivorship cancer nurse coordinator (Taylor et al., ), as an oncology nurse (Chambers et al., ) or in a Master's degree program (Chan et al., ; Reb et al., ). In one study the involved nurses received special training in communication techniques and the principles of self‐management (Turner et al., ). The interventions differed in the methods of performance, duration and frequency. A single session was conducted face‐to‐face by Reb et al.  
and on the telephone by Chambers et al. . One study used a combination of face‐to‐face and telephone sessions (Yates et al., ). Two or more face‐to‐face and telephone session combinations were provided in three studies (Chan et al., ; Taylor et al., ; Zhang et al., ). The duration ranged from 10 minutes (follow‐up telephone counselling) (Zhang et al., ) to 60 min (face‐to‐face) (Chan et al., ; Taylor et al., ). 4.4.3 Control groups Two studies had no control group (Chan et al., ; Reb et al., ). The usual care as a control group was performed in three studies (Taylor et al., ; Turner et al., ; Zhang et al., ). Chambers et al.  conducted a control group with a psychologist‐led intervention, whereas Yates et al.  prepared a general‐patient‐education intervention as a control group. Receiving standardized information material in the control group was conducted by one study (Turner et al., ). 4.4.4 Outcomes All the studies had the outcome elements of self‐management. The most frequently named outcomes were self‐efficacy (Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ), followed by depression and anxiety (Chan et al., ; Reb et al., ; Turner et al., ). A range of instruments was used to assess the outcomes. In one study self‐efficacy was measured by a self‐developed instrument (Yates et al., ). All instruments were self‐reported. Details of the instruments are listed in Table . Outcome measurement points ranged from 1 week after the intervention (Yates et al., ) to 12 months (Chambers et al., ). The most common measurement time points were at baseline, three and 6 months after intervention (Chambers et al., ; Taylor et al., ; Turner et al., ). Population In total, 829 people with cancer (550 female, 66%) with a range of 20 to 354 patients per study were included. Patients suffered from lung (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), colorectal (Chambers et al., ; Reb et al., ; Yates et al., ; Zhang et al., ), breast (Chambers et al., ; Zhang et al., ), head and neck (Turner et al., ; Yates et al., ), prostate (Chambers et al., ), haematological (Chambers et al., ), (Non‐) Hodgkin cancer (Taylor et al., ). All studies were conducted in an ambulatory setting. In one study the intervention started in the inpatient setting and was continued after discharge (Zhang et al., ). Interventions Four studies (Chan et al., ; Reb et al., ; Turner et al., ; Yates et al., ) based their intervention on frameworks that is the PRECEDE model of health behaviour (Yates et al., ) (Green et al., ), the chronic care self‐management model (CCM) (Reb et al., ), the hope theory (Chan et al., ) and principles of chronic disease and self‐management (Reb et al., ; Turner et al., ). In all studies, nurses used techniques to support patients in self‐ and symptom management, such as motivational interviewing (Taylor et al., ) or cognitive and instructional behavioural strategies (Reb et al., ). The persons who delivered the interventions were nurses in all studies. Nurses had to be well experienced in the clinical field (Reb et al., ; Taylor et al., ; Yates et al., ), were educated as a survivorship cancer nurse coordinator (Taylor et al., ), as an oncology nurse (Chambers et al., ) or in a Master's degree program (Chan et al., ; Reb et al., ). In one study the involved nurses received special training in communication techniques and the principles of self‐management (Turner et al., ). The interventions differed in the methods of performance, duration and frequency. 
A single session was conducted face‐to‐face by Reb et al.  and on the telephone by Chambers et al. . One study used a combination of face‐to‐face and telephone sessions (Yates et al., ). Two or more face‐to‐face and telephone session combinations were provided in three studies (Chan et al., ; Taylor et al., ; Zhang et al., ). The duration ranged from 10 minutes (follow‐up telephone counselling) (Zhang et al., ) to 60 min (face‐to‐face) (Chan et al., ; Taylor et al., ). Control groups Two studies had no control group (Chan et al., ; Reb et al., ). The usual care as a control group was performed in three studies (Taylor et al., ; Turner et al., ; Zhang et al., ). Chambers et al.  conducted a control group with a psychologist‐led intervention, whereas Yates et al.  prepared a general‐patient‐education intervention as a control group. Receiving standardized information material in the control group was conducted by one study (Turner et al., ). Outcomes All the studies had the outcome elements of self‐management. The most frequently named outcomes were self‐efficacy (Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ), followed by depression and anxiety (Chan et al., ; Reb et al., ; Turner et al., ). A range of instruments was used to assess the outcomes. In one study self‐efficacy was measured by a self‐developed instrument (Yates et al., ). All instruments were self‐reported. Details of the instruments are listed in Table . Outcome measurement points ranged from 1 week after the intervention (Yates et al., ) to 12 months (Chambers et al., ). The most common measurement time points were at baseline, three and 6 months after intervention (Chambers et al., ; Taylor et al., ; Turner et al., ). Synthesis of the results Results were synthesized into the intervention content components and then clustered into the outcome groups of self‐ and symptom management. 4.5.1 Content components A total of seven components of the studied care‐counselling interventions were identified and are listed in Table . In two studies the goals in the interventions were evaluated (Chan et al., ; Yates et al., ). 4.5.2 Self‐management outcomes Self‐efficacy was measured in four randomized controlled trials (Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ) and in one quasi‐experimental study (Reb et al., ). Compared to the control group, the self‐efficacy rose statistically significantly in the intervention group 3 months after an intervention (M = 31.83 (SD = 4.46) vs. M = 28.52 (SD = 6.60), p = .049) (Zhang et al., ). Taylor et al.  identified self‐empowerment as one of the components. Self‐empowerment was defined, inter alia, as the willingness of patients to change their lifestyle in order to apply self‐management strategies in their daily lives. Six months after an intervention, the intervention group had a statistically significantly higher score in self‐empowerment compared to the control group (M = 49.50 (SD = 5.63) vs. M = 45.79 (SD = 5.85), p = .016). Improved self‐empowerment also affected patients' willingness to make lifestyle changes to apply self‐management strategies in their daily lives (Taylor et al., ). After 9 months, the intervention group demonstrated a higher willingness to make lifestyle changes (M = 3.34 vs. M = 2.83, p = .022) than the control group. Similarly, the intervention group also indicated that they had statistically significantly more information to cope with their illness compared to the control group (M = 3.47 vs. M = 3.03, p = .008). 
No SDs were reported (Taylor et al., ). No statistically significant difference in self‐efficacy between the control group and the intervention group was measured in one randomized controlled study, three and 6 months after intervention (Turner et al., ). Further, no statistically significant difference in self‐efficacy was measured in a single intervention study (Yates et al., ). A statistically significant increase in self‐efficacy was measured 2 months after the single intervention compared to pre‐intervention (M = 3.5 [SD = 0.54] vs. M = 3.7 [SD = 0.35], p = .05) (Reb et al., ). 4.5.3 Symptom management outcomes All involved studies reported symptom‐management outcomes (Chambers et al., ; Chan et al., ; Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). One week after an intervention, the intervention group demonstrated, compared to the control group, statistically significantly more self‐reported pain knowledge (M = 12.1 (SD = 5.4) vs. M = 13.1 (SD = 4.4), p < .01), less side effects of the pain medications (M = 30.0 (SD = 6.0) vs. M = 28.6 (SD = 5.4), p < .05) and a statistically significant increase of the feelings to control their pain (M = 33.5 (SD 5.2) vs. M.34.9 (SD = 5.5), p < .05). Eight weeks after the intervention, patients in the intervention group reported a statistically significant reduction in concerns about pain addictions compared to the control group (M = 28.5 (SD = 8.4) vs. M = 23.0 (SD =7.7), p < .01) and more willingness to tolerate the pain (M = 20.1 (SD = 4.9) vs. M = 18.1 (SD =5.0), p < .01) (Yates et al., ). Depression and anxiety were measured in seven studies (Chambers et al., ; Chan et al., ; Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ). Anxiety was statistically significantly reduced in the intervention group 8 weeks after intervention compared to the control group (Estimated marginal mean (EMM) = − 1.8 (Standard error of the mean [S.E.M.] = 0.4) vs. EMM ‐0.6 (S.E.M. = 0.4), p > .05) (Yates et al., ). Moreover, one intervention statistically significantly decreased anxiety in the intervention group compared to the control group after the intervention (M = 3.50 (SD = 1.82) vs. M = 5.26 (SD = 2.65), p = .011) (Zhang et al., ). However, no statistically significant differences in anxiety and depression were measured in three studies (Chambers et al., ; Taylor et al., ; Turner et al., ). Even if there was no statistically significance 9 months after the intervention, the control group in one study said less prepared to deal with feelings of depression (M = 1.24 vs. M = 0.62, p = .047) (Taylor et al., ). No statistically significant change in depression was measured 4 weeks after the intervention compared to the pre‐intervention baseline (Chan et al., ). Two months after a single management intervention, patients reported a statistically significant decrease in depression (M = 9.9 (SD = 6.1) vs. M = 6.4 (SD = 5.2), p = .001) and anxiety (M = 33.5 (SD = 10.9) vs. M = 28.8 (SD = 9.1), p = .02) compared to pre‐intervention (Reb et al., ). Content components A total of seven components of the studied care‐counselling interventions were identified and are listed in Table . In two studies the goals in the interventions were evaluated (Chan et al., ; Yates et al., ). Self‐management outcomes Self‐efficacy was measured in four randomized controlled trials (Taylor et al., ; Turner et al., ; Yates et al., ; Zhang et al., ) and in one quasi‐experimental study (Reb et al., ). 
DISCUSSION In this systematic review focusing on the effectiveness and content components of nursing counselling interventions on self- and symptom management, seven studies with 829 people with cancer were included. Two RCTs (Taylor et al., ; Zhang et al., ) measured a statistically significant increase in self-efficacy compared to the control group, and one quasi-experimental trial (Reb et al., ) compared to the pre-intervention period. Anxiety was statistically significantly reduced in two RCTs (Yates et al., ; Zhang et al., ) and one quasi-experimental trial (Reb et al., ). The interventions were found to have similar components, such as identifying patients' concerns, setting goals, developing action plans, empowering patients, evaluating the goals and giving patients tailored information. The similar components of the counselling interventions are consistent with the self-management skills of Lorig and Holman, such as problem-solving, decision-making, action planning and self-tailoring. Similar findings were reported in a recently published systematic review focusing on the self-management of older people with cancer, where interventions were also based on self-management core skills such as problem-solving, setting goals and action planning (Haase et al., ). However, setting goals is one thing, evaluating them is another. In our review, only two studies evaluated their goals (Chan et al., ; Yates et al., ). One reason could be the lack of continuity in patient contact, as three studies used single sessions (Chambers et al., ; Reb et al., ; Taylor et al., ). Nevertheless, evaluating patients' goals is a key element for nurses in oncology rehabilitation. On the one hand, goals are needed to work on patients' rehabilitation success, and on the other hand, patient goals also demonstrate the effectiveness of rehabilitation interventions (Crevenna et al., ). Reasons for only demonstrating a tendency towards effectiveness of nursing counselling interventions are wide-ranging. Firstly, the interventions per se differed in their theoretical background, content, duration and dose. No tendency was found in the included studies for more continuous contact with the nurses to increase the effectiveness of the intervention. This outcome was contrary to previous systematic reviews, which showed that continuous care coordinators can improve 81% of patient outcomes, for example quality of life or measured patient experience, if they are frequently engaged with patients (Conway et al., ; Gorin et al., ).
Secondly, although we clearly defined the outcomes at the beginning, the included studies all had several outcomes, which were measured with different outcome instruments at diverse time points. Thirdly, the people with cancer differed in diagnoses and treatment stages. As noted in one included study, the burden of the illness and the resulting patient needs differed according to the cancer diagnosis and the different cancer treatments (Turner et al., ). Considering patients' burden is a major topic in self-management, due to its patient-perceived problem approach (Lorig & Holman, ). Fourthly, even though five studies (Chambers et al., ; Reb et al., ; Taylor et al., ; Turner et al., ; Yates et al., ) provided a patient-tailored care plan, we hypothesized that some patient-related factors were not considered well enough. One of these factors is health literacy. Only one study suggested that patients with lower literacy needed deeper and more targeted support than the planned single intervention (Chambers et al., ). The importance of addressing health literacy in a self-management intervention is documented in a German study, which described that at the end of an inpatient oncology rehabilitation, 56% of all patients reported limited health literacy (Meng et al., ). Health literacy was seen as a key point in the motivation and understanding of health information. Higher health literacy has an impact on patients' work ability, their health status and the improvement of their rehabilitation outcomes (Meng et al., ). Lastly, no tendency was observed for higher-quality studies to produce more statistically significant results. In our opinion, reasons could be the low quality of the included studies, for example a missing sample size calculation (Yates et al., ; Zhang et al., ) and an inability to recruit enough patients according to the sample size calculation (Turner et al., ). The challenge of measuring the effectiveness of a counselling intervention is also attributable to the nature of the intervention. Counselling interventions per se can be viewed as complex due to their permitted flexibility, their variability in outcomes and the amount of interaction between intervention components (Craig et al., ). Moreover, the intervention by Zhang et al. included several components, such as education, emotional support sessions and a progressive muscle relaxation program. All these components influence the effectiveness of the intervention. The evaluation of such complex interventions is, therefore, perceived as challenging, as these interventions are difficult to standardize and their effect can hardly be reduced to linear causal relationships (Bartholomeyczik & Höhmann, ; Minary et al., ). Instead, multiple possible interactions between different intervention components and multiple additional factors (e.g. different cancer diagnoses, different intervention doses and durations, contextual factors) are plausible contributors to the measured effects (Barratt et al., ; Bartholomeyczik & Höhmann, ). Therefore, randomized controlled trials that examine an association between a simple intervention and a primary outcome are often less suitable for complex interventions (Bartholomeyczik & Höhmann, ), although they are considered the gold standard for investigating effectiveness questions (Moore et al., ). One approach is to use a program theory to describe how an intervention is supposed to work (Chen, ). A program theory usually consists of an action and a change model.
The action model describes the intervention itself and its contextual factors (Chen, ). The change model, with its typical outcome chain, demonstrates the assumed cause-and-effect relationship between all outcomes (Funnell & Rogers, ). For instance, by applying a program theory, the study by Zhang et al. could clarify which component of the intervention (education or emotional support) influences patients' self-efficacy. 5.1 Limitations One limitation of the review process is the restriction to three databases, even though these were the most relevant databases for the research question. Further, we did not perform a search in study registers for ongoing studies. Our intention not to perform a meta-analysis was confirmed by the low number of studies and the heterogeneity of the data. Besides the linguistic restriction to English and German, the comparability of results was challenging due to different designs such as RCTs and quasi-experimental studies. A further limitation of the included studies is the rarely documented follow-up of the RCTs and the missing sample-size calculations. Lastly, all included studies were conducted in Australia, China and the United States of America. As nurses' scope of practice and roles differ between countries, generalizability across studies must be treated with caution. CONCLUSIONS The present systematic review was designed to summarize the evidence related to the effectiveness and content components of nurse-led counselling interventions on the self- and symptom management of patients in oncology rehabilitation. The results of this review provide some indication, from two RCTs and one quasi-experimental study, that nurse-led interventions can affect patients' self-management. More research is needed to confirm these indications. RELEVANCE TO CLINICAL PRACTICE The seven similar theory-based components are the key elements of self-management interventions. For clinical practice, it can be recommended to enhance the continuous evaluation of patients' goals for sustainable rehabilitation success. The aforementioned heterogeneity of interventions hampers the formulation of implications for practice. When implementing complex interventions, special attention should be paid to the development phase. An intervention must be theoretically based and fit well in its context. Furthermore, an outcome must be clearly measurable. Further research is needed to develop appropriate approaches to test and subsequently demonstrate the effectiveness of these complex interventions.
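To make the program-theory idea discussed above more concrete (an action model describing the intervention and its context, and a change model describing an assumed outcome chain), the sketch below shows one way such a theory could be written down explicitly. It is an illustrative fragment only: the components, outcomes and links are examples in the spirit of the interventions reviewed here, not an analysis of any included study.

```python
# Illustrative sketch of a program theory: an action model (intervention
# components plus context) and a change model (an assumed outcome chain),
# with explicit links from components to the outcomes they are hypothesised
# to influence. The listed items are examples, not study data.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ActionModel:
    components: list[str]   # e.g. education, emotional support, relaxation
    context: list[str]      # e.g. nurse-led counselling in oncology rehabilitation

@dataclass
class ChangeModel:
    outcome_chain: list[str]  # assumed cause-and-effect sequence of outcomes

@dataclass
class ProgramTheory:
    action: ActionModel
    change: ChangeModel
    links: dict[str, str] = field(default_factory=dict)  # component -> proximal outcome

theory = ProgramTheory(
    action=ActionModel(
        components=["education", "emotional support", "progressive muscle relaxation"],
        context=["nurse-led counselling", "oncology rehabilitation"],
    ),
    change=ChangeModel(
        outcome_chain=["knowledge", "self-efficacy", "self-management behaviour", "symptom burden"],
    ),
    links={"education": "knowledge", "emotional support": "self-efficacy"},
)

for component, outcome in theory.links.items():
    print(f"{component} is hypothesised to act on: {outcome}")
```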
In addition, new designs are needed for evaluating interventions where RCTs are not suitable. One possible method is the previously noted program theory, although its development is still in its infancy and more research is needed to establish it. V.W.B.: conceptualization, methodology, investigation and writing the original draft. C.L.: methodology, formal analysis, investigation and writing the original draft. A.K.: validation, review and editing, and supervision. M.K.: conceptualization, methodology, validation, review and editing, supervision and project administration. The study was supported by the Eastern Switzerland University of Applied Sciences. None. International prospective register of systematic reviews (PROSPERO) registration: CRD42021239437. https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=239437. Appendix 1.
Medical evidence assisting non-fatal strangulation prosecution: a scoping review
Strangulation is the partial or total restriction of the breath and/or blood vessels through pressure to the neck using ligatures or via manual strangulation (eg, hands, arms/chokehold), affecting, among other things, blood flow to the brain and oxygen delivery to the lungs. Non-fatal strangulation (NFS) may cause a range of short-term and long-term physical and mental health issues including loss of or change in voice; difficulty in swallowing or breathing; physical injuries including bruising around the neck; petechial haemorrhages; injury to the brain resulting in unconsciousness, headaches, depression, anxiety and problems with memory and concentration; and has been associated with miscarriage and preterm births. Strangulation can easily be fatal and is a common feature of non-fatal violence against women. Women are 13 times more likely than men to experience this type of assault, and prevalence among European and North American women is estimated to be 3%–9.7%, and between 27% and 68% among women who have experienced intimate partner violence. Globally, NFS is becoming recognised as a serious form of violence. Led by legislation in jurisdictions of the USA, specific offences of NFS have also been introduced in the UK, New Zealand, Canada and most of Australia. In large part, this is due to a better understanding of the prevalence of strangulation as a form of domestic violence alongside its significant health consequences. Notably, NFS is a key marker for escalation of domestic violence, with one study finding it raises the risk of becoming a victim of homicide or very serious future harm by 7.48 times compared with victim–survivors who do not experience NFS. Several reviews have documented the prevalence and type of visible and psychological strangulation injuries. However, there is cumulative evidence that having few or no injuries visible to the naked eye is common following NFS. The implications of this are not only problematic for healthcare, but can make the prosecution of NFS as a result of domestic violence more difficult. This is particularly true in jurisdictions that rely on evidence that the assault occurred beyond the testimony of the complainant and where this evidence can lend additional credibility to the complainant. Where NFS legislation is new or emerging, there has been recognition that reliable and consistent recording of the event, symptoms and injuries is needed for responders to strangulation, particularly where there are no obvious signs of injury. Strong guidance for health workers on observing and recording injuries can support future prosecution, and the consultation may be the only opportunity a patient has for thorough assessment and recording of NFS where domestic violence prevents future contact with the health system. Because of the nature of health-service usage among victim–survivors of domestic violence, signs of strangulation may be encountered across a variety of healthcare settings, including consultations in forensic, emergency, reproductive health and general practice settings. To our knowledge, no reviews have focused on whether and how health practitioners can best assist in the prosecution of NFS through routine consultation and assessment of patients reporting NFS. Therefore, the aim of this scoping review was to provide an overview of activities that can support clinical decision making and referral as well as the prosecution of criminal charges of NFS, particularly when externally visible injuries are absent.
We use the term 'medical evidence' throughout this scoping review to refer to evidence collected, documented and presented by health professionals in the context of a complaint of NFS. In this scoping review, we provide an overview of available research to understand (1) what types of evidence can be routinely collected by medical practitioners when little or no externally visible evidence is observed in patients who have experienced NFS and (2) the types of medical documentation and evidence useful for prosecution of charges of NFS that can also contribute to healthcare. To get a broad understanding of the types of assessment available and how they might be used in court, the review was conducted in the following categories: the types of assessment that reveal evidence of injury that is otherwise not externally visible to the naked eye from an incident of NFS; the types of clinical documentation used to record an NFS incident; and the kind of medical evidence currently used to support the successful prosecution of charges of NFS. Search strategy A literature search was conducted using PubMed, CINAHL, Cochrane, Embase, Medline, Scopus and Social Science Database to find publications from medical practice or health sciences related to NFS and medical imaging of injuries, and the documentation of strangulation. Law Journal Library, Westlaw, Lexis Advance and Worldlii were searched for Australia, the UK and the USA to find relevant publications on the kinds of medical evidence presented to courts in cases prosecuting NFS and analyses of 'what works' in that context. The search strategy was adapted to the requirements of each database and terms included "non-fatal strangulation", "choking", "strangulation", "garrotting" and "throttling", with searches of legal databases also including "medical" and "forensic evidence". See search strategies. Researchers and legal professionals in the field of NFS were contacted and reference lists in review articles examined for relevant articles not found in the searched databases. Inclusion and exclusion criteria To be included in the scoping review, we employed the following criteria relevant to all categories published before 30 June 2021: full-text English language articles; mean age of population >18 years; peer reviewed, published articles; population that had primarily survived a strangulation attempt; and medical investigation of NFS injuries, clinical documentation of NFS or medical evidence related to NFS prosecution. Exclusion criteria relevant to all categories were: where strangulation in the population was primarily a suicide attempt (eg, via hanging), and primarily reporting prevalence of NFS or associated injuries. Separately, for medical assessment of NFS, articles were included where a clear method of injury investigation was reported, the method of investigation could reveal evidence of injury not visible to the naked eye, and the article was an original empirical study. Articles on medical assessment were further excluded from review if they included case studies with fewer than three people. Articles on documentation were included where tools were for clinical settings and focused on an NFS event, rather than broader domestic violence. Medical evidence related to prosecution included any article discussing NFS and medical evidence used in court or used to assist prosecution. Due to the dearth of articles available, review articles were included where they related to ways documentation can be done in clinical settings and medical evidence used in court. Study selection Following removal of duplicates, all articles were assessed by title and abstract by three reviewers using the selection criteria through Rayyan (rayyan.ai). Articles selected for full-text review were agreed on by all reviewers. All articles selected for full-text review were read by the same reviewers. Inclusion to the study was agreed on by all reviewers, with any disagreements settled by a fourth reviewer. Charting and synthesis of data All authors agreed on each article's focus and their respective categories. Charting tables were agreed on and trialled prior to conducting formal searches for this review, concentrating on recording and revealing injuries, as these would be the most likely focus for medical settings and add credibility if a patient wishes to prosecute in future. They were further refined during the extraction process. Data were extracted by the lead author (LSS) from all articles by title, author and publication date. Data for medical investigation of NFS injuries were extracted focused on the type of study, sample size, method of investigation, referral, type of assessor, type of service, method of strangulation and injuries found/revealed. Documentation of injuries was extracted based on the article focus and type of documentation reported for NFS injuries. Medical evidence used in prosecution of NFS was extracted by article focus, jurisdiction, type of medical evidence used and any information regarding utility of evidence in court. Further information was extracted regarding expert testimony and is included in . Extracted data were organised into relevant thematic categories. Patient and public involvement This research involves analysis of existing research and involves no patients or members of the public. The final searches retrieved a total of 3312 articles across 11 databases. Following removal of duplicates, review of abstracts and full-text screening of likely eligible articles, 26 were found to meet the inclusion criteria (see ). However, two articles were found to draw on the same sample, so only the most recent version was included. Thus, 25 articles were included in this review. Seven articles were related to medical assessment of injury from NFS, 8 articles to documentation of NFS and 12 articles to medical evidence in courts. Two articles were used in the review for both documentation of injuries and medical evidence in courts. Medical investigation Study characteristics Studies about medical imaging of NFS injury were composed of four retrospective analyses of medical data from hospitals or other medical facilities and three prospective analyses (see ). Four of the articles identified that survivors of NFS were referred for medical imaging as part of police investigation or by other protective organisations such as child/adult protective services. The remaining three articles did not specify a referral pathway. Presentations of NFS were evaluated in emergency departments, medical centres and by forensic medical examiners/nurses. Imaging investigations of physical injury and clinical symptoms were most frequently carried out by radiologists. One article involved assessment by forensic nurses. Radiological imaging, including MRI and CT, was used in six studies. Of these, four studies used MRI, and one study used CT or MRI. Separately, one article did not use radiological imaging to investigate strangulation injuries, instead using alternative light sources (ALS). See for details about these studies. Across all studies involving medical imaging, there were 959 cases of strangulation that were immediately non-fatal and where a person presented to medical personnel alive with a complaint of being strangled. Of these, 701 received some form of imaging. The six studies that reported the methods of NFS showed that it was primarily manual, using one or two hands (79%); 11% used ligatures and 6% were chokeholds, and the remainder used a combination of methods or the victim was unsure of the method. Studies showed a gender disparity in strangulations, with women recorded as the primary victim–survivors in 87% (831) of cases. Where reported, it appeared that many of these strangulations were the result of assaults from intimate partners. While we do not report in detail externally visible or subjective complaints of injuries, as they have been investigated in other reviews, overall subjective complaints and clinical symptoms were frequent and varied across the studies. Only two studies reported the absence of subjective complaints, showing that 17% of those 463 survivors reported no subjective symptoms following NFS. Thus, the majority (83%) of strangulation survivors had some reported symptoms including neck pain, loss of consciousness and difficulty swallowing. On the other hand, absence of external injuries ranged from 17% to 93%, with an average of 44% of NFS survivors having no externally visible injury. One of these studies reported 17% of NFS presentations as having neither subjective complaints nor physically visible symptoms. The most common injury reported across studies was bruising/haematoma related to the neck or face. Where injuries were recorded in two or more studies, on average the following injuries were present: neck redness/bruising 55%; abrasions 41%; neck tenderness 37%; petechiae 9%, found on the neck but also in the eyes, on the gumline and behind the ears; swelling 5%; subconjunctival haemorrhage 4%; and ligature marks 2%. records the number of visible physical injuries and subjective complaints recorded through routine clinical assessment across the included studies. Imaging Survivors who received CT scans showed visible injuries in 77% of cases. However, only 8% of all cases examined found evidence of injury. CT of the neck generally did not provide further information than usual clinical investigation related to injury visibility (see ). Comparatively, MRI of the neck and/or head found relevant injuries of the assault in at least 52% of the NFS cases examined (see ). MRI was at times able to detect injuries when no corresponding external injury was visible. One study found 39% of cases with evidence of injury in the absence of other significant clinical findings through MRI. Although MRI was able to find internal injuries in approximately half of the cases examined across all studies, there was no clear pattern of symptoms related to radiological findings outside of neck pain, which was a common subjective complaint across these imaging studies. Only one study used imaging other than radiological investigation, using an ALS. An ALS emits ultraviolet, visible and infrared wavelengths through a powerful lamp, enhancing the visibility of some evidence. This study showed ALS detected evidence of intradermal injury in 98% of strangulation survivors who had no externally visible injuries in clinical examination. This imaging was able to be produced by forensic nurses and showed the sensitivity to detect patterns in some injuries, for example, a shoe print. Injuries revealed through ALS were able to be photographed. Documentation of NFS Study characteristics All eight articles that discussed clinical documentation and disclosure of NFS were focused on clinical care (see ). The majority of articles were reviews focused on clinical practice care and evaluation of patients. However, two involved retrospective analysis of the mechanisms and prevalence of injuries (including subjective) found during medicolegal examination.
One article also included case presentation examples of the tool being used. All articles were relatively consistent in their general recommendations and approaches to documenting evidence. Overall, they provided a comprehensive strategy for evaluating NFS injuries and symptoms, and most were conscious of the utility of this documentation as evidence for prosecution of criminal charges of NFS. Documentation tools The use of available documentation tools, such as those created by Faugno et al and the Training Institute on Strangulation Prevention, was a primary strategy for recording information about NFS. These documentation tools were specialised for physical examination, providing a body map to indicate injuries, a checklist for physical examination and injuries of concern, and a thorough record of strangulation history including details of the strangulation incident. Thorough questioning regarding strangulation history was recommended by all articles, alongside some specific recommendations regarding quoting victim–survivors' verbatim accounts of the incident and recording their demeanour and emotional/mental status. Taking this type of documentation was claimed to greatly assist appropriate health and legal intervention. Questions and quotations As NFS injuries may be minimal or absent, providing clear documentation of the survivor's experience of the strangulation event was discussed as supportive of the prosecution of an NFS offence. Questions that could be asked by health professionals might include 'What were they saying to you as you were strangled?' and 'What did you think was going to happen?'. It was observed that, where possible, answers to questions posed by health professionals should be recorded in quotation marks to assist with communicating observations to prosecution services and provide corroboration for survivors' statements of events to police and others, including social workers. Although quotations are unlikely to assist with further clinical assessment, they were described as important for prosecutors in preparing the brief of evidence to support the prosecution, particularly where physical evidence is absent or minimal. For example, direct quotations from the victim–survivor can be useful in discrediting claims made by an accused person that the complainant consented to the NFS or that the accused was acting in self-defence, and generally bolster the complainant's credibility about the circumstances of the offending, particularly where there are other documented injuries consistent with their account of the incident. Photographs Some studies advocated that photographs should be taken where there are external injuries present and visible. Funk and Schuppel recommended four types of photographs: (1) a distance photograph that shows the person's full body to identify them and the location of any injuries; (2) close-up photographs of injuries from different angles, with each angle taken both with and without a ruler placed by the injury; (3) follow-up photographs of injuries across different time intervals to show them as they change over time and to document if any new injuries appear that may not have been present immediately after the event; and finally (4) a photograph of the survivor demonstrating how they were strangled.
Patients can be asked how the strangulation took place (with one or two hands, a forearm, or the use of a ligature, etc, and which arm/hand or what kind of ligature), or a mannequin can be used to assist with demonstration and documentation. Medical evidence in court Study characteristics Twelve articles were found to meet our criteria for medical evidence related to the prosecution of NFS (see ). Five of these were reviews: two focused on prosecution, one broadly reviewed medical and legal research on NFS, one focused on forensic pathology and medicolegal investigations and one focused on best practice for healthcare providers. Five articles were focused on providing recommendations and strategies for prosecuting NFS. Lastly, two articles were retrospective analyses of case criminal legal issues and adjudication decisions of NFS. All articles primarily reviewed studies and cases from the USA, with the exception of four articles that focused on evidence from Australasia, Canada and the UK. The value of evidence It was apparent across all articles that medical professionals' recording of strangulation symptoms, injuries and statements was vital to evidence gathering for prosecution. Evidence recommended as useful for prosecutors included any diagnostic testing; photographic images and medical records of any visible injuries such as contusions, scratches, ligature marks or defensive wounds related to the assault; and records of other clinical symptoms related to the assault, such as neck pain, loss of consciousness or incontinence. Strack et al described medical observations as more 'robust' for prosecution evidence gathering than the same observations recorded by law enforcement. Importantly, a lack of external injury was discussed across all articles, with three remarking that the argument that absence of injury is consistent with the occurrence of strangulation is ambiguous, or potentially misleading to the court. That is, although absence of external injury may be consistent with strangulation, it is also consistent with not experiencing NFS. On the other hand, the use of coordinated evidence collection using questioning provided broader corroborating evidence that did not rely solely on the presence of external injury. The overall quality of medical evidence was discussed as a central factor in prosecuting NFS cases. This was reiterated by data finding that cases were 40% more likely to be filed when NFS victim–survivors were examined using procedural collection of evidence through forensic nurses, compared with cases where a forensic examination did not take place.
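The documentation elements summarised in these results (a recorded strangulation history, the method used, symptoms, visible findings, verbatim quotations and photographs) lend themselves to structured capture in clinical systems. The following sketch is a hypothetical Python schema for such a record; it is illustrative only and does not reproduce the tools by Faugno et al or the Training Institute on Strangulation Prevention referred to above.

```python
# Hypothetical, illustrative schema only: not an existing clinical tool, but a
# sketch of how the documentation elements described above (incident history,
# method, symptoms, visible findings, verbatim quotations, photographs) could
# be captured as a structured record in a clinical information system.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NFSDocumentationRecord:
    assessed_at: datetime
    incident_history: str                       # patient's account of the event
    method: str                                 # e.g. "one hand", "two hands", "ligature", "chokehold"
    symptoms: list[str] = field(default_factory=list)          # e.g. neck pain, voice change, loss of consciousness
    visible_findings: list[str] = field(default_factory=list)  # e.g. bruising, petechiae, ligature marks
    verbatim_quotes: list[str] = field(default_factory=list)   # recorded in the patient's own words
    photographs: list[str] = field(default_factory=list)       # references to distance/close-up/follow-up images
    imaging_requested: str | None = None        # e.g. "CT", "MRI", "ALS", or None

record = NFSDocumentationRecord(
    assessed_at=datetime.now(),
    incident_history="Reports being strangled with two hands during an assault.",
    method="two hands",
    symptoms=["neck pain", "difficulty swallowing"],
    visible_findings=[],                        # an empty list records that no visible injury was observed
    verbatim_quotes=['"I thought I was going to die."'],
)
print(record)
```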
Separately, one article did not use radiological imaging to investigate strangulation injuries, instead using alternative light sources (ALS). See for details about these studies. Across all studies involving medical imaging, there were 959 cases of strangulation that were immediately non-fatal and where a person presented to medical personnel alive with a complaint of being strangled. Of these, 701 received some form of imaging. The six studies that reported the methods of NFS showed that it was primarily manual, using one or two hands (79%), 11% used ligatures and 6% were chokeholds, the remainder used a combination of methods or the victim was unsure of the method. Studies showed a gender disparity in strangulations where women were recorded as the primary victim–survivors in 87% (831) of cases. Where reported, it appeared that many of these strangulations were the result of assaults from intimate partners. While we do not report in detail externally visible or subjective complaints of injuries as they have been investigated in other reviews, overall subjective complaints and clinical symptoms were frequent and varied across the studies. Only two studies reported the absence of subjective complaints showing that 17% of those 463 survivors reported no subjective symptoms following NFS. Thus, the majority (83%) of strangulation survivors had some reported symptoms including neck pain, loss of consciousness and difficulty swallowing. On the other hand, absence of external injuries ranged from 17% to 93%, with the average being 44% of NFS survivors with no externally visible evidence of external injury. One of these studies reported 17% of NFS presentations as having neither subjective complaints nor physically visible symptoms. The most common injury reported across studies was bruising/haematoma related to the neck or face. Where injuries were recorded in two or more studies, on average the following injuries were present: neck redness/bruising 55%; abrasions 41%; neck tenderness 37%; petechiae 9% found on the neck, but also found in the eyes, on the gumline and behind the ears ; swelling 5%; subconjunctival haemorrhage 4%; and ligature marks 2%. records the number of visible physical injuries and subjective complaints recorded through routine clinical assessment across the included studies. Imaging Survivors who received CT scans showed visible injuries in 77% of cases. However, only 8% of all cases examined found evidence of injury. CT of the neck generally did not provide further information than usual clinical investigation related to injury visibility (see ). Comparatively, MRI of the neck and/or head found relevant injuries of the assault in at least 52% of the NFS cases examined (see ). MRI was at times able to detect injuries when no corresponding external injury was visible. One study found 39% of cases with evidence of injury in the absence of other significant clinical findings through MRI. Although MRI was able to find internal injuries in approximately half of the cases examined across all studies, there was no clear pattern of symptoms that were related to radiological findings outside of neck pain, which was a common subjective complaint across these imaging studies. Only one study used imaging other than radiological investigation using an ALS. An ALS emits ultraviolent, visible and infrared wavelengths through a powerful lamp enhancing the visibility of some evidence. 
This study showed ALS detected evidence of intradermal injury in 98% of strangulation survivors that had no externally visible injuries in clinical examination. This imaging was able to be produced by forensic nurses and showed the sensitivity to detect patterns in some injuries, for example, a shoe print. Injuries revealed through ALS were able to be photographed. Studies about medical imaging of NFS injury were composed of four retrospective analyses of medical data from hospitals or other medical facilities and three prospective analyses, (see ). Four of the articles identified that survivors of NFS were referred for medical imaging as part of police investigation or by other protective organisations such as child/adult protective services. The remaining three articles did not specify a referral pathway. Presentations of NFS were evaluated in emergency departments, medical centres and by forensic medical examiners/nurses. Imaging investigations of physical injury and clinical symptoms were most frequently carried out by radiologists. One article involved assessment by forensic nurses. Radiological imaging, including MRI and CT, was used in six studies. Of these, four studies used MRI, and one study used CT or MRI. Separately, one article did not use radiological imaging to investigate strangulation injuries, instead using alternative light sources (ALS). See for details about these studies. Across all studies involving medical imaging, there were 959 cases of strangulation that were immediately non-fatal and where a person presented to medical personnel alive with a complaint of being strangled. Of these, 701 received some form of imaging. The six studies that reported the methods of NFS showed that it was primarily manual, using one or two hands (79%), 11% used ligatures and 6% were chokeholds, the remainder used a combination of methods or the victim was unsure of the method. Studies showed a gender disparity in strangulations where women were recorded as the primary victim–survivors in 87% (831) of cases. Where reported, it appeared that many of these strangulations were the result of assaults from intimate partners. While we do not report in detail externally visible or subjective complaints of injuries as they have been investigated in other reviews, overall subjective complaints and clinical symptoms were frequent and varied across the studies. Only two studies reported the absence of subjective complaints showing that 17% of those 463 survivors reported no subjective symptoms following NFS. Thus, the majority (83%) of strangulation survivors had some reported symptoms including neck pain, loss of consciousness and difficulty swallowing. On the other hand, absence of external injuries ranged from 17% to 93%, with the average being 44% of NFS survivors with no externally visible evidence of external injury. One of these studies reported 17% of NFS presentations as having neither subjective complaints nor physically visible symptoms. The most common injury reported across studies was bruising/haematoma related to the neck or face. Where injuries were recorded in two or more studies, on average the following injuries were present: neck redness/bruising 55%; abrasions 41%; neck tenderness 37%; petechiae 9% found on the neck, but also found in the eyes, on the gumline and behind the ears ; swelling 5%; subconjunctival haemorrhage 4%; and ligature marks 2%. 
records the number of visible physical injuries and subjective complaints recorded through routine clinical assessment across the included studies. Survivors who received CT scans showed visible injuries in 77% of cases. However, only 8% of all cases examined found evidence of injury. CT of the neck generally did not provide further information than usual clinical investigation related to injury visibility (see ). Comparatively, MRI of the neck and/or head found relevant injuries of the assault in at least 52% of the NFS cases examined (see ). MRI was at times able to detect injuries when no corresponding external injury was visible. One study found 39% of cases with evidence of injury in the absence of other significant clinical findings through MRI. Although MRI was able to find internal injuries in approximately half of the cases examined across all studies, there was no clear pattern of symptoms that were related to radiological findings outside of neck pain, which was a common subjective complaint across these imaging studies. Only one study used imaging other than radiological investigation using an ALS. An ALS emits ultraviolent, visible and infrared wavelengths through a powerful lamp enhancing the visibility of some evidence. This study showed ALS detected evidence of intradermal injury in 98% of strangulation survivors that had no externally visible injuries in clinical examination. This imaging was able to be produced by forensic nurses and showed the sensitivity to detect patterns in some injuries, for example, a shoe print. Injuries revealed through ALS were able to be photographed. Study characteristics All eight articles that discussed clinical documentation and disclosure of NFS were focused on clinical care (see ). The majority of articles were reviews focused on clinical practice care and evaluation of patients. However, two involved retrospective analysis of the mechanisms and prevalence of injuries (including subjective) found during medicolegal examination. One article also included case presentation examples of the tool being used. All articles were relatively consistent in their general recommendations and approaches to documenting evidence. Overall, they provided a comprehensive strategy for evaluating NFS injuries and symptoms and most were conscious of the utility of this documentation for evidence for prosecution of criminal charges of NFS. 10.1136/bmjopen-2023-072077.supp2 Supplementary data Documentation tools The use of available documentation tools such as those created by Faugno et al and the Training Institute on Strangulation Prevention were primary strategies for recording information about NFS. These documentation tools were specialised for physical examination providing a body map to indicate injuries, a checklist for physical examination and injuries of concern, and providing a thorough record of strangulation history including details of the strangulation incident. Thorough questioning regarding strangulation history was recommended by all articles alongside some specific recommendations regarding quoting victim–survivors’ verbatim accounts of the incident, and recording their demeanour and emotional/mental status. Taking this type of documentation was claimed to greatly assist appropriate health and legal intervention. Questions and quotations As NFS injuries may be minimal or absent, providing clear documentation of the survivor’s experience of the strangulation event was discussed as supportive of the prosecution of an NFS offence. 
Questions that could be asked by health professionals might include ‘What were they saying to you as you were strangled?’ and ‘What did you think was going to happen?’. It was observed that, where possible, answers to questions posed by health professionals should be recorded in quotation marks to assist with communicating observations to prosecution services and provide corroboration for survivors’ statements of events to police and others, including social workers. Although quotations are unlikely to assist with further clinical assessment, they were described as important for prosecutors in preparing the brief of evidence to support the prosecution, particularly where physical evidence is absent or minimal. For example, direct quotations from the victim–survivor can be useful in discrediting claims made by an accused person that the complainant consented to the NFS or that the accused was acting in self-defence and generally bolster the complainant’s credibility about the circumstances of the offending, particularly where there are other documented injuries consistent with their account of the incident. Photographs Some studies advocated that photographs should be taken where there are external injuries present and visible. Funk and Schuppel recommended four types of photographs: (1) a distance photograph that shows the person’s full body to identify them and the location of any injuries; (2) close up photographs of injuries from different angles and with each angle taken both with and without a ruler placed by the injury; (3) follow-up photographs of injuries across different time intervals to show them as they change over time and to document if any new injuries appear that may not have been present immediately after the event and finally (4) it was recommended to take a photograph of the survivor demonstrating how they were strangled. Patients can be asked how the strangulation took place, with one or two hands, forearm, or the use of a ligature etc, which arm/hand/what kind of ligature, or use a mannequin to assist with demonstration and documentation. All eight articles that discussed clinical documentation and disclosure of NFS were focused on clinical care (see ). The majority of articles were reviews focused on clinical practice care and evaluation of patients. However, two involved retrospective analysis of the mechanisms and prevalence of injuries (including subjective) found during medicolegal examination. One article also included case presentation examples of the tool being used. All articles were relatively consistent in their general recommendations and approaches to documenting evidence. Overall, they provided a comprehensive strategy for evaluating NFS injuries and symptoms and most were conscious of the utility of this documentation for evidence for prosecution of criminal charges of NFS. 10.1136/bmjopen-2023-072077.supp2 Supplementary data The use of available documentation tools such as those created by Faugno et al and the Training Institute on Strangulation Prevention were primary strategies for recording information about NFS. These documentation tools were specialised for physical examination providing a body map to indicate injuries, a checklist for physical examination and injuries of concern, and providing a thorough record of strangulation history including details of the strangulation incident. 
Thorough questioning regarding strangulation history was recommended by all articles alongside some specific recommendations regarding quoting victim–survivors’ verbatim accounts of the incident, and recording their demeanour and emotional/mental status. Taking this type of documentation was claimed to greatly assist appropriate health and legal intervention. As NFS injuries may be minimal or absent, providing clear documentation of the survivor’s experience of the strangulation event was discussed as supportive of the prosecution of an NFS offence. Questions that could be asked by health professionals might include ‘What were they saying to you as you were strangled?’ and ‘What did you think was going to happen?’. It was observed that, where possible, answers to questions posed by health professionals should be recorded in quotation marks to assist with communicating observations to prosecution services and provide corroboration for survivors’ statements of events to police and others, including social workers. Although quotations are unlikely to assist with further clinical assessment, they were described as important for prosecutors in preparing the brief of evidence to support the prosecution, particularly where physical evidence is absent or minimal. For example, direct quotations from the victim–survivor can be useful in discrediting claims made by an accused person that the complainant consented to the NFS or that the accused was acting in self-defence and generally bolster the complainant’s credibility about the circumstances of the offending, particularly where there are other documented injuries consistent with their account of the incident. Some studies advocated that photographs should be taken where there are external injuries present and visible. Funk and Schuppel recommended four types of photographs: (1) a distance photograph that shows the person’s full body to identify them and the location of any injuries; (2) close up photographs of injuries from different angles and with each angle taken both with and without a ruler placed by the injury; (3) follow-up photographs of injuries across different time intervals to show them as they change over time and to document if any new injuries appear that may not have been present immediately after the event and finally (4) it was recommended to take a photograph of the survivor demonstrating how they were strangled. Patients can be asked how the strangulation took place, with one or two hands, forearm, or the use of a ligature etc, which arm/hand/what kind of ligature, or use a mannequin to assist with demonstration and documentation. Study characteristics Twelve articles were found to meet our criteria for medical evidence related to the prosecution of NFS (see ). Five of these were reviews, with two focused on prosecution, one broadly reviewed medical and legal research on NFS, one focused on forensic pathology and medicolegal investigations and one focused on best practice for healthcare providers. Five articles were focused on providing recommendations and strategies for prosecuting NFS. Lastly, two articles were retrospective analyses of case criminal legal issues and adjudication decisions of NFS. All articles primarily reviewed studies and cases from the USA, with the exception of four articles that focused on evidence from Australasia, Canada and the UK. 
The value of evidence
Across all articles, it was apparent that medical professionals’ recording of strangulation symptoms, injuries and statements was vital to evidence gathering for prosecution. Evidence recommended as useful for prosecutors included any diagnostic testing; photographic images and medical records of any visible injuries such as contusions, scratches, ligature marks or defensive wounds related to the assault; and records of other clinical symptoms related to the assault, such as neck pain, loss of consciousness or incontinence. Strack et al described medical observations as more ‘robust’ for prosecution evidence gathering than the same observations recorded by law enforcement. Importantly, a lack of external injury was discussed across all articles, with three remarking that the argument that absence of injury is consistent with the occurrence of strangulation is ambiguous, or potentially misleading, to the court. That is, although absence of external injury may be consistent with strangulation, it is also consistent with not experiencing NFS. On the other hand, the use of coordinated evidence collection using questioning provided broader corroborating evidence that did not rely solely on the presence of external injury. The overall quality of medical evidence was discussed as a central factor in prosecuting NFS cases. This was reinforced by data showing that cases were 40% more likely to be filed when NFS victim–survivors were examined using procedural collection of evidence by forensic nurses, compared with cases where a forensic examination did not take place.
This scoping review aimed to provide an overview of whether and how health professionals can support the prosecution of criminal charges of NFS through routine practice, particularly when injuries are not visible to the naked eye. Overall, it was clear that medical professionals have a range of investigative tools with differing sensitivity available to reveal and record evidence of NFS and assist clinical investigation. Although many victim–survivors may not wish to proceed with a prosecution when they initially present in a healthcare setting, they may choose to proceed at some future time. Ensuring that NFS is well-documented empowers victim–survivors to make the choice to proceed. A lack of documentation, on the other hand, may limit opportunities for potential future legal pathways. Importantly, victim–survivors’ decision-making processes are not always linear and can be influenced by their own changing circumstances as well as system-related factors such as delay and/or available support. The use of these tools in clinical settings was considered important to progress the prosecution of NFS offences across jurisdictions, providing evaluations over and above those provided by police. Our review revealed the following techniques that can be built into regular practice and can assist with clinical and judicial outcomes: (1) standardised documentation procedures using clinical charts such as those developed by the Strangulation Training Institute; (2) photographs of patient injuries, with follow-up if new injuries develop, taking into consideration lighting and patient skin tone; and (3) referral for appropriate imaging to reveal signs of injury not visible to the naked eye that may be clinically relevant.
Revealing injuries
Although CT is the recommended pathway for the detection of vascular injuries, few studies reported on the results of CT in the context of strangulation, and evidence on its ability to reveal injuries compared with MRI was limited among the included articles. It is possible that because MRI provides superior detection of soft tissue and ligamentous injuries in the context of strangulation, fewer studies utilised CT in their investigations. Importantly, however, in everyday contexts MRI can be costly and difficult to access, with medical limitations to it being performed, including the obstruction of metal objects in a person (eg, piercings, medical devices), inability to have an MRI with contrast, confined-space anxiety that may be particularly relevant to this patient group, and weight/size limitations. Thus, CT continues to be an appropriate pathway if there are clinical indications for imaging. The use of ALS appeared to be the most consistent method for revealing injuries that were otherwise invisible to the naked eye, though only one study investigated this method.
If further investigations of this method support its effectiveness at revealing injuries, use of ALS is likely to be resource, time and cost-efficient, and able to provide indications for further assessment, including diagnostic imaging. Critically, ALS could be more likely to find evidence of intradermal injury not visible under normal light that can then be photographed, with no differences in injury detection dependent on age or skin tone, which are otherwise susceptible to bias in photography under normal light. Unfortunately, no studies have explored the use of ALS in the prosecution of NFS offences, and its utility may be more likely in forensic settings than in broader clinical contexts. However, the use of ALS to document injuries is likely to provide important evidence in a criminal trial of NFS.
Documenting injuries
Taking a patient’s history of events should be done using standardised documentation tools specialised for strangulation, with priority placed on recording the patient’s exact words in response to questions. Recording quotations will provide documentation that can assist in proving the alleged offender’s intent to hinder the victim’s breath or blood flow, which is relevant in some jurisdictions. As many victim–survivors report fear that they felt they were going to die and often report death threats, asking questions about what survivors were thinking during the assault and what the perpetrator may have said or threatened could elicit important evidence. Several documentation tools have been developed, particularly in the USA and Canada, with the most up-to-date and well-developed of these created by the Training Institute on Strangulation Prevention (www.strangulationtraininginstitute.com). Using standardised documentation tools in measuring and recording injuries should produce evidence that is of better quality for criminal trials and greater confidence in detecting signs and symptoms of injuries in health settings that does not rely on individual practitioner knowledge. Further, this documentation may alleviate some patients’ reported difficulties in accessing health workers who understand the potential severity of NFS and in receiving referrals for scans or social work support. Clinical evaluation may be difficult if a survivor has any memory problems or if there is little physical evidence of strangulation, and it may take several hours for serious internal injuries to be found. Because of the potential delay in the presentation of injuries, patients may need to be admitted for observation for 24–36 hours and monitored for signs that may lead to delayed death. Information such as whether the patient lost consciousness or whether they lost control of their bladder and/or bowels will provide vital clinical indicators of NFS and can also be important evidence for a criminal trial. Importantly, if memory problems are present, a survivor’s inability to recall specific events when evaluated may produce a deceptively low number of clinical symptoms. If memory problems are noted, it is important to remember that evidence of memory difficulties is not inconsistent with an NFS assault. Identification of injuries may present further challenges where a person has a darker skin tone, as injuries such as bruising may not be as visible.
While we do not know the specific implications that skin tone may have for the identification of strangulation injuries, it is vital to consider this when making assessments of people with darker complexions, and to consider the utility of ALS, if it is available, to assist with the visibility of those injuries. Regardless of access to ALS, any visible injuries should be photographed using a camera with high resolution and good lighting to increase the likelihood that injuries will be captured.
The utility of medical evidence for prosecution
Overall, evidence gathering as part of routine medical assessment of NFS can lead to increased numbers of prosecutions of NFS and a higher likelihood of successful prosecution. Corroborative medical evidence of NFS can rebut the accused’s claim that the NFS was carried out by accident or in self-defence. Furthermore, studies have identified that for survivors who have experienced trauma, giving evidence in court proceedings can be experienced as a form of secondary victimisation, as they must relive the experience all over again in a context where their version of events is challenged. The more corroborative evidence available to the prosecution, the more likely the accused is to plead guilty, avoiding the need for the survivor to testify. Furthermore, for a range of reasons, it is common for survivors of NFS and other family violence-related offences to withdraw their support for prosecution. The presence of other forms of evidence, beyond the testimony of the complainant/survivor, may result in some NFS prosecutions proceeding despite the absence of the complainant/survivor’s testimony. In the context of rape jury trials, research suggests that some jury members expect to see medical or scientific evidence in the course of the trial, though there is currently no research about this issue regarding NFS jury trials.
Risks of bias
While this research presents a scoping review and does not include a formal risk of bias assessment, there were nonetheless clear avenues for the introduction of bias. First, the evidence for medical imaging largely involved studies with retrospective review of NFS cases presenting in medical settings. Unlike prospective studies, these studies were uncontrolled and were, therefore, less likely to have any consistent protocols for assessing and documenting injuries from strangulation and for determining whether a person was eligible for imaging, excluding that of Bruguier et al. This may have resulted in more cases where injuries were already visible receiving imaging, inflating the number of NFS injuries identified through MRI or CT scans. Further, exercise of discretion for referral was identified as a problem even when decision guidelines were in place regarding imaging. For example, Bruguier et al showed that despite MRI eligibility criteria for NFS symptoms and injuries, only 11 of the 112 survivors over a 4-year period received an MRI following clinical assessment. These referral biases, particularly in retrospective studies, may lead to inflated claims of the effectiveness of MRI in detecting NFS injuries, particularly among those who have no externally visible symptoms. Future research examining internal injuries from NFS should focus on prospective reviews of NFS investigation, particularly where evidence is limited but promising, such as for ALS.
Strengths and limitations
Aside from biases, a considerable limitation of this review was the lack of available literature reviewing CT scans for strangulation injuries.
Although CT angiography is the current ‘gold standard’ for detecting vascular injuries, there was a surprising lack of articles available for review in this area. This may have been due to the language restrictions used in this review, which may have excluded relevant research conducted and published in other languages. Notably, several Russian articles whose titles and abstracts could be screened in English may have been relevant to this review. Future reviews should, therefore, consider including Russian-language articles, among others, if possible. Another limitation of the available evidence was that it consisted primarily of qualitative assessments from authors, as experienced clinicians and prosecutors, describing the utility of documentation tools in court cases, the use of direct quotations in notes, and the importance of photographs taken by health professionals in underpinning prosecutions of NFS. More research is needed in this area to confirm the most robust clinical documentation tool, one that assists with the care of the patient and, separately, has utility as evidence in the prosecution of NFS. It is possible that further research on documentation tools’ efficacy in prosecution and further detail on the investigation of internal injuries from NFS are available in other languages that could not be assessed within the bounds of this scoping review. Despite these limitations, this review provides a first, comprehensive review of the literature to guide clinicians regarding clinical decision making, referral and the criminal prosecution of NFS, when the victim–survivor wishes to take that path.
Medical personnel will often be the first point of disclosure for NFS by victim–survivors of domestic violence, who are otherwise often referred by police for medical attention in the aftermath of NFS. Therefore, it is essential that medical responses to NFS include consistent investigation and documentation of internal injuries, the experience of the assault, and the signs and symptoms resulting from NFS. These records can assist with clinical referrals and provide additional corroborating evidence of the assault, supporting victim–survivors who choose to engage with legal pathways in the future.
Social Mycology: Using Social Media Networks in the Management of Aspergillosis and Other Mycoses
4dd121f1-399e-410f-80b9-4c6d46272027
10078039
Microbiology[mh]
Aspergillosis remains a significant challenge, affecting an estimated 42 per 100,000 of the population globally. Diagnosis and treatment remain difficult, owing to a lack of awareness, limited access to diagnostics and the limitations of the diagnostics themselves, which often result in delayed diagnosis. Even when diagnosed, management is not straightforward. The disease itself manifests as a broad spectrum of clinical syndromes, from Chronic Pulmonary Aspergillosis (CPA) to acute and subacute invasive aspergillosis (IA) as well as allergic bronchopulmonary aspergillosis (ABPA). As a result, management may consist of surveillance alone or antifungal therapy, through to thoracic surgery and even management on an intensive care unit. Consequently, this necessitates a multi-disciplinary approach to management, which may include microbiologists or mycologists, Infectious Diseases physicians, respiratory physicians (pulmonologists), radiologists, and intensive care specialists. Additionally, many of the most severe cases occur in those who are profoundly immune suppressed, such as those undergoing haematopoietic stem cell transplantation (HSCT). These patients will be under the primary care of a haemato-oncologist, who will be the decision maker in the patient’s treatment. Multidisciplinary Team (MDT) working is not a new concept in medicine and is the bedrock of a large proportion of clinical practice. However, this usually takes place within a single institution. In recent years there has been an explosion in the use of social media by healthcare professionals worldwide to communicate, which includes case discussion. Additionally, the platform can be used for education, professional networking, advocacy and engagement. This article considers the advantages and pitfalls of using social media as a forum for managing complex infections such as aspergillosis and other mycoses.
Social Media and Infectious Disease Medicine
Social networks and social media have become an integral part of twenty-first century life and culture. One of the foremost networks is Twitter, founded in 2006, which is a free-to-access social media network based on “microblogging”: posting notes known as “tweets”, which are limited to 280 characters. It has proved highly popular worldwide, with nearly 400 million active users. The infectious disease community, including physicians, pharmacists, nurses, academics and clinicians, has embraced this platform to communicate professionally. In 2015 there were already an estimated 75,000 healthcare practitioners (HCP) using the platform. Since the COVID-19 pandemic, public interest in infectious diseases has never been higher, with an explosion of engagement on Twitter to disseminate news, share journal articles, and discuss every aspect of the pandemic. Aspergillosis is a neglected disease. Funding for research related to the pathogenesis, diagnosis and management of aspergillosis, and fungal infections in general, is a fraction of that of other infectious diseases. In lower income settings, many laboratories do not have dedicated facilities for diagnosing aspergillosis and lack health personnel specifically trained in the management of fungal diseases. This makes a readily accessible, free-to-access, global network of professionals through sites such as Twitter particularly attractive for the management of relatively neglected infections such as aspergillosis.
Twitter can be used in several ways to aid the management of aspergillosis; these can be categorised as education, research networking, case discussions/MDTs, and public awareness and patient engagement. Aspergillosis and fungal infections in general have historically been poorly covered in medical school curricula, resulting in a lack of expertise and knowledge even amongst infection and respiratory specialists. Education and awareness are therefore a fundamental starting point for ultimately providing optimal management for these patients. Twitter provides numerous and unique educational opportunities which have been, and can be further, exploited to enhance education on aspergillosis. These include the following.
Journal Clubs
Journal clubs are a long-established medical educational tool in which a group discusses a journal article relevant to their practice. Twitter can be used to host “virtual journal clubs”, where a paper is discussed online with participants from across the globe. Often the journal authors themselves can participate in the discussion. A dedicated infectious diseases journal club, #IDJClub, was established in late 2019, attracting participation of not only infection specialist doctors but also pharmacists, nurses, and other allied health professionals. Mycology topics have been included, and the Twitter account has followers from 114 countries, a global reach which would not be possible with traditional, intradepartmental journal clubs. Additionally, a Twitter-based discussion forum specifically on mycology topics, run by the Mycosis Study Group (MSG) and known as “OpenMyc”, has been launched.
Tweetorials
“Tweetorial” is a portmanteau of “tweet” and “tutorial”.
It is an educational tool consisting of a thread (a series of tweets) that uses concise summaries and visuals to educate about a particular topic. Tweetorials have become widely used in medical education. Several aspergillosis-related tweetorials have proved highly popular. The mix of imaging, question and answer, polls and the open forum allows for an accessible, time-efficient method of education. Another successful example is a review of data on different azole antifungals for the treatment of aspergillosis in a concise, clear format. Additionally, dedicated mycology teaching and courses, including on the diagnosis and management of aspergillosis, have a presence on Twitter, for example LIFE Worldwide @LifeWorldwide.
Papers and Research Articles
Peer-reviewed journal articles remain the bedrock of disseminating research and practice updates in any field, and mycology is no exception. Social media such as Twitter allows this to occur much faster, with the authors themselves posting links to the paper as well as presenting the data. The open access nature of Twitter allows instant interaction with the authors to ask questions and seek clarifications in real time when novel research or clinical guidelines are produced. Beyond research articles, sharing of case reports on Twitter allows for up-to-the-minute surveillance of new trends in fungal infection—for example, some of the first clinical reports of what came to be known as Covid Associated Pulmonary Aspergillosis (CAPA) appeared on Twitter as early as April 2020. These must be viewed with caution, however, as pre-prints of articles which are not yet peer reviewed and may be flawed can have significant impact when shared on social media.
The global reach and sheer number of Twitter users provide a greater platform than ever before to disseminate information on courses, meetings, clinical networks, and professional societies. The following medical mycology organisations have a Twitter presence: They do, however, lag significantly behind larger infection organisations such as the Infectious Diseases Society of America @IDSAinfo, which has nearly 70,000 followers, reflecting the relatively smaller niche mycology continues to occupy in the infectious diseases space. Encouragingly, mycology societies in lower income settings, which historically have had less opportunity and resource to participate in clinical and research efforts to combat aspergillosis, have been able to harness Twitter to network and host events, for example in West Africa, including the Medical Mycology Society of Nigeria—@MMSNIG and the Ghana Medical Mycology Society—@GMMS_Ghana. Healthcare professionals and scientists in other regions such as Latin America and the Caribbean and Asia–Pacific, which have also faced challenges in access to mycology diagnostics, are able to use this platform to engage with the mycology community worldwide. Twitter can be used for personal professional development, as it allows the user to keep abreast of courses, meetings, and job opportunities. The site flattens hierarchies – leaders in the field can be accessed directly, whereas at a traditional conference it may be intimidating to approach eminent experts in a hall of thousands of delegates. Research ideas and fruitful collaborations find fertile ground. The most immediate application of Twitter to direct patient care is clinical case discussion. As previously discussed, the management of aspergillosis is often complex, requiring input from specialists across several disciplines and, critically, review of imaging with a pulmonary radiologist. Using similar methods to tweetorials or paper discussions, the Twitter platform allows readily accessible information, including sharing of images, open to a wide audience globally. This can result in a “virtual MDT”. The advantages are clear—prompt access to a high level of expertise with no geographical limitation. The interactive nature and lack of hierarchical barriers allow for robust discussion of imaging, microbiology results, and the clinical case. This can be especially helpful for those working in more remote settings who may have more limited access to relevant expertise.
It allows access to some of the most highly regarded experts in mycology and aspergillosis to provide advice on management. There is even a poll function where participants can vote on management strategies. This has the potential to be harnessed not only for ad hoc cases, but for formalised regular aspergillosis case conferences, which can be local, regional, or even global. There are, however, numerous pitfalls and challenges to this, which can be summarised as follows. Maintenance of patient confidentiality is fundamental to medical practice, even more so on social media, where a breach of confidentiality can result in catastrophic release of confidential information. It is imperative that confidentiality is maintained, patient identifiers are removed, patient consent is requested and obtained when required, and professional guidelines on social media use by medical professionals are adhered to. Additionally, a clinical governance framework is key. The lack of regulation can have consequences for patient treatment decisions. It should remain clear that the ultimate treatment decision rests with the supervising clinician. Discussions should be recorded in a formal way and participants noted. The open access nature of the site, whilst a strength, also makes this difficult. Moreover, participants may not have the expertise they claim to, as credentials are not verified by the site. The system of “verification” of Twitter accounts has now been overturned and is available to anyone who pays a subscription. This can lead to unqualified persons providing treatment advice, solicited or otherwise. It also must be recognised that, despite the popularity of the platform, many experts in fungal infection are not active Twitter users. Image quality can be an issue when discussing, for example, CT scans—online images are unlikely to have the definition and clarity of the original images available to radiologists, limiting the usefulness of discussions. Videos can be useful but may also suffer from quality degradation for clinical purposes. Because aspergillosis is a relatively neglected disease, patients often face delayed diagnosis and a difficult treatment course for a condition that is poorly understood and for which awareness is low. Twitter has an enormous reach well beyond the confines of the medical profession, allowing patient networks and support groups to reach out to each other and provide information, and allowing mycology health professionals to engage the public on the topic, raise awareness, and connect with the press, government agencies and research funders. Examples of this include The Aspergillosis Trust @AsperTrust, the Sudan-based mycetoma patients’ group Mycetoma Patients Friends Association @MPFAglobal, and @Crypto_Mag_, which fights for equitable access to antifungal drugs. Awareness events such as World Aspergillosis Day (WAD) have been effectively amplified and publicised on Twitter using the hashtag #WorldAspergillosisDay (or a year-specific version, e.g. #WAD2022). As with all interaction with social media, there exists the problem of “trolling”—bad-faith accounts whose sole purpose is to agitate, contradict and abuse. This is an increasing problem in medicine, with anti-science and anti-medical sentiment becoming a growing issue on online platforms, including organised campaigns of disinformation. Even in a relatively less high-profile field such as mycology, quality control of content and combatting misinformation are essential if the platform is to be successfully used for patient benefit.
Social media networks such as Twitter are an integral part of modern life and are here to stay. The potential uses of such networks in all fields, including mycology and the management of aspergillosis, are numerous. The advantages of a global community tackling this relatively neglected topic in a collaborative way are clear, especially when there are relatively few scientists and clinicians dedicated to addressing the problem. However, there are numerous pitfalls, in particular governance issues, quality control and verification of participants. The use of social media in medicine is still in its infancy, and structured research on its impact remains scarce. It is essential to have a framework for recording data, clear ground rules for use, strict adherence to patient confidentiality, and maintenance of the standards and principles of disseminating peer-reviewed data. Despite its challenges, those who choose not to engage with these platforms are going to find it increasingly difficult to communicate their knowledge to an ever more online world.
Role-Specific Curricular Needs for Identification and Management of Immune-Related Adverse Events
89714f4e-7c58-4441-9431-aec1bfe763a7
10078044
Internal Medicine[mh]
Oncologists are prescribing immune checkpoint inhibitors (ICIs) more frequently for a larger breadth of cancer diagnoses, including nivolumab for melanoma and pembrolizumab for lung cancer. This likely corresponds to a higher incidence of immune-related adverse events (irAEs), given consistent frequencies of irAEs across trials. Any organ in the body could potentially be affected by irAEs, which include enteritis, colitis, thyroiditis, hypophysitis, dermatitis, and hepatitis. A single-center descriptive report noted an irAE incidence of 34% in immunotherapy clinical trials, with the most common irAEs being rash (dermatitis), hormonal (hypophysitis), elevated liver function tests (hepatitis), and diarrhea (enterocolitis). Generally, cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4) inhibitors such as ipilimumab have a higher irAE incidence (60–70%) than programmed cell death protein 1 (PD-1) inhibitors such as pembrolizumab or nivolumab (~40%). Many clinicians can be involved in the diagnosis and management of this broad range of presentations of irAEs, including clinicians who work in both outpatient and inpatient settings, depending on the severity of the patient’s symptoms. While oncologists, rheumatologists, and other specialists treat irAEs in outpatient clinics, some oncology clinicians and hospitalists have roles in the identification and management of irAEs in inpatient settings. Additionally, irAEs that require inpatient treatment are typically of higher grade and have a higher cost burden on the healthcare system. Given the rise in ICI use and the consequent increase in irAEs, as well as the number of specialists involved in irAE diagnosis and management, several prior studies have sought to characterize current experience and confidence levels. Each of these studies identified knowledge gaps for rheumatologists regarding irAEs. The one study that surveyed oncologists had only a 2% response rate but did show that they had higher ICI knowledge and rheumatic irAE experience than rheumatologists. There is a gap in our understanding of non-rheumatologists’ comfort in identifying and managing irAEs, and this gap is relevant given that these clinicians regularly care for such patients. One intervention study demonstrated the positive effects of addressing knowledge gaps through pharmacist-led education efforts: patients had lower rates of ICI discontinuation due to irAEs. Improving irAE understanding among clinicians who diagnose and treat patients with irAEs through dedicated didactics can positively impact patient care. We quantified the knowledge, confidence, and experience levels of oncology and general medicine clinicians in various roles to inform the development of a future irAE curriculum.
Participants
In June and July 2022, we administered a web-based survey to all University of Chicago (UChicago)-affiliated oncology clinicians (oncology fellows, attendings, nurse practitioners (NPs), and physician assistants (PAs)), all UChicago hospitalists and internal medicine (IM) residents, and a comprehensive list of community oncologists in Chicago. Survey invitations were sent via email, and those who had not responded received two reminder emails, spaced at roughly 1-week intervals. Survey implementation was coordinated by the UChicago Survey Lab, and survey responses were collected through Qualtrics. All survey participants had the option of receiving a $10 electronic gift card after completion of the survey.
Survey Instrument
We designed a 25-question survey that assessed knowledge, experience, confidence, and resource utilization related to irAE identification and management (Supplement ). Questions were designed with input from an irAE specialist (PR), oncologists, medical education experts, and the director of the UChicago Survey Lab. The questions were finalized after an iterative process. We created six knowledge-based multiple choice questions based on irAE literature and guidelines and informed by clinical experience. These assessed knowledge of irAE diagnosis (i.e., the myocarditis triad, the risk of irAEs with preexisting autoimmune conditions, and diagnostic steps for ICI-associated colitis) and management (i.e., ICI-associated hepatitis, first-line therapy for irAEs, and ICI-related hypothyroidism). We quantified experience level with ICIs and irAEs over the past year. We determined subspecialty referral patterns and the ease of referral for patients with irAEs. We identified the main resources utilized to identify and manage patients with irAEs and the difficulty or ease of accessing those resources. We assessed confidence in six aspects of irAE diagnosis and management (i.e., irAE identification, biopsy and lab timing, steroid dosing, steroid-sparing medication selection, and steroid side effect monitoring). We determined respondents’ openness to online resources and continuing medical education irAE sessions.
Statistical Analysis
Respondents were categorized as a community oncologist if not affiliated with UChicago and as an oncology attending if affiliated with UChicago, including community satellite locations. Based on the recommendation of the Assistant Director of Advanced Practice, Cancer Service Line (GT), the hematology/oncology NP/PA respondents were retrospectively stratified into two categories: those who primarily treat patients with solid malignancies and those who treat hematologic malignancies. Knowledge question accuracy was calculated in two ways: (1) counting “no idea”: the total number of correct responses (indicated by an asterisk in Supplement ) divided by the total number of responses; and (2) omitting “no idea”: the total number of correct responses divided by the total number of responses that were not “no idea.” Descriptive statistics were used for “no idea” response frequency as well as responses to experience, resource utilization, and confidence questions. We used Fisher’s exact tests to compare knowledge question responses between oncology physicians (attendings, fellows, or community oncologists) and non-oncology physicians or NPs/PAs (residents, hospitalists, and NPs/PAs). We used stratified simple quantile regressions to analyze the relationships between knowledge, experience, and confidence by clinician type.
Ethical Approval
The UChicago Institutional Review Board determined this study was exempt from further review as it is of minimal risk and comprised of de-identified survey data.
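To make the analysis plan concrete, the short sketch below (in Python) illustrates the two accuracy calculations, the Fisher's exact comparison and a single quantile regression. All responses, counts and variable names are hypothetical placeholders rather than the study's data, and the pandas, scipy and statsmodels packages are assumed to be available:

# Illustrative sketch only: every value below is hypothetical, not the study's data.
import pandas as pd
from scipy.stats import fisher_exact
import statsmodels.formula.api as smf

# Knowledge accuracy over the six multiple-choice questions, computed two ways.
answers = ["correct", "incorrect", "no idea", "correct", "correct", "incorrect"]
n_correct = sum(a == "correct" for a in answers)
accuracy_counting_no_idea = n_correct / len(answers)      # method 1: "no idea" counted as a response
attempted = [a for a in answers if a != "no idea"]
accuracy_omitting_no_idea = n_correct / len(attempted)    # method 2: "no idea" responses excluded

# Fisher's exact test on a hypothetical 2x2 table:
# rows = oncology physicians vs other clinicians,
# columns = ever answered "no idea" vs never did.
statistic, p_value = fisher_exact([[14, 48], [65, 44]])

# Simple (median) quantile regression of knowledge on experience,
# as would be fitted within one clinician-type stratum.
df = pd.DataFrame({
    "knowledge":  [0.33, 0.67, 0.50, 0.50, 0.83, 0.67, 1.00, 0.83],
    "experience": [1, 2, 2, 3, 3, 4, 5, 5],
})
fit = smf.quantreg("knowledge ~ experience", df).fit(q=0.5)
print(fit.params["experience"], fit.pvalues["experience"])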
Response Rate
There was an overall response rate of 37% (171/467). This included 51 internal medicine residents, 10 oncology fellows, 30 NPs/PAs (10 solid malignancy and 20 hematologic malignancy), 41 oncology attendings, and 11 community oncologists.
Response rates were highest for UChicago oncology clinicians (55–67%), lower for UChicago internal medicine clinicians (43–46%), and lowest for community oncologists (8%) (Table ).
Knowledge
Knowledge question accuracy was highest for oncology attendings (68%), oncology fellows (67%), and NPs/PAs who treat patients with solid malignancies (67%), and it was lowest for hospitalists (38%) and NPs/PAs who treat patients with hematologic malignancies (40%) (Table ). Oncology attendings, oncology fellows, and community oncologists were less likely to ever respond “no idea” than medicine residents, hospitalists, and hematology/oncology NPs/PAs (23% vs 60%, p < 0.001). Knowledge questions with the lowest accuracy were those related to treatment of ICI-associated hepatitis (23%) and the risk of de novo irAEs in patients with preexisting autoimmune conditions (33%).
Experience
Experience with ICIs and irAEs was highest for oncology attendings, solid malignancy NPs/PAs, and oncology fellows and lowest for hospitalists, hematologic malignancy NPs/PAs, and internal medicine residents (Table ). Unadjusted quantile regression models demonstrated that higher ICI and irAE experience was associated with higher knowledge scores for oncology attendings (p = 0.02) and oncology NPs/PAs (p = 0.03). Overall, experience was predictive of knowledge (R² = 0.15, p < 0.001). Oncology attendings were the least likely to determine that patients with irAEs required subspecialty care beyond oncology, and internal medicine residents and hospitalists were the most likely. Community oncologists found the referral process to subspecialists the most difficult.
Confidence
Overall confidence in irAE diagnosis and management was the highest for community oncologists, oncology attendings, and solid malignancy NPs/PAs and was the lowest for internal medicine residents, hospitalists, and hematologic malignancy NPs/PAs (Supplement ). Generally, clinicians were the least confident with choosing steroid-sparing medications for irAE treatment. Unadjusted quantile regression models demonstrated that higher ICI and irAE experience was associated with higher confidence for medicine residents (p = 0.026), oncology fellows (p = 0.047), and hematology/oncology NPs/PAs (p = 0.042) but not for oncology attendings (p = 0.12), hospitalists (p = 0.20), and community oncologists (p > 0.99). Overall, experience was predictive of confidence (R² = 0.28, p < 0.001). Unadjusted quantile regression models demonstrated that higher confidence was associated with higher knowledge for medicine residents (p = 0.003), oncology attendings (p = 0.04), and oncology NPs/PAs (p = 0.03) but not for hospitalists (p = 0.2), oncology fellows (p = 0.3), and community oncologists (p = 0.5). Overall, confidence was predictive of knowledge (R² = 0.19, p < 0.001). Both experience (p = 0.002) and confidence (p = 0.001) were predictive of knowledge in the multivariable quantile regression model (pseudo R² = 0.13).
Resources
The most common resources utilized by the sample were colleagues (77%) and UpToDate (75%), while the least common were the Society for Immunotherapy of Cancer (SITC, 12%) and PubMed (17%). Respondents found resources on medications and dosing more difficult to access than those on work-up or treatment.
While solid malignancy NPs/PAs were extremely open to online resources including online continuing medical education (CME) irAE sessions, oncology fellows and attendings were more likely to use online resources in general than to engage in irAE-specific CME sessions. Overall, most respondents were at least somewhat likely to utilize online resources including online CME irAE sessions (Supplement ).
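The unadjusted, stratified, and multivariable quantile regressions reported above could be run along the following lines in R. This is a sketch under stated assumptions, not the authors' code: the per-respondent data frame `scores` and its columns (`knowledge`, `experience`, `confidence`, `clinician_type`) are hypothetical, and the median (tau = 0.5) is assumed because the quantile used is not stated.

```r
library(quantreg)

# Unadjusted median regressions of knowledge on experience, stratified by clinician type
fits <- lapply(split(scores, scores$clinician_type), function(d) {
  rq(knowledge ~ experience, tau = 0.5, data = d)
})
lapply(fits, summary, se = "nid")  # coefficient tables with standard errors

# Pooled unadjusted model, and the multivariable model with experience and
# confidence as joint predictors of knowledge
summary(rq(knowledge ~ experience, tau = 0.5, data = scores), se = "nid")
summary(rq(knowledge ~ experience + confidence, tau = 0.5, data = scores), se = "nid")
```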
Clinicians caring for patients with irAEs vary in their knowledge, confidence, experience, and openness to curricular interventions based on their roles in the healthcare system. Only rarely did respondents choose "very confident" in response to irAE diagnosis and management questions, and all respondent types had knowledge scores below 70% on average, reflecting key knowledge gaps that need to be addressed. Curricular interventions have the potential to positively impact the treatment courses of patients on ICIs by avoiding discontinuation due to irAEs. This needs assessment identifies curricular priority areas comprising the greatest knowledge gaps and the aspects of care in which clinicians had the lowest confidence. The two questions with the lowest percentage correct and the highest "no idea" responses were related to (1) not using tumor necrosis factor-alpha inhibitors for ICI-hepatitis and (2) the risk of de novo irAEs when patients have preexisting autoimmune disease (pAID). Additionally, respondents expressed the lowest confidence level with choosing steroid-sparing medications. These gaps can be addressed through dedicated, interactive didactics that reference the various guideline recommendations for steroid-sparing agents. Management guidelines are currently available through the American Society of Clinical Oncology (ASCO), SITC, the National Comprehensive Cancer Network (NCCN), and the European Society for Medical Oncology (ESMO). Patients with pAIDs such as thyroiditis, psoriasis, or rheumatoid arthritis should be counseled on their risks of developing irAEs before starting ICIs. To accomplish this patient education, clinicians themselves must be informed about the risks of ICI toxicity, both pAID flares and de novo irAEs. Future irAE teaching modules for clinicians should provide information on this topic and guidance on the use of ICI therapy for patients with pAIDs. In general, more experience and higher confidence were associated with higher knowledge, although this was not always the case for every clinician type. Variations in experience level, confidence, and knowledge followed roughly expected trends: oncology physicians and solid malignancy NPs/PAs more commonly treat patients with ICIs who are at risk for irAEs and generally had higher experience, confidence, and knowledge. On the other hand, generalists and hematologic malignancy NPs/PAs less commonly treat patients with ICIs, encounter irAEs less regularly, and generally had lower confidence and knowledge in our study.
These three dimensions of experience, confidence, and knowledge can help inform role-specific curricular interventions that boost knowledge and confidence to the levels of responsibility, and the anticipated experience, of each role. Future curricular interventions should consider a clinician's role in the diagnosis and management of irAEs and customize interventions to these roles accordingly. Generalists can play a key role in irAE identification, with oncologists and autoimmune disease specialists playing vital roles as irAE management experts. Interactive or simulation didactics could compensate for the paucity of on-the-job experience for clinicians who care for patients with irAEs only occasionally. Such an intervention can help raise confidence and knowledge for those whose clinical work does not allow for organic growth via clinical experience. All respondents were open to online resources. Given these preferences and the likely limited clinician availability due to clinical and administrative responsibilities, future curricular interventions should consider easily accessible, freely available online didactics and potentially hybrid courses with in-person discussions and online resources. Virtual didactic options include non-interactive websites with links to manuscripts and key reference material, prerecorded online presentations, educational podcasts, interactive online modules, and live interactive sessions. SITC is actively developing online resources on toxicities of cancer immunotherapy through its Advances in Cancer Immunotherapy educational series. This resource currently requires SITC membership and is not freely accessible to all clinicians, particularly generalists, who are less likely to be members. Educational resources available to all relevant clinicians are needed. In addition to didactic resources, the availability and feasibility of clinical teachers within a given health system are important aspects to consider. These teachers could provide experiential learning opportunities as well as "hands-on," case-based, role-specific seminars for oncology trainees during ASCO or SITC national meetings and for generalists at American College of Physicians (ACP) or Society of Hospital Medicine (SHM) national meetings. Locally, institutions could develop objective structured clinical examinations (OSCEs) for medicine residents or oncology fellows, or educational interventions could start even further upstream at the medical student stage. Since experience was generally correlated with higher knowledge and confidence, simulated experiences could have a direct impact on patient care, particularly if specialized oncology or irAE clinics are not feasible options for clinical rotations. These future educational curricula should be efficacious and sustainable. They could be evaluated with the Kirkpatrick evaluation model and updated annually by expert panels given the rapidly growing nature of this field (Supplement ). Limitations of this study included the sample selection, response rate, and question design. First, most of the sample was composed of generalists and oncology specialists from a single academic institution, and the list of community oncologists was obtained from UChicago's Assistant Director of Physician Relations. While our sample was limited to the Chicago area, the survey was sent to 467 clinicians in diverse roles, including generalists and NPs/PAs who have not been studied previously.
The response rate was much lower for community oncologists (8%) than for all other respondent types (50%). This is likely because non-community oncologists were all affiliated with University of Chicago and may have recognized the email address domain for the survey invitation. Conclusions from this study regarding community oncologists are likely affected by selection bias. Finally, because there are no validated survey instruments specifically on irAEs, we relied on expert-driven question generation. Future curricula will provide role-specific didactics tailored to expectations for various clinicians who care for patients with irAEs. Key gaps in knowledge of irAE treatment with steroid-sparing agents as well as ICI use in at-risk patient populations (such as that with pAID) will be addressed in future teaching modules within this irAE curricular plan. Finally, irAE education for healthcare professionals with busy clinical schedules should be conducted through a hybrid model that facilitates teaching through online resources, interactive modules, and efficient in-person options. Didactics dedicated to ICI toxicities are vital as ICI use becomes more common and various types of clinicians are increasingly participating in irAE patient care. Our findings justify prospective study of curricular development and implementation that will lead to a multifaceted approach to irAE education for healthcare professionals with aims of improving evaluation and care for patients who suffer from ICI toxicities. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 102 KB)
A new theoretical perspective on concealed information detection
INTRODUCTION The concealed information test (CIT) is designed to detect concealed knowledge and may serve as an aid in criminal investigations (e.g., Osugi, ; Verschuere et al., ). This method, originally labeled "the guilty knowledge test" (GKT, see Lykken, , ), utilizes a series of multiple-choice questions, each having one relevant alternative, also labeled the probe (e.g., a feature of the crime under investigation), and several neutral (control) alternatives, chosen so that an innocent suspect would not be able to discriminate them from the probe (Lykken, ). To demonstrate the application of the CIT in a criminal investigation, let us imagine a bank robbery. The investigators know that a sum of one million US dollars was stolen, that the robber threatened the bank employee with a knife and, as shown by the security camera footage, that the robber fled the crime scene in a Subaru car. The police arrested a suspect who denied being at the crime scene and interrogated him with the CIT. The suspect may be asked: (1) Did you steal a sum of: 0.5 million; 1 million; 1.5 million; 2 million; 2.5 million? (2) Did you flee the crime scene in: a Nissan; a Toyota; a Subaru; a Honda; a Mazda? (3) Did you threaten the employee with: a shotgun; a revolver; a knife; a baseball bat; a pair of scissors? The suspect's physiological responses to the actual probe items are compared with his responses to the control items. A consistent pattern of differential reactions to the probe items will lead the examiner to conclude that the suspect knows them. Extensive research has demonstrated that knowledgeable individuals exhibit differential responses to the probes, compared to the irrelevant alternatives, while unknowledgeable examinees respond similarly to all alternatives (e.g., Meijer et al., ). This differential response pattern to the probes was labeled "the CIT effect" (e.g., Ben-Shakhar, ), and it is reflected by enhanced skin conductance responses (SCR), a shorter respiration line length (RLL), heart rate (HR) deceleration, as well as pupil dilation (e.g., Lubow & Fein, ). The RLL measure reflects the length of the breathing curve and is affected by both respiratory amplitude (depth of breathing) and respiratory cycle (speed of breathing). In addition to these autonomic nervous system (ANS) changes, the CIT effect is also reflected by an enhanced amplitude of the P300 component of the brain event-related potentials (ERPs) and enhanced reaction times (RTs; e.g., Meijer et al., ; Suchotzki et al., ). Three meta-analyses of CIT studies have demonstrated impressive detection efficiency estimates with ANS, RT, and ERP measures (Ben-Shakhar & Elaad, ; Meijer et al., ; Suchotzki et al., ). Although empirical support is crucial, scientific validation of this method also requires a theoretical understanding of the CIT effect. The purpose of this article is to refine the theoretical underpinnings of the CIT. Specifically, we intend to elaborate and clarify several features of orienting response (OR) theory and propose, for the first time, that the voluntary, rather than the involuntary, OR modulates the CIT effect. Second, we argue that motivational-emotional theories (see klein Selle et al., ) are consistent with OR theory and cannot be considered as proper accounts of the CIT effect.
Third, while traditional theoretical approaches assumed that a given mechanism would account for the CIT effect regardless of the specific response, we emphasize the idea that different cognitive mechanisms drive the responding of different physiological measures in the CIT. OR THEORY Various theories have been proposed to explain the CIT effect (e.g., klein Selle et al., ), but OR theory has played a major role in the CIT literature since the early 1970s (e.g., Lykken, ). The concept of the OR was originally introduced by Pavlov ( ) who used the term “Orienting Reflex” and argued that it is elicited by a slightest change in the environment. Sokolov ( , ) described the OR as a complex of behavioral and physiological reactions in response to any novel stimulus or a change in stimulation and proposed the “comparator model” as an account for OR elicitation and habituation. According to this model, the sensory input is compared to existing representations (termed “neuronal models” by Sokolov), which are formed by exposure to previous stimulation and by expectations. When the input does not match these neuronal models (or expectations), the OR is elicited. Importantly, when a stimulus carries a special significance (e.g., one's own name) an enhanced orienting response occurs. The application of OR theory to the CIT is based on the notion that the responses elicited by probes are similar to ORs because they are modulated by both stimulus novelty and stimulus significance (e.g., Gati & Ben‐Shakhar, ; Siddle, ; Sokolov, ). Specifically, the selected probes carry a special importance, i.e., significance, only for knowledgeable individuals. Furthermore, as each CIT question is followed by one probe and several irrelevant alternatives, the frequency of the probe's presentation is relatively small. Indeed, research has demonstrated that the CIT effect is greatly attenuated when probes are presented more frequently (Ben‐Shakhar, ). Thus, according to OR theory, both stimulus significance and relative novelty are necessary conditions for the CIT effect. 2.1 Stimulus novelty versus stimulus significance A notable weakness of OR theory is that the two major factors responsible for OR elicitation – stimulus novelty and stimulus significance – are not well defined. Stimulus significance can be related to many different factors. For example, stimuli may be significant for successfully performing a task (task‐significant stimuli), such as detecting a certain target in a visual search paradigm. But stimuli may also be personally significant (e.g., one's own name, or other personal details). Threatening and painful stimuli form another category of significance. Thus, significance is context‐related and must be defined for each specific situation. In addition, stimulus significance, in any given context is a continuous dimension because some stimuli can be more significant than others (e.g., more threatening, more self‐related). In CIT studies and applications, two definitions of significance have been used. Stimuli are significant to “guilty” examinees because they are related to a crime scene (either mock‐crimes in the experimental context, or real crimes in the forensic application of the CIT) and their knowledge might implicate these examinees. Personally significant items have also been used in the CIT, when someone tries to conceal his or her own personal details (false identity cases). Indeed, the recent meta‐analysis of CIT studies relied on these two types of significance (Meijer et al., ). 
Novelty too can be defined in more than one way, but basically it is related to an occurrence of an unexpected event. Of course, events can be unexpected for many different reasons. For example, Berlyne ( ) demonstrated that in addition to novelty, stimulus complexity and incongruity can affect the OR, as measured by the SCR. These different manipulations of the presented stimuli have however one thing in common, namely, they create an un‐expectancy, which is a crucial antecedent for OR elicitation. This was eloquently described by Sokolov ( ). In CIT studies and applications, novelty refers to the relatively small presentation frequency of the probes. Ben‐Shakhar and Gati proposed a feature‐matching approach for measuring both stimulus significance and stimulus novelty (e.g., Gati & Ben‐Shakhar, ; Gati et al., ). This model attempts to specify how both the novelty and the significance value of the stimulus input are determined by a comparison of the input with existing representations. Specifically, they proposed feature‐matching algorithms derived from Tversky's model of similarity judgment (Tversky, ). In the CIT, the degree of match/mismatch of the input with the previously presented stimuli determines the novelty value of the input and the degree of match/mismatch between the input and the critical probe stimulus determines the significance value of the input. The levels of novelty and significance are then integrated additively to determine the magnitude of the OR. Although this theory was largely corroborated in a series of CIT studies conducted by Ben‐Shakhar and Gati (e.g., Gati & Ben‐Shakhar, ; Gati et al., ), not all predictions could be confirmed (see Ben‐Shakhar & Gati, ). 2.2 Involuntary OR versus voluntary OR An additional difficulty in adopting OR theory as an account for the CIT effect concerns the fact that the significant stimulus (the probe) must be familiar to knowledgeable examinees. Consequently, a question can be raised regarding its novelty. In fact, this issue can be raised more generally regarding the role of stimulus significance in OR elicitation because a significant stimulus must be familiar (e.g., one's own name, a conditioned stimulus). Two solutions can be offered to this dilemma. First, as indicated above, the novelty value of the probe is reflected by its relatively small frequency of presentation (e.g., Ben‐Shakhar, ). Thus, in spite of its familiarity, its presentation is relatively unexpected. A second and more general approach has been offered by several prominent psychophysiologists in the late 1970s–early 1980s. These researchers, while using different terminology, were consistent in postulating two types of ORs. Most relevant for the present discussion is Naatanen's ( ) conceptualization. He noted that Sokolov's original theory cannot account for the activation of the OR by familiar, but significant stimuli. Consequently, Naatanen made a distinction between the term orienting reflex, which refers to the involuntary, organismic response evoked by novel stimuli and the term orienting reaction for the longer latency, less automatic orienting responses. 
A similar conceptualization was made by Maltzman and his colleagues (e.g., Maltzman, Gould, et al., ; Maltzman, Vincent, & Wolff, ), who coined the term “involuntary OR” to describe the response evoked by an unexpected novel stimulus and the term “voluntary OR” to describe the response to a predictable (significant) stimulus (e.g., a stimulus for which expectations have been formed through instructions). A similar distinction was made by Ohman ( ) between what he termed as “signal OR” and “non‐signal OR”. Non‐signal OR refers to a situation where the input does not match representations in short‐term memory (neuronal model, according to Sokolov's terminology). Signal OR refers to a situation where the input matches a memory representation that has been primed as significant. In other words, the terms orienting reflex, involuntary OR, and non‐signal OR describe orienting to unexpected events, as originally formulated by Sokolov ( , ), while the terms, orienting reaction, voluntary OR, and signal OR describe responses evoked by significant stimuli. Based on this formulation, we propose that the CIT effect is reflected by the voluntary OR. Recently, several studies employed a modified version of the CIT based on eye‐movement measures (e.g., Millen & Hancock, ; Millen et al., ; Nahari et al., ; Peth et al., ). The results reported by Lancry‐Dayan et al. ( , ) and by Nahari et al. ( ) are particularly relevant to the present discussion. Specifically, when 4 items were presented simultaneously (one probe and three control items), gaze was initially directed to the probe even when task demands prioritized focusing on the control, unfamiliar items. As the probe was significant and familiar to the participants, this was interpreted as a reflection of orientation toward significant stimuli. However, when specific countermeasure instructions (i.e., to look equally on all items) were given to the participants, this initial preference was largely avoided, implying that it is not an automatic response, but can be voluntarily controlled. This is another indication that the initial preference to significant items may reflect the voluntary rather than the involuntary OR. In conclusion, this brief discussion suggests that the CIT effect reflects orientation to significant items, which are familiar to the examinee. Thus, it suggests that it is the voluntary OR (or the signal OR) that modulates the CIT effect. It should be emphasized, however, that the voluntary OR (just as the involuntary OR) is affected by relative presentation frequency (e.g., Ben‐Shakhar, ). Thus, as indicated by Gati and Ben‐Shakhar ( ), it is the combination of stimulus significance and stimulus novelty (as reflected by the presentation frequency of the probes) that determines the CIT effect. It should also be stressed that the reviewed OR literature relied almost exclusively on the SCR measure. We believe that many previous CIT studies (particularly the early studies) relied only on the SCR because of the assumption that the same mechanism underlies the CIT effect regardless of the specific measure. Consequently, it was assumed that results observed with the SCR would generalize to other physiological measures. Moreover, many CIT researchers might have used the SCR because it is easily measured and analyzed, and at the same time very sensitive to concealed information (see Gamer, ). 
All in all, this means that the "voluntary OR" account of the CIT is supported by the SCR, but not by other physiological measures such as respiration and heart rate.
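To make the feature-matching account of novelty and significance described in Section 2.1 more concrete, here is a deliberately simplified R sketch. It is an illustration only, not the Ben-Shakhar and Gati model itself: stimuli are represented as plain feature sets, a Jaccard-style overlap stands in for the Tversky-derived matching function, and the weights are arbitrary.

```r
# Toy feature-matching sketch: novelty reflects the mismatch between the input
# and previously presented items, significance reflects the match with the
# stored probe representation, and the two are combined additively.

overlap <- function(a, b) length(intersect(a, b)) / length(union(a, b))

or_magnitude <- function(input, previous_items, probe,
                         w_novelty = 1, w_significance = 1) {
  # novelty: 1 minus the average similarity to what has already been presented
  novelty <- 1 - mean(sapply(previous_items, overlap, b = input))
  # significance: similarity of the input to the significant (probe) representation
  significance <- overlap(input, probe)
  w_novelty * novelty + w_significance * significance
}

probe    <- c("knife", "kitchen", "serrated")
previous <- list(c("revolver", "metal"), c("baseball_bat", "wood"))

or_magnitude(c("knife", "kitchen", "serrated"), previous, probe)  # probe item: high value
or_magnitude(c("scissors", "metal"), previous, probe)             # control item: lower value
```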
ALTERNATIVE THEORETICAL APPROACHES 3.1 Motivational-emotional theories As mentioned earlier, OR theory was not the only theory proposed to explain the CIT effect. Motivational-emotional theories, suggesting that the motivation to avoid detection, or the emotional arousal induced by the probes, can account for the CIT effect, have also been proposed (see a review in klein Selle et al., ). The hypothesis that motivation to avoid detection is associated with an enhanced CIT effect was generally corroborated (e.g., Gustafson & Orne, , ). Yet, it is important to note that all the corroborating studies relied only on the SCR measure (Ben-Shakhar & Elaad, ; Meijer et al., ). Moreover, although emotional-motivational theories have traditionally been presented as alternatives to OR theory, they are actually consistent with the OR approach and even support it. When individuals are highly motivated to avoid detection, the significance level of the probes increases and, consequently, larger ORs are expected. Likewise, probes associated with emotional arousal are more significant than neutral probes. Indeed, klein Selle, Verschuere, Kindt, Meijer, Nahari, and Ben-Shakhar ( ) demonstrated that emotional probes are associated with a larger SCR-CIT effect compared to neutral probes. Once again, it should be noted that this effect was not obtained with respiration and cardiovascular measures. Importantly, however, motivational-emotional theories cannot be considered proper accounts of the CIT effect because neither the motivation to avoid detection nor the emotional value of the probes is a necessary condition for this effect. Many studies have demonstrated a fairly large CIT effect without any motivational instructions or incentive and with neutral probes (e.g., card-test experiments, see Ellson et al., ; klein Selle et al., ). 3.2 Arousal inhibition theory Several researchers introduced cognitive load theory as a new approach to deception detection (e.g., Vrij et al., , ; Walczyk et al., ).
Specifically, liars must inhibit the truthful response and the effort associated with such inhibition attempts results in cues to deception (e.g., delayed responses). While the cognitive load approach has gained considerable empirical support in lie-detection studies (e.g., Vrij, ), it is unclear whether it is applicable for the CIT, which is designed to detect knowledge of concealed information, rather than deception. For example, many studies demonstrated that the CIT effect can be observed even in a silent condition (when examinees are not required to answer the questions, see Meijer et al., ). A possible account for the observed CIT effect under a silent condition was proposed by several researchers who introduced the concept of arousal inhibition, rather than response inhibition (e.g., Verschuere et al., ). According to this theory, guilty suspects who wish "to pass the test" will attempt to inhibit their experienced physiological arousal. Imagine, for instance, the previous example of a bank robbery. The robber might not only recognize (and orient to) the correct items, but also, in order to seem innocent, attempt to inhibit his physiological arousal. Several studies have demonstrated that such an effort is associated with enhanced, rather than reduced, physiological responses (Dan-Glauser & Gross, ; Pennebaker & Chew, ). Initial support for this theory came from a study using a startle-eye blink paradigm (Verschuere et al., ). This study revealed reduced, rather than increased, startle modulation to crime pictures (as measured by eye blinks), suggesting that inhibition contributes to the physiological responses in the CIT.
THE RESPONSE FRACTIONATION APPROACH So far, all the theories discussed above can be viewed as unitary theories because they assume that a single mechanism underlies the CIT effect regardless of the specific measure being used. For example, the OR account for the CIT effect was supported almost exclusively by studies using only the SCR measure. Yet, this account has been proposed as a general theory for the CIT. Such a unitary approach might be questioned as several CIT studies demonstrated that some predictions derived from OR theory are supported when the SCR measure is used, but not with other measures (e.g., respiration suppression). For instance, habituation is an integral part of OR theory, yet some studies revealed that while the SCR measure was attenuated when questions were repeated, no habituation was observed with the respiration measure (e.g., Ben-Shakhar & Elaad, ). Similarly, a number of experimental manipulations were found to divergently affect the SCR and cardiorespiratory (RLL and HR) measures (e.g., overt deception: Ambach et al., ; interfering task: Ambach et al., ; question repetition: Ben-Shakhar & Elaad, ). Ambach et al. ( ), for example, introduced a parallel n-back task during the CIT which was assumed to engage additional mental activity. While the parallel task enhanced the SCR CIT effect, it reduced the RLL and HR CIT effects. In addition, three studies revealed that mental countermeasures (i.e., making one or two control items significant by associating them with emotional events or with cognitive effort) attenuated detection efficiency with the SCR measure, but not when the RLL or HR were used (Ben-Shakhar & Dolev, ; Honts et al., ; Peth et al., ).
It should be noted that these mental countermeasures were based on the idea that once control items are also significant they elicit ORs which will undermine their differentiation from the probe. The finding that the RLL and HR measures were not affected by this type of countermeasure implies that they do not reflect orienting to significant stimuli. However, these discrepancies from OR theory as a general account for the CIT effect were largely ignored until recently. A series of studies conducted by klein Selle and her colleagues (klein Selle et al., , ; klein Selle, Verschuere, Kindt, Meijer, & Ben‐Shakhar, ) contrasted orienting and inhibition theory by comparing the classical conceal condition (assumed to induce both orienting and inhibition) to a novel reveal condition (assumed to induce only orienting). In both conditions, participants were explained that their physiological responses would change automatically when confronted with the probe items. Then, depending on the experimental session, participants were either motivated to allow (reveal condition), or not to allow (conceal condition), these automatic changes and, consequently, detection of the critical information. Thus, while participants in both conditions should orient to the significant critical information, only participants motivated to conceal should also inhibit their physiological arousal. The results contradicted the unitary approach and demonstrated that different peripheral physiological measures may reflect different mechanisms (see also Matsuda et al., ; Suchotzki et al., ). Specifically, these studies revealed that the CIT effects measured by the respiration and heart rate were observed only in the conceal condition, while the CIT effect measured by SCR was observed in both conditions. Thus, while OR theory can account for the CIT effect based on the SCR measure, HR deceleration and respiration suppression reflect attempts to inhibit physiological arousal. These findings are consistent with a response fractionation approach, suggesting that different physiological measures are associated with different processing stages of the sensory input. The idea of physiological response fractionation is not new and a series of studies conducted by Barry and his colleagues led to the development of the preliminary process theory (PPT; i.e., Barry, , , ). This theory, however, cannot fully explain the differential response in the CIT. For example, it fails to explain the differential HR responses to probe items. Specifically, as the PPT relates HR to the mere process of stimulus registration, all stimuli (both probes and controls) should elicit a similar HR deceleration (see klein Selle et al., ). Hence, a new response fractionation model, focusing on orienting and inhibition, was developed (see klein Selle et al., ; klein Selle, Verschuere, Kindt, Meijer, & Ben‐Shakhar, ). The mechanisms underlying other types of responses used in the CIT are yet unclear, but two recent studies by, klein Selle et al. ( ), klein Selle et al. ( ) suggest that the P300 component and pupil size measures may be similar to the SCR in reflecting orientation to significant stimuli (but see also Matsuda & Nittono, ; Rosenfeld et al., ). On the other hand, blinks and fixation suppression in the CIT were found to reflect attempts at arousal inhibition (klein Selle et al., ). 
As mentioned earlier, response time can be used as another CIT measure and several studies have demonstrated that this measure is very effective in differentiating between knowledgeable and un‐knowledgeable individuals (Lukács et al., ; Suchotzki et al., ; Verschuere et al., ). In the deception field, it has been generally assumed that RTs reflect cognitive load and inhibition processes (Vrij et al., ; Walczyk et al., ), because deceptive answers are associated with more effort than truthful responses. In addition, to elicit a deceptive response, it is necessary to inhibit the truthful response. Future research should be conducted to examine whether this theory will hold for the RT‐based CIT which is designed to detect concealed knowledge and does not necessarily involve deception. Interestingly, a study by Suchotzki et al. ( ) suggests that overt deception is necessary for the RT CIT‐effect, supporting the response inhibition account. Two subsequent studies further specified this inhibition mechanism, pinpointing to response conflict (i.e., Lukács et al., ; Suchotzki et al., ). THEORETICAL AND PRACTICAL IMPLICATIONS In its current form, the response fractionation approach of the CIT focuses on two underlying mechanisms: orienting and arousal inhibition. However, as discussed earlier, there are already several studies suggesting that the delayed RTs in the CIT are tied to another cognitive process, i.e., response conflict. Hence, with more research, the current fractionation model is expected to expand and include other dependent measures (e.g., RTs) and other cognitive processes (e.g., response conflict). Moreover, as the fractionation idea ties empirically established mechanisms (e.g., orienting) to different physiological and behavioral measures, it might also extend to other fields of research that rely on such mechanisms. For instance, several prominent emotion researchers have aimed to organize emotional reactions by underlying orienting and defensive responses (see Bradley et al., ). A better understanding of the theoretical basis of the CIT may also have practical implications and lead to real‐world recommendations. The first set of implications relate to the suggestion that the SCR reflects a voluntary OR which may be affected by habituation and item‐significance. Consequently, although the SCR has been repeatedly demonstrated to be the most efficient ANS measure (e.g., Gamer, ; Meijer et al., ), it is not always the preferred measure. For example, while a larger weight should be assigned to the SCR in the initial blocks of the CIT, measures such as RLL and HR should be weighted more in later blocks when questions and items are repeated (and SCRs are expected to habituate). Similarly, when the CIT includes only low salient items, more weight should be given to the RLL and HR measures. Nevertheless, CIT examiners should always aim to select the most significant CIT stimuli, e.g., stimuli that were encoded with a high level of arousal (see Osugi & Ohira, ; Peth et al., ; klein Selle, Verschuere, Kindt, Meijer, Nahari, & Ben‐Shakhar, ). Moreover, as the voluntary OR describes a response to significant stimuli for which expectations have been formed (e.g., through instructions), CIT examiners could provide more detailed instructions which emphasize the task relevance of probe items. The second set of implications refer to the issue of countermeasures. 
As indicated above, several studies demonstrated a significant reduction in SCR detection efficiency, but no reduction in RLL and HR detection efficiency, when participants tried to increase responses to neutral, control, items (e.g., Ben‐Shakhar & Dolev, ; Honts et al., ; Peth et al., ). This may not be so surprising under the fractionation model which suggests that the RLL and HR reflect arousal inhibition. Specifically, even if examinees attempt to increase responses to control items, when motivated to conceal, they will also try to inhibit physiological arousal associated with the probe items (and show the RLL and HR CIT effects). Hence, a recent study (klein Selle & Ben‐Shakhar, ) examined a novel type of countermeasure, designed to affect arousal inhibition attempts and the underlying RLL and HR responses. This study, unlike previous studies by klein Selle and colleagues, relied on a mock‐crime paradigm. Specifically, guilty examinees committed a mock‐crime and were instructed to try and reveal, instead of conceal, the probe items. Surprisingly, the RLL and HR CIT effects were unaffected by this novel type of countermeasure. It was accordingly suggested that when one is guilty of a crime, and inhibition is the default, it is difficult to stop inhibiting. Hence, when examiners suspect that their suspect is trying to use countermeasures, the RLL and HR measures should be used instead of the SCR. Clearly, these suggestions are made with caution (as additional studies should be conducted), but they may benefit practitioners and, hopefully, encourage a wider usage of the CIT in real criminal investigations. Nathalie klein Selle: Conceptualization; writing – review and editing. Gershon Ben‐Shakhar: Conceptualization; writing – original draft; writing – review and editing.
Chest X‐ray severity score Brixia: From marker of early
INTRODUCTION Two and a half years after the first COVID-19 outbreak, we are still learning about it. Never before in the history of science has so much information been generated, shared and deployed so quickly. This volume of research and scientific development helped to meet the challenges imposed by the pandemic and now needs to be examined in depth rather than forgotten. A growing literature is focusing on a post-pandemic, and likely endemic, COVID-19 world, with calls to prepare for the next pandemic. In the forerunner countries, preventing and identifying in-hospital COVID-19 positivity early was one of the leading challenges and required a substantial step forward in hospital management and patient care. , Here, we focused on the radiological assessment performed at emergency department (ED) admission during the first pandemic wave. Radiological imaging has indeed gained a critical role in the diagnosis of COVID-19 patients. Chest X-ray (CXR), in particular, quickly became a useful diagnostic and monitoring tool, owing to its feasibility in the emergency setting, despite its low specificity in the early stage of disease. Multiple CXR-based scoring systems have been proposed to stratify the risk of disease progression and mortality. , , Here, we investigated whether the semi-quantitative Brixia CXR score may deserve a role beyond early COVID-19 positivity, such as the prediction of late COVID-19 infection and serious adverse outcomes (i.e., thrombotic complications and gastrointestinal [GI] bleeding). METHODS 2.1 Patient enrolment and assessment This is a sub-analysis of a previously published retrospective study carried out at the IRCCS Ospedale Policlinico San Martino in Genoa during the first wave of the COVID-19 pandemic. Briefly, patients who tested negative for COVID-19 infection during their ED stay were enrolled from 24 February to 24 May 2020. The Brixia score was then assessed consecutively in the first 283 enrolled patients (Figure ). Clinical and biochemical data obtained at ED admission were collected from hospital records, as previously described. The present study was approved by the local ethics board of IRCCS Ospedale Policlinico San Martino (200/2020 – DB id 10,515). The study was carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and adhered to the principles of the STROBE statement. 2.2 Chest X-ray scoring by Brixia The Brixia score was calculated according to the previous literature , (Figure ). Chest radiographs were performed with DRX Revolution and Kodak DirectView DR9000 (both from Carestream Health), SEDECAL—Radiologico Mobile Digitale (ATS), or Roller 15/30 (SMAM). Image interpretation was carried out by two expert radiologists, blinded to the other clinical variables and outcomes, through SuitEstensa RIS (Esaote). 2.3 Study endpoints adjudication and sample size calculation A case of delayed COVID-19 positivity is defined as a patient testing positive for COVID-19 infection in the Internal Medicine wards after testing negative during the ED stay. The primary outcome of the study is therefore to establish the predictive role of the Brixia score for delayed COVID-19 positivity during the hospital stay. For a model with a binary outcome, our sample size (>125 subjects) satisfies the requirements for a power greater than 80% and a type I error below 5% (see Appendix ).
As secondary outcomes, we consider any COVID-19 positivity during the stay in either the ED or the Internal Medicine ward, the occurrence of adverse events (a composite of thrombotic complications and gastrointestinal [GI] bleeding) and overall mortality. The latter was tested after Brixia score categorization (< vs. ≥ 8), as previously reported. 2.4 Statistical analysis Analyses are performed with GraphPad Prism version 9.0.0 for Windows (GraphPad Software, San Diego, CA) and the R environment for statistical computing (URL http://www.R-project.org/). Categorical data are presented as absolute and relative frequencies, whereas continuous ones are presented as median and interquartile range [IQR] since the normality assumption was not met. Unpaired intergroup comparisons are drawn by Fisher's exact test and the Mann–Whitney U-test, as appropriate. Spearman's rank coefficients are calculated to investigate the correlations between continuous/ordinal variables, whereas Cox proportional hazards models are built to test the predictive ability of the Brixia scoring system towards any-time and late (in-ward) COVID-19 positivity during hospitalization, as well as overall mortality. They are expressed as hazard ratio (HR) with 95% confidence interval (CI). For the latter, Brixia categorization also allows survival rate estimation through the log-rank test and Kaplan–Meier curve. Logistic regression models (presented as odds ratio [OR] with 95% CI) are further built to address the predictive role of the Brixia score towards adverse thrombotic events and GI bleeding. For the adjusted models, forward stepwise regression analysis is used and non-normally distributed variables are log-transformed to meet the linearity requirement for regression. The performance of the regression analyses is also tested for (i) calibration by the Hosmer–Lemeshow goodness-of-fit test, (ii) discrimination through the receiver operating characteristic (ROC) curve and (iii) internal validation by bootstrap resampling. For all statistical analyses, a two-sided p-value <0.05 was considered statistically significant.
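A minimal R sketch of the modelling strategy just described is given below. It assumes a hypothetical analysis data frame `df` with one row per patient; the column names are illustrative and the covariates shown are examples, so this should be read as a sketch rather than the study's actual code.

```r
library(survival)
library(pROC)

# Cox proportional hazards model for delayed in-ward COVID-19 positivity,
# with illustrative covariates (HRs reported with 95% CIs)
cox_delayed <- coxph(Surv(time_to_positivity, delayed_positive) ~ brixia + fever + dyspnoea,
                     data = df)
summary(cox_delayed)

# Logistic regression for the composite adverse outcome (thrombosis or GI bleeding),
# with log-transformation of a skewed covariate as described in the methods
fit_adverse <- glm(adverse_event ~ brixia + haemoglobin + log(crp),
                   family = binomial, data = df)
exp(cbind(OR = coef(fit_adverse), confint(fit_adverse)))  # odds ratios with 95% CIs

# Discrimination of the logistic model via the ROC curve
roc_adverse <- roc(df$adverse_event, fitted(fit_adverse))
auc(roc_adverse)
```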
RESULTS 3.1 Delayed COVID‐19 positivity is associated with patient age, frailty and length of hospitalization Clinical characteristics of the study cohort are summarized in Table and Tables S1 and S2. As in the original cohort, patients were elderly, with a median age of 80 years, and equally distributed across sexes (52.1% males). Patients with in‐hospital COVID‐19 positivity more frequently presented with fever ( p = 0.03 and p = 0.099, respectively) and dyspnoea ( p = 0.003 and p = 0.042, respectively) at ED admission. Since patients who tested positive were immediately transferred to dedicated COVID‐19 units, their median length of stay in the Internal Medicine ward was shorter (5 days vs. 10 days, p = 0.007), without significant differences in the overall hospitalization time.
By contrast, the time to delayed in‐hospital positivity correlated with advanced age ( p = 0.006), comorbidity burden ( p < 0.001), impaired renal function ( p = 0.009 and p = 0.001 for creatinine and estimated glomerular filtration rate [eGFR], respectively) and inflammatory status ( p = 0.037 for C‐reactive protein [CRP]) (Table ). 3.2 Brixia score is associated with clinical suspicion of COVID‐19 infection and in‐hospital positivity At ED admission, the Brixia score was higher in patients with delayed in‐hospital positivity ( p = 0.0737), a history of contact with COVID‐19 cases before hospitalization ( p for trend 0.0067) and dyspnoea ( p = 0.0058) (Figure ; Tables ). 3.3 Brixia score independently predicts delayed in‐hospital COVID‐19 positivity Delayed in‐hospital positivity occurred in 18 patients (6.4%) of the total cohort, and within 5 days in half of the cases (Figure ). Of these, 7 (38.9%) occurred at the first test performed upon admission to COVID‐19‐free Internal Medicine units (Figure ). The Brixia score showed a significant predictive value towards delayed in‐hospital COVID‐19 positivity (HR 1.124 [1.007–1.254]) (Table ) and also fitted into a model with fever and dyspnoea (HR 1.164 [1.044–1.299]) (Figure ; Table ). With a p‐value of 0.123, the Hosmer–Lemeshow test confirmed the good calibration of this model, whereas ROC curve analysis indicated a significant discrimination performance, with an area under the curve of 0.765. As internal validation, the HR estimated from the original dataset falls within the confidence intervals calculated with bootstrap resampling. 3.4 Brixia score independently predicts adverse outcomes in the whole study cohort During hospitalization, forty patients suffered gastrointestinal bleeding ( n = 26) and/or thrombotic complications ( n = 17), namely venous/pulmonary thromboembolism ( n = 9), acute coronary syndrome ( n = 4) and ischaemic stroke ( n = 4) (Figure ). The Brixia score showed an independent association with serious in‐hospital complications, fitting a model with haemoglobin and CRP for the overall adverse events (OR 1.131 [1.032–1.239]) and with D‐dimer for thrombotic events (OR 1.344 [1.116–1.617]) (Figure ). Death was recorded in 81 patients (28.6%) during a median follow‐up of 169 days (range 2 to 402 days). In‐hospital death occurred in 27 patients, with only 10 cases directly related to COVID‐19 infection (Figure ). Once categorized, Brixia values ≥8 were independent predictors of overall mortality (HR 1.948 [1.194–3.176]) and fitted a model with the Charlson comorbidity index, systolic blood pressure, eGFR and CRP (HR 1.708 [1.018–2.868]) (Figure ). The Kaplan–Meier survival curve confirmed the higher mortality risk associated with a Brixia score ≥8, with a log‐rank p‐value of 0.021 (Figure ).
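A minimal R sketch of the mortality analysis in Section 3.4 is given below, assuming the same hypothetical data frame `df` and illustrative column names (`fu_days`, `death`, `brixia`, `charlson`, `sbp`, `egfr`, `crp`); it is not the authors' code, only one plausible way to reproduce the Brixia < 8 vs. ≥ 8 comparison.

```r
# Hypothetical columns: fu_days (follow-up, days), death (0/1), brixia (0-18),
# charlson, sbp, egfr, crp (covariates named in the adjusted model above).
library(survival)
library(survminer)  # for plotting the Kaplan-Meier curve

df$brixia_high <- factor(df$brixia >= 8,
                         levels = c(FALSE, TRUE),
                         labels = c("Brixia < 8", "Brixia >= 8"))

# Kaplan-Meier estimate and log-rank test by Brixia category
km_fit <- survfit(Surv(fu_days, death) ~ brixia_high, data = df)
survdiff(Surv(fu_days, death) ~ brixia_high, data = df)

# Adjusted Cox model for overall mortality
cox_mort <- coxph(Surv(fu_days, death) ~ brixia_high + charlson + sbp + egfr + crp,
                  data = df)
summary(cox_mort)                       # HR with 95% CI for Brixia >= 8

ggsurvplot(km_fit, data = df, pval = TRUE)  # survival curves with log-rank p-value
```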
DISCUSSION During the first wave of the pandemic, many diagnostic pitfalls challenged clinicians' threshold of suspicion for COVID‐19 infection. Reliable and feasible tools for detecting COVID‐19 early at ED admission were lacking and not easy to implement. Here, we highlight the role of the semi‐quantitative Brixia CXR score in COVID‐19 diagnosis/prognosis, and beyond. The association with a history of contact with suspected/confirmed cases and with the hallmark symptoms of COVID‐19 infection may support the primacy of clinical/radiological diagnosis over molecular positivity.
Nasopharyngeal swabs were routinely repeated during the ED stay, which supports the hypothesis of a greater predictive power of Brixia, at least compared with the first‐generation molecular tests for COVID‐19. As an additional finding, we report a predictive value of Brixia towards death and atherothrombotic adverse events during hospitalization and after discharge. The fit with D‐dimer in the logistic regression model also suggests a pathophysiological explanation. As in previous coronavirus outbreaks (i.e. SARS‐CoV‐1 and MERS‐CoV), the association of SARS‐CoV‐2 infection with D‐dimer and coagulation abnormalities is clearly established. Endothelial dysfunction and microvascular thrombosis are well‐described mechanisms by which COVID‐19 infection can trigger venous thromboembolism/pulmonary embolism and complement‐mediated thrombotic microangiopathies/disseminated intravascular coagulation. It cannot be ruled out that thrombotic complications also underlie the previously reported association between the Brixia score and clinical worsening (i.e. non‐invasive ventilation, intubation and/or admission to the intensive care unit). Even more intriguingly, we observed this association in the whole cohort, in which COVID‐19‐negative patients predominated. This should pave the way for future studies addressing the clinical features associated with these radiological patterns. The independent association with mortality is another intriguing finding. Here, we confirm a greater mortality risk in elderly and frail patients with a high comorbidity burden (mainly cardiovascular and renal) and an inflammatory status. Whether the Brixia score is 'the tip of the iceberg', reflecting underlying pathological conditions, remains to be elucidated. Overall, semi‐quantitative scoring of CXR by Brixia appears to have a relevant prognostic role that is not limited to COVID‐19 infection. It may also offer advantages over CT, which is burdened by a higher radiation dose, a longer examination time and lower feasibility in the ED and in peripheral healthcare centres. The lessons from the COVID‐19 pandemic may thus help prepare for further waves and new pandemics, and open new applications for the Brixia scoring system as well. CXR indeed meets the need to merge radiological, demographic, biochemical and clinical features in the shortest time, optimizing patient diagnosis/risk stratification and reducing person‐to‐person transmission. Information from the Brixia score might also be extended through radiomic analysis and/or artificial intelligence. As a sub‐analysis of a larger retrospective cohort, this study has intrinsic limitations, although the sample analysed is representative of the original cohort and both share the primary outcome. Conversely, this sample, and the whole cohort, may not be representative of the global pandemic, since recruitment was limited to the first wave in a forerunner country (i.e. Italy). As mentioned above, the better performance of the Brixia score compared with swab testing for COVID‐19 needs validation against later molecular/antigenic kits. Similarly, external validation in later pandemic waves, in which virus variants overlapped and the vaccinated population increased, would be appropriate. In conclusion, this sub‐analysis of a retrospective cohort highlights the role of chest X‐ray scoring with Brixia as a predictive tool for delayed COVID‐19 positivity. This would have a relevant impact in preserving COVID‐19‐free wards from in‐hospital clusters of contagion.
Results from the secondary outcomes (i.e. thrombotic complications and long‐term overall mortality) also suggest potential future applications of the Brixia score beyond the COVID‐19 pandemic. These would deserve specifically designed studies, so that a new tool may develop from this pandemic wave. The authors report no relationships that could be construed as a conflict of interest. Appendix S1.
Has the Child Dental Benefits Schedule improved access to dental care for Australian children?
The Child Dental Benefits Schedule (CDBS) is a government‐funded schedule aimed at improving access to dental care for the Australian population. It has been reported to be underutilised and no studies have investigated if the schedule is meeting the policy goals. The CDBS is improving reported access to dental care for children in low‐income households. Innovation and policy reform must be explored to improve access for Aboriginal and Torres Strait Islander children and younger access for all income levels. Whilst the structure and administration of the CDBS has improved access for sections of the community, there are underserved populations that require urgent additional support to access dental services. INTRODUCTION Dental decay (caries) is the most common chronic disease and there have been increasing calls for action (Peres et al., ). Dental caries is not simply caused by poor tooth brushing behaviours and sugary diets, but complex interlinked social and system‐level factors (Watt, ). By early adolescence, more than half of Australian children experienced dental caries, and the burden of disease is inequitably distributed for disadvantaged and marginalised children (Stormon et al., ). Untreated dental caries results in pain, school absenteeism and lower quality of life for children (Ghorbani et al., ). Untreated caries also results in complex and costly treatment needs as well as preventable hospital admissions (Alsharif et al., ). Australia's dental care system is a mixed healthcare model, where individuals can access care through either public or privately operated services. In the private model, individuals pay on a fee‐for‐service basis and can purchase private health insurance to cover part of the expenses (Lam et al., ). The public sector is funded by the Commonwealth and State and Territory Governments and generally low socioeconomic groups and children are eligible for subsidised care (Queensland Government, ; Victoria State Government, ). Over half (56%) of Australian children reported accessing dental services in the private sector (Do & Spencer, ). Approximately 40% of the population hold private insurance for dental treatment, yet even these individuals often pay substantial gap fees (Lam et al., ). This unavoidable financial burden explains why over a third of Australians either postpone or evade dental treatment due to cost (Ellershaw & Spencer, ). In 2008, the Australian Commonwealth Government introduced the Medicare Teen Dental Plan (TDP) for 12‐ to 17‐year olds from low‐income families. Eligible teenagers received an annual voucher of around $150 (AUD) towards the cost of an annual preventative dental check‐up (Australian Research Centre for Population Oral Health, ). Underutilisation of the TDP in vulnerable populations contributed to a review of the schedule and its subsequent cessation on 31 December 2013. In 2014, the TDP was superseded by a new dental health reform package; the Child Dental Benefits Schedule (CDBS) aimed at improving access to a wider range of dental services and treatment (Australian National Audit Office, ). CDBS is an ongoing schedule administered through the Australian Government medical insurance schedule Medicare. Eligible 2‐ to 17‐year‐old children can access $1000(AUD) of dental treatment over 2 calendar years in the public or private sectors (Australian National Audit Office, ). 
The schedule objective was to ‘ Improve access to dental services for children ’ and ‘ help children develop good oral health habits early in life and help to arrest the increase in child dental decay ’. (Australian National Audit Office, ) In the most recent statutory review of the schedule, $1.4 billion worth of benefits was paid from implementation to 2018 in the schedule (Commonwealth of Australia, ). However, this was reported to be 41% lower than the projected expenditure on the schedule (Australian National Audit Office, ). The Australian Federal Budget 2021–2022 increased the schedule budget by $7.3 million over 4 years to include children less than 2 years of age. The Department of Human Services notified approximately 3.1 million children in 2014 and 2.9 million children between January and June 2015 of their eligibility under the schedule (Australian National Audit Office, ). An audit of the schedule in 2015 found that less than 30% of the eligible child population were utilising the program (Australian National Audit Office, ). The schedule audit recommended the administration of the schedule be reviewed and barriers to utilisation investigated (Australian National Audit Office, ). Since the publication of the audit of the CDBS, five studies have been published investigating CDBS utilisation. A study by Putri et al. ( ) reported a decline in service utilisation by 16.3% after the 1st year of the CDBS (Putri et al., ). Another study found that schedule utilisation rates were similar in Indigenous and non‐Indigenous Australians; however, less preventive services were claimed in Indigenous children (Orr et al., ). Mothers with mental health conditions and poor health behaviours (such as smoking) were found to be predictors of non‐utilisation of the schedule in the Longitudinal Study of Australian Children (Nguyen et al., ). The multi‐billion‐dollar schedule continues to be supported in the Commonwealth budget each year; however, the question still remains, does the Child Dental Benefits Schedule increase access to dental care for Australian children? The primary performance indicator of the schedule was a goal of 2.4 million eligible children accessing the schedule in the first 2 years since implementation and evidence suggests that this is not being met (Australian National Audit Office, ; Putri et al., ). Additionally, an audit of the schedule stated that these performance indicators do not provide a complete picture of the performance of the CDBS in meeting program objectives—improving access to dental services for children and improving population‐wide oral health (Australian National Audit Office, ). There is a lack of evidence to demonstrate that the schedule has resulted in an increased access to dental services for the Australian child population. With evidence of underutilisation of the schedule, further research is warranted to explore if the primary aim of the schedule is being met. This study, therefore, aims to assess the impact of the implementation of CDBS on access to dental care in the Australian child population, particularly children of low socioeconomic backgrounds who are the target of the schedule. METHODS 2.1 The longitudinal study of Australian children The LSAC is a cross‐sequential dual cohort study run biennially since baseline data collection in 2004 (referred to as wave one). At wave one, 5,107 children participated from the birth (B) cohort (aged 0–1 year) and 4,983 from the kindergarten (K) cohort (aged 4–5 years). 
Nine waves were available for use and Table reports the sample size and response rate across the study waves. The study child's primary caregiver completed either a telephone or computer‐assisted questionnaire. Ethical approval for the LSAC was granted by the Australian Institute of Family Studies Ethics Committee. Further information on the LSAC study design, ethics approval numbers and how to access the dataset can be found in data user guides and technical reports published online (Australian Institute of Family Studies, ). The study child's carer reported if dental service was used in the previous year and was collected in waves two and three for the K (6–7 years of age) and B (4–5 years of age) cohorts, respectively. The TDP was implemented on 1 July 2008 and ceased on 31 December 2013. The CDBS was implemented on 1 January 2014 and is ongoing. The B and K cohorts were 10 and 14 years of age, respectively, when the CDBS was implemented (Table ). Sociodemographic and socioeconomic variables included sex, Australian state of residence, Aboriginal and or Torres Strait Islander (herein respectfully referred to as Indigenous) status, Australian statistical geography standard (ASGC) (major city, inner regional, outer regional and remote/very remote), household income (recoded into tertiles) and Socio‐Economic Indexes for Areas Advantage/ Disadvantage (SEIFA) (recoded into tertiles). Carer‐reported receipt of the Family Tax Benefit, Parenting Payment Partnered or Single was reported as a proxy measure for CDBS eligibility as actual eligibility was not linked to the LSAC data. 2.2 Statistical analysis Stata 14.2 (College Station, TX) was used for data analysis and figures were created using the ggplot2 (v.3.3.3) and ggpubr (v.0.4.0) packages in RStudio (Boston, MA). Survey commands (svyset) were used to account for stratification by areas within states, clustering by postcodes and weighting due to potential non‐response. Population weights were applied, so analysis is representative of the Australian Bureau of Statistics‐estimated resident population counts of children in March 2004 for children 0 and 4 years of age, respectively. Further information on the survey design and the calculation of population weights is available from the LSAC technical reports (Australian Institute of Family Studies, ). Descriptive analysis of participant demographics and carer‐reported dental service use over time were reported by weighted population percentage and 95% confidence interval (CI). The percentage of carer‐reporting dental service use in the previous year was presented in a cohort table. Cohort effects were observed by examining intercohort changes (reading down the columns). Period effects were examined by comparing the same age group at one time point, with data at another time point (reading across the rows). Responses to sequential pairs of surveys were used to categorise children's dental visiting patterns as two surveys reporting visits (adequate), only one survey reporting visits (fair) and no reported visits in sequential surveys (poor). Unweighted and weighted longitudinal mixed effects Poisson models with individual identifiers as a random effect were used to assess the effect of government schedules on dental attendance and dental visiting patterns. An interaction term between schedule and household income and SEIFA groups was included in the models as the government schedules were income tested. 
Models were adjusted for cohort, age, sex, Indigenous status and ASGS and are reported as prevalence rate ratios (95% CI). Marginal analysis of the fixed effects of the dental attendance model was used to graph the adjusted dental attendance percentage across age groups and stratified by income.
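The published models were fitted in Stata with survey settings and population weights; the R sketch below is only a rough, unweighted analogue showing the structure of a longitudinal mixed-effects Poisson model with a schedule-by-income interaction and a child-level random intercept. The data frame `lsac` and all variable names are hypothetical placeholders, not the LSAC variable names.

```r
# Hypothetical long-format data `lsac`: one row per child per wave, with
# attended (0/1 dental visit in the past year), schedule (None/TDP/CDBS),
# income_tertile, age, sex, indigenous, asgs, cohort and child_id.
library(lme4)

m <- glmer(attended ~ schedule * income_tertile + age + sex + indigenous +
             asgs + cohort + (1 | child_id),
           family = poisson(link = "log"), data = lsac)

# Prevalence rate ratios (exponentiated fixed effects) with Wald 95% CIs
exp(cbind(PRR = fixef(m), confint(m, parm = "beta_", method = "Wald")))

# Adjusted attendance by age, stratified by income (marginal predictions),
# analogous to the stratified figure described above
library(ggeffects)
plot(ggpredict(m, terms = c("age", "income_tertile")))
```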
RESULTS Table reports the population‐weighted characteristics of the B and K cohorts. There were minor differences between the two cohorts across the demographic variables. The rates of dental attendance for the two cohorts increased during childhood and then peaked at age 12–13 for the B cohort and age 14–15 for the K cohort (Table ). A similar trend was observed between children who received and did not receive eligible payments for the dental schedules (Figure ). Prior to the implementation of the CDBS, the B cohort reported the lowest attendance rate at age 4–5, but this increased to rates comparable with the K cohort at ages 6–7 and 8–9 years. After the introduction of the CDBS, population‐level reported dental attendance rates increased for both cohorts. The increases in the percentage of dental attendance were small: the K cohort increased by approximately 2.5% and the B cohort by 5.5%. However, the dental attendance rate decreased in the final survey of both cohorts, whilst the CDBS was still operational. The introduction of the CDBS increased the rate of dental attendance for the low household income group by 8% (95% CI: 1%, 15%) after adjusting for age, cohort, sex, SEIFA, Indigenous status and ASGS (Table ). There was insufficient evidence that dental attendance for the low‐ and middle‐income populations improved under the TDP schedule. The model provides strong evidence that dental attendance generally increased with age and that the Indigenous population had a 31% (95% CI: 4%, 55%) lower attendance rate after adjustment for other factors. The model‐adjusted estimates of dental attendance rates across ages for both cohorts were explored graphically by stratifying the populations into income categories (Figure ). There was evidence that the introduction of the CDBS improved the favourable pattern of dental attendance for the population of children in the low‐income group after adjusting for other factors (Table ). Indigenous children were 73% (95% CI: 49%, 86%) less likely to have adequate dental visiting habits than non‐Indigenous children after adjustment.
DISCUSSION This study explored the impact of two Medicare dental schedules on reported dental attendance in two cohorts of Australian children. The dual‐cohort design of the LSAC enabled the investigation of period and cohort effects on dental attendance and estimation of the effects of the schedules adjusted for confounders. Overall, this study found a marginal increase in reported dental attendance in low‐income groups in the CDBS schedule years.
The World Health Organization defines universal health coverage (UHC) as health services that can be accessed by the population when they are needed, without financial and physical access barriers (The World Health Organization, ). Robust financial supports are central in UHC and the CDBS is an ongoing Medicare schedule which targets this aspect of universal coverage for children. Access to healthcare, however, is multifaceted with the availability of services, approachability, appropriateness and acceptability other key domains in patient‐centred access to healthcare (Levesque et al., ). The increase in reported access to dental services and favourable visiting patterns in low‐income households during operation of the CDBS provides some evidence that the schedule's primary aim to improve access to care in the child population is being met (Australian National Audit Office, ). However, the middle‐income group in this study did not have evidence of increased reported access to dental care as a result of the CDBS despite a large proportion being eligible. Other performance indicators used to discuss UHC in dental care include access to clinically relevant care, access in early childhood and improved oral health outcomes in the population (Briggs, ; Reich et al., ). An audit of the CDBS after its 1st year of implementation recommended that performance indicators needed to be defined as simply monitoring the number of children accessing the schedule was not sufficient to demonstrate the performance of the schedule (Australian National Audit Office, ). The Department of Health agreed to this recommendation made in the audit; however, no action to this recommendation has been published since the audit of the schedule (Australian National Audit Office, ). Evidence‐based performance indicators should be investigated in future studies and the cost‐effectiveness of the schedule explored. In successful UHC, it is essential to understand and address social inequities in access to health services (Reich et al., ). It is clear from this study that the Indigenous child population have substantially less‐reported utilisation of dental services. Even after adjustment for the CDBS, Indigenous children had 31% lower attendance rates than their non‐Indigenous counterparts. Another study investigating CDBS use in Indigenous children found overall use of this schedule similar to their non‐Indigenous counterparts (Orr et al., ). However, this result does not capture the difference in treatment modalities as Indigenous children had higher risk or requiring invasive dental treatment than non‐Indigenous children (Orr et al., ). This inequity has also been reflected in the distribution of dental disease with this population found to have an 18% higher rate of untreated caries than their non‐Indigenous counterparts (Do & Spencer, ). Access and utilisation of healthcare is multifaceted and the Levesque patient‐centred access to healthcare conceptualises the domains to access. Funding policies such as the CDBS may not overcome other barriers such as availability of culturally appropriate services. Tailored, culturally appropriate and evidence‐based models of care utilising the CDBS are urgently needed and should be implemented. Few children in this study reported access to care in early childhood despite first dental visits being recommended by the age of 2 years. The CDBS legislation has recently been amended to allow eligible children under the age of 2 years to access the schedule (Parliament of Australia, ). 
Access to dental care and preventative oral health education to facilitate early prevention of dental diseases should occur prior to and during the eruption of primary teeth during infancy. Numerous models of care have found early oral health screening by dental and non‐dental professionals to be effective in the prevention of early childhood caries (Heilbrunn‐Lang et al., ; Plonka et al., ). As the LSAC cohorts were mid‐childhood during the CDBS implementation, future studies should investigate younger cohorts of Australian children who have been eligible for the CDBS during infancy and early childhood and the impact of this schedule on early access to dental services. Further exploration of multi‐disciplinary models of care, including screening by nurses, speech pathologists and other early childhood practitioners to encourage early access to dental services, is also warranted. The population weights were another strength of the study, adjusting for participant attrition and allowing population‐level conclusions to be made. The large sample size of the LSAC allowed multiple interactions in a complex statistical model to investigate the research question. Using parental‐reported dental attendance may be limited by recall bias; however, this may be the strongest available population‐level measure of overall dental service use, given the difficulty of measuring attendance in a system dominated by private practice services. Differences between Australian state and territory public dental services may have influenced utilisation, especially in areas with greater utilisation of and access to public services, such as regional and remote areas. Future studies should investigate utilisation of the schedule across the private and public sectors to understand utilisation and improve access to the schedule.
CONCLUSION This study explored the impact of two Medicare dental schedules on reported dental attendance in two cohorts of Australian children. The increase in reported access to dental services and favourable visiting patterns in low‐income households during the operation of the CDBS provides some evidence that the schedule's primary aims to improve access to care in the child population are being met. The lower access to dental care in Aboriginal and Torres Strait Islander children and younger children warrants innovation and policy reform for these populations. Whilst the structure and administration of the CDBS has improved access for sections of the community, there are underserved populations that require urgent additional support to access dental services. Ethical approval for the LSAC was granted by the Australian Institute of Family Studies Ethics Committee. The authors have indicated that they have no potential conflicts of interest to disclose. Fig S1. Table S1.
Comparing the magnitude of oral health inequality over time in Canada and the United States
Socioeconomic inequality in oral health is well documented within high‐income countries, where oral disease is disproportionately prevalent in disadvantaged members of society . Country comparisons of oral health outcomes can provide insight into sociopolitical and health system factors that shape inequality [ , , ]. For example, evidence suggests that liberal democracies with market‐dominated economies and health systems accentuate differences in oral health between the rich and poor . Canada and the United States, in particular, have consistently demonstrated low public health expenditure, social spending, and increases in income inequality over time (Table ) [ , , ]. One notable difference between the two countries is the availability of Canada's national system of universal health insurance, which covers physician and hospital care, yet excludes oral health care. The Canadian and American approaches to oral health care are actually quite similar, with most care financed by employer and individually‐sponsored insurance and out‐of‐pocket payments, and limited contributions from government . In addition, most care in both countries is delivered in the private sector by dentists on a fee‐for‐service basis. Nevertheless, as liberal democracies, Canada still provides more support to its citizens than the United States in terms of unemployment insurance, social assistance for the poor, tax credits, and other universal benefits . Thus, despite similar demography and macroeconomic environments, the Canadian social safety is generally considered more extensive in terms of both population coverage and the level of benefits provided . Given the potential role played by political and social institutions in mediating oral health inequality, it would be reasonable to speculate that the extent of such differences may impact the distribution of oral health‐related outcomes in both countries. However, little comparative information on the magnitude of, and changes in, oral health inequality is available for Canada and the United States over time. Elani et al. reported a declining prevalence of untreated decay and edentulism in both Canada and the United States from the 1970s until the first decade of the new millennium, along with a flattening of socioeconomic gradients for filled teeth outcomes, with more low‐income individuals arguably consuming more restorative services in both countries over time . While there was persistent inequality, improvements for untreated decay were higher in Canada and, for edentulism in the United States . Farmer et al. supported these findings, reporting steeper income gradients in the United States than Canada, with adverse outcomes concentrated among the poor, which were attributed to the effects of income, gender, and age . These are the only two studies using nationally representative data to compare the magnitude of, and changes in, oral health inequality in Canada and the United States, yet they have shortcomings. Elani et al. only measured the association between socioeconomic status on oral health, but not the extent to which differences in socioeconomic position might impact the distribution of oral health in the respective populations . Farmer et al. used more robust measures to address the limitations of Elani et al.'s analysis, but only estimated the extent to which oral health outcomes were concentrated in certain segments of the population, and not changes in the magnitude of inequality over time. 
Measuring and monitoring inequality in oral health is considered important, yet research on trends over time remains limited . While it is known that the poor are worse‐off than the rich, there is almost no information on changes in the magnitude of the gap between the best and worst‐off members of society in Canada and the United States, particularly for clinical indicators. This study aims to quantify the extent to which differences in income impact the distribution of clinical oral health indicators, along with the percentage changes in inequality in Canada and the United States from the 1970s until the first decade of the new millennium. Data sources Data from four nationally representative surveys was used to obtain information on clinical oral health, demographic and socioeconomic status. For Canada, we used the Nutrition Canada National Survey 1970–1972 (NCNS) and the Canadian Health Measures Survey 2007–2009 (CHMS). The NCNS was conducted between October 1970 and September 1972 and collected data from 19,590 individuals aged 0–100 years, including Indigenous populations. The CHMS was conducted between March 2007 and 2009 and collected information from 5586 Canadians aged 6–79 years, excluding indigenous populations, institutionalized populations, and the Canadian Armed Forces. The NCNS and CHMS had unweighted response rates of 46.0% and 51.7%, respectively. Both surveys followed a stratified multistage sampling technique, collecting data over two phases, which included household interviews followed by clinical examination . For comparison with the NCNS and CHMS, we used the US Health and Nutrition Examination Survey 1971–1974 (HANES) and National Health and Nutrition Examination Survey 2007–2008 (NHANES). Both HANES and NHANES used stratified multi‐stage probability samples to collect information from noninstitutionalized Americans aged 0–74 and 0–80 years, respectively. The unweighted response rates for the surveys were 74.0% and 75.4%, respectively. Demographic and socioeconomic data were collected via household interviews, while oral health information was collected via clinical examination . Oral health outcomes We focused on three clinical oral health outcomes; (i) ≥1 untreated decayed teeth, which included pit and fissure, occlusal, proximal, overt, and grossly decayed teeth that had never been restored, to represent untreated decay levels in each population; (ii) ≥1 filled teeth comprising all permanent amalgam, composite resin, and glass ionomer surface restorations along with previously filled teeth presenting with secondary decay and fractured/defective restorations; and (iii) edentulism, as an indicator of unmet treatment need, utilization of services, and history of dental disease. Individual tooth counts with the assessment of each tooth surface was carried out in three of the four surveys to estimate both prevalence and severity of oral disease, while in NHANES only a basic screening examination was conducted to assess the prevalence of oral conditions. In order to maintain comparability, all the oral health outcomes were dichotomized and analyzed as binary variables. Income To measure inequality in oral health, we used total annual income as a socioeconomic indicator, as it was consistently reported in an ordinal form across all four surveys. The NCNS and HANES reported total annual family income, while the CHMS and NHANES reported total annual household income. The income variable was further ranked into quintiles, from highest to lowest. 
Indices of inequality Two complex regression‐based measures of inequality, the slope index of inequality (SII) and relative index of inequality (RII) were used to estimate absolute and relative inequality, respectively. The SII and RII not only reflect the socioeconomic dimension to inequality, they also incorporate the experiences of every socioeconomic group and are sensitive to changes in the distribution of socioeconomic groups in a population . The SII and RII were estimated by the regression of the midpoint value of the health outcome for each socioeconomic group along with a cumulative distribution, represented by a ridit score. The ridit scores were calculated by ranking weighted proportions of the income variable from the highest to lowest income groups, and assigning each category scores ranging from 0 to 1, based on the midpoint of the cumulative distribution of individuals within each group . The ridit scores were then incorporated in linear regression models, generating the regression coefficient, which represents the estimate of inequality. A positive value for the SII and an RII of greater than 1 is indicative of “pro‐rich” inequality, meaning the outcome is disproportionately distributed among higher‐income groups; while a negative value of the SII and an RII of less than 1 is indicative of “pro‐poor” inequality, meaning the outcome is disproportionately distributed among lower‐income groups . Analysis Data analysis using survey command was conducted in STATA version 15.0. Individuals aged ≥18 years, with complete data in all variables were included in the analysis. A very small percentage of participants ranging from 3% to 5% were excluded from the analysis due to missing data. Age‐standardized distributions of oral health outcomes across income groups were estimated for each country at both time points. Direct age‐standardization, using the US 2000 Census was performed to account for changes in distributions across time and country. The magnitude and direction of sex‐adjusted oral health inequality was estimated along with percentage change in inequality over time. Finally, an unpaired t‐test was conducted to determine the statistical significance of changes in the magnitude of inequality over time.
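To illustrate the mechanics of the ridit-based indices, the R sketch below builds ridit scores from weighted quintile shares and estimates the SII as the slope of a regression of group prevalence on the ridit score; the RII is shown with a log-link regression, which is one common formulation and may differ in detail from the exact specification used in the published Stata analysis. The summary data frame `grp` and its columns are hypothetical.

```r
# Hypothetical summary data `grp`: one row per income quintile, ordered along
# the income ranking used in the analysis, with
#   w    - weighted population share of the quintile (shares sum to 1)
#   prev - weighted prevalence of the outcome (e.g. untreated decay)
grp$ridit <- cumsum(grp$w) - grp$w / 2   # midpoint of the cumulative distribution (0-1)

# Slope index of inequality (SII): slope of prevalence on the ridit score,
# i.e. the absolute difference in prevalence across the whole income ranking
sii_fit <- lm(prev ~ ridit, data = grp, weights = w)
coef(sii_fit)["ridit"]

# Relative index of inequality (RII): one common formulation uses a log link,
# so the exponentiated slope is a prevalence ratio across the ranking
rii_fit <- glm(prev ~ ridit, family = quasipoisson(link = "log"),
               data = grp, weights = w)
exp(coef(rii_fit)["ridit"])

# The sign of the SII (and whether the RII is above or below 1) is read against
# the direction in which the quintiles were ranked, giving the "pro-rich" vs.
# "pro-poor" interpretation described in the text.
```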
Survey sample characteristics The characteristics of the sample population are presented in Table . The gender and age distribution were similar in both countries in the 1970s and 2000s. The age‐standardized prevalence of oral health outcomes by income category is presented in Figure . While income gradients persisted, the overall prevalence of untreated decay and edentulism decreased over time in both Canada (untreated decay: 64%–20.5%; edentulism: 16.4%–4.4%) and the United States (untreated decay: 45.7%–21.2%; edentulism: 15.5%–5.7%). For filled teeth, the overall prevalence increased over time in Canada (74.5%–89.7%), but remained stable in the United States (82%–82.6%). It also appears that increases in filled teeth among low and middle‐income groups in Canada were greater than in the United States. Finally, while the income gradient for filled teeth remained in both countries, it was more delineated in the United States in the 2000s. Income‐related inequality in oral health outcomes As seen in Table , among dentate adults, there was significant absolute income‐related inequality (SII) in the prevalence of untreated decay at both time points; however, this decreased by approximately 31% in Canada and remained unchanged in the United States. Relative income‐related inequality (RII) for untreated decay increased significantly over time in both countries. The increase in relative inequality in Canada (91%) was half of that in the United States (189%). For filled teeth, both the SII and RII declined significantly over time in both countries. The reduction in the SII in Canada (79%) was almost double that in the United States (38%). For filled teeth, the RII decreased over time by 63% and 16% in Canada and the United States, respectively. For edentulism, the SII decreased by 57.1% in Canada and 50.9% in the United States, while the RII rose by 200% in Canada and 78% in the United States.
Absolute inequality in the prevalence of untreated decay and edentulism decreased over time in Canada. In the United States, absolute inequality decreased for edentulism only, and remained unchanged for untreated decay. However, relative inequality for untreated decay and edentulism increased over time in both countries. For untreated decay, the increase in relative inequality in Canada was half of that in the United States; for edentulism, relative inequality more than doubled in Canada compared to the United States. For filled teeth, both absolute and relative inequality declined over time in both countries, with improvements among lower and middle‐income groups appearing more pronounced in Canada than in the United States. Overall, apart from edentulism, the magnitude of oral health‐related inequality in untreated decay and filled teeth appears to be worse in the United States than in Canada. In their pioneering study, Sanders et al. demonstrated that high population coverage for social benefits contributes significantly toward mitigating oral health inequality, and that a high reliance on private dental insurance is ineffective in achieving equity in population oral health . These findings help to explain our own. For instance, a higher level of welfare benefits in Canada covering larger portions of the population [ , , ] may have contributed to lower oral health‐related inequality than in the United States, despite a high reliance on private dental insurance in both countries. Sanders et al. and other authors also suggest that the population's oral health and inequality therein might be impacted by the unequal distribution of income (or income inequality) in a country . Canada has had lower income inequality than the United States (Table ); thus, despite higher levels of social spending in the United States over the past 35 years, low population coverage in regard to this spending and higher income inequality relative to Canada [ , , ] may explain why oral health‐related inequality appears to be worse in the United States than Canada. On the other hand, both absolute and relative inequality for filled teeth declined over time, albeit to a greater extent in Canada than in the United States. The narrowing of inequality for this outcome is indicative of an increasing uptake of dental services among lower‐income individuals. Both countries have predominantly privatized oral health care systems, suggesting there would be similar barriers in accessing dental services . Yet, the utilization of dental services has arguably improved over time, particularly among the poor, as indicated by declining inequality for filled teeth, which may, in fact, suggest enhanced access to dental care over time . Nevertheless, whatever improvement in access to dental services has been present, it appears to be inadequate in mitigating inequality in oral disease.
Similarly, a reduction in absolute inequality over time for adverse oral health outcomes, such as untreated decay and edentulism, reflects an overall declining prevalence of these outcomes within the population, which is a desirable effect; yet this was accompanied by an unequal rise in relative differences. The widening of relative inequality suggests that improvements in oral health have occurred at a higher rate among those at the upper end of the income gradient. Further, the stabilization of absolute inequality in the United States reflects persistent and intractable gaps between income groups in terms of oral health‐related outcomes. Despite the relatively low availability of public dental services in both countries, and the rising costs of private insurance and dental care in real terms, the gaps between the rich and poor in the utilization of services still declined. However, as argued above, this does not fully address the distributional burden of oral disease, as indicated by the concentration of unmet needs (e.g., untreated decay) among lower‐income individuals. Moreover, the concentration of edentulism among the poor over time suggests the inadequacy of oral health policies in addressing the lasting impacts of socioeconomic inequality over the lifespan. These findings are in line with the “inverse care law,” which states that health services structured by market forces are inversely available based on people's needs. Thus, while the oral healthcare system may play a mediating role in inequality, it alone appears to be insufficient in addressing inequality in Canada and the United States.

Overall, our results suggest that while oral health has improved over time, inequality in oral disease has in fact worsened. Despite the unabating negative impacts of aging over time, the decline in the prevalence of untreated decay and edentulism could also be attributed to period and cohort effects. However, the large and significant rise in relative inequality in untreated decay and edentulism over time in both countries suggests that the decline in the prevalence of oral disease is largely attributable to improvements primarily among higher‐income groups. As per the “inverse equity” hypothesis, inequality emerges because public health interventions are first and most accessible to those higher up the socioeconomic ladder, with a trickle‐down effect to those at the bottom. It is plausible, then, that inequality in untreated decay may be exacerbated through differential access to, and uptake of, preventive dental services (e.g., topical fluorides). Moreover, the state of oral health is likely modified by behaviors such as smoking, diet, and tooth brushing, and is therefore related to the trajectory of behavior change along the income gradient, wherein those higher up the gradient adopt new and healthy behaviors earlier than those below them; this serves as a potential explanation for the widening inequality observed in this study. Behavior change does not occur in isolation either, but is born out of social and living conditions, highlighting the important role of social determinants, which in turn points to the role of the Canadian and American welfare state (or failures therein) in mitigating inequality.

This study has certain strengths and limitations. All analyses were based on nationally representative surveys, using comparable clinical data from both countries at two points in time.
In addition, to the best of our knowledge, this is the first study to quantify both the magnitude and direction of change in oral health inequality over time in Canada and the United States, and to assess absolute and relative inequality using robust methods that align with World Health Organization recommendations. Some might question why we focused only on income‐related inequality in this study. The reason is that other indicators of socioeconomic status, such as educational attainment and occupational status, tend to be stable and provide little variation among adults over time, thus potentially underestimating socioeconomic inequality in health outcomes. Income has also been shown to be the strongest predictor of inequality in dental care use among Organisation for Economic Co‐operation and Development countries. Nevertheless, it must also be recognized that differences in income may be compounded by other socioeconomic indicators such as educational attainment and occupation, which were not accounted for in this study. In addition, this study did not account for the role of race or ethnicity in exacerbating inequality. While the health differential between privileged and disadvantaged racial groups has existed across time and space, with disadvantaged racial groups bearing the greatest burden of poor oral health outcomes, a key feature explaining racial gaps in oral health is socioeconomic status, which accounts for a considerable proportion of racial inequality. Finally, while there has been a clear consensus to prioritize research on trends in oral health inequality, due to data availability this study was limited to comparing inequality at two points in time.

While our results are consistent with the limited research comparing oral health inequality between Canada and the United States, they also capture the extent to which inequality has developed in the respective countries over time. Moreover, results from previous studies on inequality trends in high‐income countries such as Australia and the United Kingdom have demonstrated small improvements over time, occurring predominantly among the rich despite improved access to care overall, further corroborating our results. Future research opportunities include a comparative analysis of inequality trends in the distribution of oral and general health indicators in these countries. While there is some descriptive research in this area, time trend analyses using robust methodologies are limited in the North American context. Although this study did not empirically explore pathways to inequality, our findings do suggest a potential role for the sociopolitical environment. While research in this area exists in the European context, there is almost no information for North America and other nations. Moreover, while we assessed changes in oral health inequality among adults, extending such research into analyzing inequality patterns among children would augment knowledge on the extent to which public policy addresses differences by age.

In conclusion, oral health appears to have improved significantly over time in Canada and the United States; however, this was accompanied by an increasing and disproportionate share of unmet needs and poor oral health among the poor, particularly in the United States.
Despite highly privatized oral health systems in both countries with concomitant barriers to care, utilization of restorative services appears to have grown, particularly among the poor. Nevertheless, the oral healthcare system appears to be inadequate in mitigating inequality in the distribution of oral disease. Finally, while higher inequality in the United States may partially be explained by its weaker welfare state, our findings suggest the need for more upstream public health interventions in both countries to address the sociopolitical determinants of oral health.

Dr. Carlos Quiñonez receives remuneration from Green Shield Canada for consulting services around dental care‐related issues. All the other authors have no conflict of interest to declare.
Cross‐sector pre‐registration trainee pharmacist placements in general practice across England: A qualitative study exploring the views of pre‐registration trainees and education supervisors
2cf549b5-d46e-43d8-b68d-06b498faf894
10078633
Family Medicine[mh]
Pharmacists traditionally spent their pre‐registration year in community or hospital pharmacy, with variation between settings. Increasingly, pharmacists are employed in patient‐facing primary care settings, and their training needs to adequately prepare them for these patient‐facing roles. Cross‐sector pre‐registration placements in general practice (GP) improve trainee pharmacists’ understanding of patient pathways and holistic patient care. General practice placements particularly support trainee pharmacists’ development of consultation and clinical assessment skills and multidisciplinary team working. Key considerations when implementing cross‐sector GP placements include: good operational planning; collaborative supervision; and well‐supervised workplace learning in a supportive GP environment with appropriate opportunities for trainees to learn and harness skills.

INTRODUCTION
In recent years, pharmacists’ roles in England have changed (NHS England, , ), with increasing numbers working in a range of primary care settings, that is, general practice (GP – family medicine), urgent care, and care homes. The vision for a fit‐for‐purpose pharmacy workforce sees pharmacists able to work across integrated care pathways, providing patient‐centred care and medicine optimisation. Similar movements to integrate pharmacists within primary care teams can be seen internationally, such as in Canada (Raiche et al., ; Samir Abdin et al., ), the United States (Jacobi, ), Australia (Moles & Stehlik, ), and Malaysia (Saw et al., ). Reported benefits of pharmacists working with GPs include controlling prescribing expenditure, detecting and resolving drug‐related problems, and making clinical interventions to patients’ medicines (Khaira et al., ; Mann et al., ). The NHS Long Term Plan (2019) sets out proposals to significantly grow the number of pharmacists in primary care (NHS England, ), and to ensure that, as independent prescribers, they become a central part of multidisciplinary primary care teams. There are currently more than 1000 full‐time equivalent pharmacists working in general practice as well as in urgent care settings and care homes, with funding available through the NHS England Pharmacy Integration Fund and the GP five‐year contract framework (NHS England, ).

Delivering the NHS Long Term Plan will also require reform to initial education and training for pharmacists, who in Great Britain mainly undertake 4 years of university‐based education followed by 12 months of work‐based pre‐registration training in which they are supervised by a pharmacist tutor (Sosabowski & Gard, ). Unlike medicine or nursing, undergraduate pharmacy education is funded as a science degree and incorporates limited experiential learning, with the pre‐registration year currently contributing the main patient‐facing experience prior to registration. Until 2021, pre‐registration trainees have had to meet 76 performance standards set by the General Pharmaceutical Council (GPhC), against which their tutor signs them off during formal meetings after 13, 26, and 39 weeks (General Pharmaceutical Council, ). Following a final tutor sign‐off, trainees need to pass the GPhC registration assessment in order to apply for pharmacist registration.
Limited undergraduate experiential learning and the traditional set‐up of pre‐registration training taking place in a single sector, usually hospital or community pharmacy, create the challenge of achieving a sustainable pharmacy workforce that has the knowledge, skills, and understanding to work in primary care and across the wider integrated care system (NHS England, ; NHS Health Education England., ). Pre‐registration placements in GP provide a possible solution to this challenge.

1.1 Pre‐registration trainee pharmacists in general practice project (2019–current)
In 2019, the Pharmacy Integration Fund commissioned the Pre‐registration Pharmacists in General Practice Project, where 95 trainee pharmacists were employed in a base sector (community or hospital pharmacy) but spent between 13 and 26 weeks in GP throughout England. These GP placements were managed by Health Education England (HEE), the NHS statutory body responsible for the education and training of the health workforce. HEE appointed a national lead and regional facilitators to advise and support trainees, tutors, employers, and host sites in the development and delivery of GP placements. They also developed resources for base and particularly GP host sites including GP placement objectives, expected outcomes, and a framework outlining how to meet both the GPhC performance standards and HEE recommended outcomes (Appendix ). The structure of cross‐sector placements varied, encompassing one or more blocks, and weeks or days split between the base sector and GP setting. Trainees had a pre‐registration pharmacist tutor at the base sector, who retained overall responsibility for the trainee throughout the year, and a second pharmacist tutor working in GP placements who understood the scope of practice of the still emerging role of a primary care pharmacist (NHS Health Education England., ). Whilst in GP, trainees completed a reflective e‐portfolio to demonstrate competence against the GPhC performance standards. All trainees and tutors had access to this e‐portfolio, which included a number of formative assessment tools (Appendix ). The overall aim of this study was to evaluate implementation of cross‐sector GP/community and GP/hospital pre‐registration placements in England, and to identify barriers and enablers of a training placement that achieved its intended outcomes for learners – conceptualised here as a ‘successful training placement’. The purpose of this paper is to use our evaluation findings to shed light on how to best implement cross‐sector placements.
METHODS

2.1 Study design and sampling
A qualitative study design was used, with study sites in England purposively selected on the basis of key situational variables (Gray, ):
Pharmacy base: community and hospital
Number of pre‐registration trainee pharmacists in the base doing a GP rotation
Length of GP placement: 13 weeks versus 26 weeks
Organisation of GP placement: block versus split week/day
Regions within England
At each study site, semi‐structured telephone interviews were conducted with trainees, the pharmacy base tutor, and/or the GP pharmacist tutor, using a dyad/triad approach. A dyad involved at least one trainee and one of their tutors being interviewed. A triad involved at least one trainee and both their base and GP tutors. Study sites had to have a trainee and a tutor participate to be included in the study.

2.2 Recruitment
The HEE national project lead provided the research team with 78 training sites and their characteristics for purposive sampling. The research team initially selected 8–12 study sites using a sampling matrix based on the key situational variables described above and emailed invitation letters and participant information sheets (PIS), with a request to contact the research team. These assured participants of confidentiality and that they could withdraw from the study without impact on their training. If sites from the initial sampling matrix did not wish to participate, they were replaced by other sites with similar characteristics.

2.3 Data collection
Telephone interviews were conducted with trainees and tutors at seven study sites between January and March 2020; interviews were paused due to the emerging COVID‐19 pandemic and resumed in June to July 2020. All participants provided written or verbal consent before the interview commenced. Interview schedules were informed by existing research (Jee et al., , , ; Jones et al., ; Schafheutle et al., ), an earlier pilot evaluation (Gray, ), and the HEE‐GP pre‐registration handbook (NHS Health Education England., ). Schedules were revised following discussions with the HEE national lead, with questions tailored to understand the contribution of GP placements to the achievement of pre‐registration learning outcomes, and an opportunity at the end for participants to reflect on their overall GP placement experience (Appendix ).
This study received ethics approval from The University of Manchester Research Ethics Committee (Ref no. 2020‐7914‐16,794) and the NHS Health Research Authority (Ref no. NHS001659).

2.4 Data analysis
All interviews were audio‐recorded and transcribed verbatim. Interview transcripts were analysed by the first author, aided by NVivo 11 (QSR International Pty Ltd, ), using inductive data‐driven coding followed by thematic analysis to provide rich, detailed descriptions (Braun & Clarke, ), focussing on the exploration of inter‐ and intra‐group themes. Analysis and themes were discussed with the co‐authors in regular meetings throughout the analysis. Interpretation of findings was then checked with the programme national lead and relevant contacts from the NHS England Pharmacy Integration Fund (NHSE PhIF).
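To make the idea of the sampling matrix described in section 2.1 concrete, the following is a minimal, illustrative sketch of how such a matrix over the situational variables might be enumerated. The site records, field names, and the subset of variables shown are assumptions made for illustration only; they are not the study's data or its actual sampling tool.

```python
# Minimal illustrative sketch (not the study's sampling tool): enumerating a
# purposive sampling matrix from a subset of the situational variables in
# section 2.1. Site records and field names are invented for illustration.

from itertools import product

# Strata defined by key situational variables (region and trainee numbers omitted)
pharmacy_base = ["community", "hospital"]
placement_length = ["13 weeks", "26 weeks"]
placement_structure = ["block", "split week/day"]

# Hypothetical register of training sites (in practice supplied by the HEE lead)
sites = [
    {"id": "A", "base": "hospital", "length": "13 weeks", "structure": "block"},
    {"id": "B", "base": "community", "length": "26 weeks", "structure": "split week/day"},
    {"id": "C", "base": "community", "length": "26 weeks", "structure": "block"},
]

# Build the matrix: for each combination of characteristics, list eligible sites
matrix = {}
for base, length, structure in product(pharmacy_base, placement_length, placement_structure):
    eligible = [s["id"] for s in sites
                if s["base"] == base and s["length"] == length and s["structure"] == structure]
    matrix[(base, length, structure)] = eligible

for stratum, eligible in matrix.items():
    print(stratum, "->", eligible or "no site available")
```

In this sketch, sampling then amounts to picking sites across strata so that each combination of characteristics is represented where possible, and replacing non-participating sites with others from the same stratum, as described in section 2.2.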
RESULTS

3.1 Site characteristics
The characteristics of the placement sites involved in this study are provided in Table . Of 33 placement sites approached, 11 participated as a dyad/triad (i.e. trainee and at least one of their tutors) [Table ]. Reasons for non‐participation are provided in the report (Hindi et al., ). Thirty‐four interviews were completed (14 trainees – 6 female, 8 male; 11 base tutors – 4 female, 8 male; 9 GP tutors – 4 female, 5 male). In one placement site, the superintendent (the pharmacist with overall responsibility across a pharmacy chain) was interviewed instead of the base tutor.

3.2 Overview of GP placement model
Trainees valued the more flexible structure of GP placements, with overarching goals that were more learner‐centred and tailored to their needs than in their base setting. During thematic analysis, findings were synthesised into a model for the implementation of cross‐sector placements in GP involving a number of key phases (Figure ), which are described next.

3.3 Preliminary phase
This phase covers setting up and planning cross‐sector placements, and what needs to be in place prior to the trainee arriving.

3.3.1 Setting up cross‐sector placements
Setting up “successful” training placements required negotiation with GP sites to take on trainees, which was more straightforward when building on already established relationships. Because base sector and GP sites had to register with HEE a long time in advance, contingency planning and flexibility were needed to allow for changes in staffing and circumstances at the base and GP site.

3.3.2 Preparing for GP placements
An orientation event provided an important opportunity for trainees and base tutors to meet and discuss expectations, outcomes, and the placement structure. Many base tutors arranged for trainees to meet their GP tutors before the placement started; some arranged for the trainee to visit the GP site.

3.3.3 GP placement models
Employer (base) and host GP practice sites needed to negotiate and agree on how to structure GP placements. Trainees and tutors highlighted advantages and disadvantages of different placement structures. Hospital tutors and trainees in single block placements believed that this structure enabled trainees to fully integrate in GP by spending uninterrupted time there. They also viewed block placements as fitting well with a hospital's rotation structure. GP tutors perceived that a block made it easier to incorporate a trainee into routine practice.
“I think it’s better that it’s a block because I think it gives better continuity, it allows the pre‐regs to settle in because I think it is difficult for our pre‐regs rotating through these different areas and having to learn about new systems, new environments, new staff that they’re working with. I feel like they need to settle into the new rotation and set objectives that are consistent”.
(Site 5, hospital, base tutor – single block)
Both base and GP tutors in the two study sites with multiple block placements perceived that this structure enabled spiralling of learning (i.e. learning spread out over time rather than being concentrated in shorter periods). The main disadvantage of block placements was that they required trainees to relearn or refresh their understanding upon returning to the base sector. Most community pharmacy/GP pairings used split week placements, which were viewed as helping trainees to develop in both sectors simultaneously throughout the year, and as enhancing cross‐sector communication between community pharmacy and GP.
“I really enjoy the split weeks. It’s really nice to work on patient cases in both GP and in the community pharmacy […] And just building the relationship with both the colleagues in the community pharmacy and in the GP practice …. I think it's helped the community pharmacy’s communication with the GP practice”. (Site 9, community pharmacy, trainee – split weeks)
Some trainees identified that split week placements meant not always being able to see through the resolution of problems, which was more pronounced in split day placements (morning GP, afternoon community pharmacy).
“To a certain extent, it’s good, but I mean, there are opportunities or certain incidents where I miss certain components of the day to day activity of either or both the GP or the community. So, for example, because I leave early at the GP, I don’t see like the med reviews that happen towards the afternoon. Or if I’m in the pharmacy, I don’t actually do dispensing of the methadone or something like that, for patients who come in the morning. So I kind of sometimes miss aspects of both, but I have like snippets”. (Site 6, community pharmacy, trainee – split days)

3.3.4 Duration and timing of GP placement
The GP practice and base site agreed their preferred placement duration and its timing. Most trainees and base tutors in block placements preferred trainees to spend their initial 3 months in the base sector (until the 13‐week appraisal), so that trainees could become accustomed to their base site, build confidence, and complete some of their hospital accreditations and logs (where relevant). All trainees and tutors in the hospital/GP pairings agreed that 13 weeks was an appropriate (minimum) duration, offering sufficient opportunities to undertake a range of activities and learn new skills. Trainees, hospital base tutors, and GP tutors considered that 26 weeks in GP would make fitting in all hospital activities challenging.
“I think less than nine months there, three months with us probably wouldn't be sufficient to cover everything you need to cover in hospital. All that you would do if you stayed longer is just develop further… So you wouldn't necessarily do any more in terms of what you do, it would just be more complex and possibly more independent if you stayed longer.” (Site 3, hospital, GP tutor – single block)
In community pharmacy/GP pairings, all trainees and all base (except one) and GP tutors considered 26 weeks in each sector optimal.
“I'd probably say it's perfect the way it is 26 weeks because as much as there is to do in GP, there is always a lot to do in community pharmacy as well. So I think if you're in one place more than the other, then you're kind of missing out in either place”.
(Site 8, community pharmacy, trainee 2 – split weeks)

3.4 Collaboration between base and GP sites
Once GP placements were underway, base and GP tutors emphasised the importance of good communication, particularly at handover, to ensure all processes and procedures were set up for the trainee. Base tutors highlighted the importance of keeping the trainee linked to the base sector by making sure that trainees had access to regular learning sets and training days at the base during their GP placement.

3.5 Phase 1: Transition
This phase is about how trainees were introduced to the GP environment and the factors which eased and supported trainees’ transition from the base to the GP sector.

3.5.1 Commencing GP placements
Analysis showed the importance of GP sites understanding the pre‐registration trainee role, in terms of competence and scope of practice as non‐registered healthcare professionals. Trainees believed that GP staff were prepared for them to start their placement at the practice, but were commonly unclear about a trainee pharmacist's capabilities (i.e. skills and knowledge). This meant that clinical and non‐clinical staff had to spend time initially to better understand what trainees could be expected to do in GP.
“I don’t know if they knew what they wanted me to do…and I didn’t really fit anywhere, but as time went on, obviously they figured out what I can do, what I’m comfortable doing, what I’m not comfortable doing, and therefore obviously created a template around me, and that will feel like I’m contributing to the team”. (Site 6, community pharmacy, trainee – split day)

3.5.2 Supporting trainees’ transition to GP sector
When first entering their GP setting, trainees needed to adapt to the new work environment and build rapport with staff, with a range of factors easing the transition and creating a supportive learning environment. A positive welcome and an effective induction covering policies, procedures, and mandatory training were vital. To begin with, trainees spent time shadowing non‐clinical staff in order to get to know the IT system, how to book patient appointments, scan in clinic letters, and refer patients on, although trainees did not immediately recognise the value of shadowing:
“The feedback I got from the trainee was that she initially didn't understand why she needed to do those things, because it was an administrative task, it was something that a receptionist does. But after discussion she understood the purpose of doing those tasks is to get a wider understanding of how everything fits together in general practice, how things are triaged, how people end up in certain clinics and once that was explained to her she appreciated the task in hand a bit better”. (Site 4, hospital, GP tutor – multiple blocks)

3.6 Phase 2: Learning the ropes
This relates to the phase in the supervision model which supports trainees’ gradual transition from shadowing to more independent clinical practice, by starting to perform activities.

3.6.1 Activities undertaken by trainees and supervision to support work‐based learning
Following the induction/shadowing period, trainees performed a range of activities that gradually increased in complexity, progressing from technical and administrative tasks (e.g. medication queries, medication reconciliations) to clinical tasks (e.g. medication reviews, basic clinical assessments). Supervision also changed over time, depending on a trainee's confidence and competence and the nature of the activity.
With time, trainees became more capable of performing clerical tasks, audits, dealing with different kinds of medication queries on the telephone (e.g. queries about patient medications, repeat prescriptions), and reconciling medications for patients recently discharged from hospital. It took trainees time to undertake patient‐facing activities such as clinical assessments and medication reviews. GP tutors supported trainees to gradually take on an increasingly active role. In the beginning, trainees would observe their GP tutors undertake medication reviews. In preparation, tutors asked trainees to go through a patient's medicines, identify any problems, and consider potential changes and discussion points for the patient.
“We watched the pharmacist do medication reviews and then he’d kind of give us patients that were coming in and research into the problems they might be having; going through their medication list, picking out any kind of health thing we want to do. It was kind of doing what they’re doing but in the prep beforehand, obviously, because we weren’t experienced enough to do it ourselves”. (Site 2, hospital, trainee 2 – split weeks)
Following a period of observation and trainees developing their clinical skills (Table ), supervision progressed to GP tutors selecting patients prescribed a single medication, or who required a single chronic disease medication review, and asking trainees to consult under supervision.
“I’ll pick out at least one or two patients from that list for them to actually do the review with me sitting in with them. So they’re starting to do the consultation skills, they might have to do a blood pressure check, they might have to do a peak flow”. (Site 3, hospital, GP tutor – single block)

3.7 Phase 3: Practice
The practice phase of the implementation model relates to when trainees undertake more complex medication reviews and clinical assessments, underpinned by a medical education supervision model.

3.7.1 Pre‐brief to debrief (in presence of patient)
As trainees learned how to apply their clinical knowledge, they moved to providing face‐to‐face medication reviews more independently, with most GP tutors basing their approach on that used with undergraduate medical students:
“We’ve used the same structure as what we would do for the undergraduate medical students…. where he will see a patient and we’ll protect some time straight after, you know, for the supervisor which is myself. Then he’ll see the next patient and then there’ll be some protected time to debrief in front of the patient. So, we’ve used the same for the pre‐reg pharmacists and that seems to work really well because then he’s got confidence that if there’s something he’s unsure about, there’s going to be somebody, you know, there straightaway for him to handover to”. (Site 7, community pharmacy, GP tutor – multiple blocks)
Whilst trainees learned different consultation styles and refined their clinical skills through also observing nurses and GPs during clinics, they reported very limited engagement with trainees from other healthcare professions.

3.8 Feedback and assessment during GP placements
Trainees felt their GP tutors were very supportive and approachable, and reported having open and regular communication. GP tutors facilitated trainees’ learning and development by providing learning opportunities and formative feedback. This involved GP tutors discussing key learning objectives for activities, asking trainees thought‐provoking questions, and signposting resources for self‐study.
Having a shared/joint approach between sites to supporting trainees to achieve intended learning outcomes was important. However, some GP and base tutors strongly believed that in future, GP placements needed to be underpinned by a framework for assessing trainees’ competence to undertake patient‐facing activities. Furthermore, base and GP tutors sought reassurance that they were providing the GP placement appropriately, particularly as this type of cross‐sector placement was still in its infancy. “…there’s no competency framework for pre‐regs, so this is where we struggled a bit. But it’s a case of how many times do you get them to check a temperature or listen to a chest or do a peak flow before you can say that they’re competent to do it on their own, given the fact they did it for four to five years as part of the undergraduate degree as well.” (Site 7, community pharmacy, GP tutor – multiple blocks) GP tutors at all of the study sites only discussed the formative assessment tools (Appendix ) when prompted. It became clear that tutors either used these tools rarely or not at all. 3.9 Placement outcomes When trainees and base and GP tutors were asked about the benefits and drawbacks of a GP placement, all thought that trainees could apply the knowledge gained at university in practice, and that their consultation and clinical skills significantly improved. “With consultation skills, for pharmacists anyway I feel it’s something we don’t do enough of at university… you never really develop how to speak to a real person in front of you. So I think that’s an important part of what I try and do here is to develop those skills, because I think they’re the ones that we’re missing as pharmacists. And it is something that you have to develop your own way of consulting. So you can watch other people and see how they do it, but you need to develop your own way and your own confidence, and it’s nice to see that over the 13 weeks you see that starting to develop”. (Site 3, hospital, GP tutor – single block) All participants agreed that experiencing two different sectors produced a well‐rounded trainee pharmacist who could work in both sectors. Cross‐sector working also enabled trainees to be flexible/adaptable, learn new skills quickly, and form new relationships with different members of the multidisciplinary team. Trainees in both types of pairings gained a better understanding of the patient pathway across different care settings, and they appreciated the importance of good communication between settings. “I think it gives you a really good holistic view of healthcare, in that I think I’m now much more able to understand a patient’s journey from GP to hospital. But I think the bigger benefit of that actually is me understanding the importance of communication between the two sectors. […] In hospital you are told to make sure your discharge summaries are clear, but now I’ve actually seen the other end of it and had to fix those things”. (Site 1, hospital, trainee – single block) “Now I can work in two different places quite seamlessly. I think you learn to be a bit more flexible in your working and adapt in that sense as to what you’re doing on a daily basis. I think as well as that it helps that you’ve seen the whole process of primary care really – well, almost anyway – to see how medications are prescribed, reauthorized, sent across to the pharmacy and then dispensed.”. 
(Site 7, community pharmacy, trainee – multiple blocks)
As placements progressed, trainees and GP tutors felt that trainees became valued members of the team who helped ease some of the GP workload:
“At first I did kind of feel like I was a weight or a burden to obviously the GP, because I had to be taught everything from the beginning, but as time went on, I do feel like I’m being invited more, and people are coming to me more and asking me, can you help with this, or can you help with that issue.”. (Site 6, community pharmacy, trainee – split day)
Most tutors felt that the time commitment and procedures needed to run a cross‐sector placement were similar to those for single‐sector training. Trainees were supernumerary, so the impact on day‐to‐day practice was minimal. What created some difficulty was a lack of flexibility in the delivery/organisation of hospital/GP placements, which meant that hospital/GP trainees were expected to complete the same logs, assessments, etc. as those undertaking single‐sector training:
“But I found that in hospital mainly I would be behind in a lot of things. So, for example, my dispensing competencies. Because I had GP in it the way that they structured my pre reg year they kind of cut certain rotations that normally for the other previous years would be two weeks, now mine is one week, or it would be four weeks, now mine is three weeks. But they’ve kind of kept the same expectations as if I was there the whole time. So because of that I found that I struggled in hospital because I have limited time to do something”. (Site 4, hospital, trainee – multiple blocks)
Split community pharmacy/GP trainees and tutors were more concerned about trainees missing opportunities to learn the management side of community pharmacy (i.e. how to run a branch and manage people).
Phase 3: Practice

The practice phase of the implementation model relates to when trainees undertake more complex medication reviews and clinical assessments, underpinned by a medical education supervision model.

3.7.1 Pre-brief to debrief (in presence of patient)

As trainees learned how to apply their clinical knowledge, they moved to providing face-to-face medication reviews more independently, with most GP tutors basing their approach on that used with undergraduate medical students:

“We’ve used the same structure as what we would do for the undergraduate medical students…. where he will see a patient and we’ll protect some time straight after, you know, for the supervisor which is myself. Then he’ll see the next patient and then there’ll be some protected time to debrief in front of the patient. So, we’ve used the same for the pre-reg pharmacists and that seems to work really well because then he’s got confidence that if there’s something he’s unsure about, there’s going to be somebody, you know, there straightaway for him to handover to”. (Site 7, community pharmacy, GP tutor – multiple blocks)

Whilst trainees learned different consultation styles and refined their clinical skills by also observing nurses and GPs during clinics, they reported very limited engagement with trainees from other healthcare professions.
Feedback and assessment during GP placements

Trainees felt their GP tutors were very supportive and approachable, and reported having open and regular communication. GP tutors facilitated trainees’ learning and development by providing learning opportunities and formative feedback. This involved GP tutors discussing key learning objectives for activities; asking trainees thought-provoking questions; and signposting to resources for self-study. Having a shared approach between sites to supporting trainees to achieve the intended learning outcomes was important. However, some GP and base tutors strongly believed that, in future, GP placements needed to be underpinned by a framework for assessing trainees’ competence to undertake patient-facing activities. Furthermore, base and GP tutors sought reassurance that they were providing the GP placement appropriately, particularly as this type of cross-sector placement was still in its infancy.

“…there’s no competency framework for pre-regs, so this is where we struggled a bit. But it’s a case of how many times do you get them to check a temperature or listen to a chest or do a peak flow before you can say that they’re competent to do it on their own, given the fact they did it for four to five years as part of the undergraduate degree as well.” (Site 7, community pharmacy, GP tutor – multiple blocks)

GP tutors at all of the study sites only discussed the formative assessment tools (Appendix ) when prompted. It became clear that tutors either used these tools rarely or not at all.

Placement outcomes

When trainees and base and GP tutors were asked about the benefits and drawbacks of a GP placement, all thought that trainees could apply the knowledge gained at university in practice, and that their consultation and clinical skills significantly improved.

“With consultation skills, for pharmacists anyway I feel it’s something we don’t do enough of at university… you never really develop how to speak to a real person in front of you. So I think that’s an important part of what I try and do here is to develop those skills, because I think they’re the ones that we’re missing as pharmacists. And it is something that you have to develop your own way of consulting.
So you can watch other people and see how they do it, but you need to develop your own way and your own confidence, and it’s nice to see that over the 13 weeks you see that starting to develop”. (Site 3, hospital, GP tutor – single block) All participants agreed that experiencing two different sectors produced a well‐rounded trainee pharmacist who could work in both sectors. Cross‐sector working also enabled trainees to be flexible/adaptable, learn new skills quickly, and form new relationships with different members of the multidisciplinary team. Trainees in both types of pairings gained a better understanding of the patient pathway across different care settings, and they appreciated the importance of good communication between settings. “I think it gives you a really good holistic view of healthcare, in that I think I’m now much more able to understand a patient’s journey from GP to hospital. But I think the bigger benefit of that actually is me understanding the importance of communication between the two sectors. […] In hospital you are told to make sure your discharge summaries are clear, but now I’ve actually seen the other end of it and had to fix those things”. (Site 1, hospital, trainee – single block) “Now I can work in two different places quite seamlessly. I think you learn to be a bit more flexible in your working and adapt in that sense as to what you’re doing on a daily basis. I think as well as that it helps that you’ve seen the whole process of primary care really – well, almost anyway – to see how medications are prescribed, reauthorized, sent across to the pharmacy and then dispensed.”. (Site 7, community pharmacy, trainee – multiple blocks) As placements progressed, trainees and GP tutors felt they became a valued member of the team who helped ease some of the GP workload: “At first I did kind of feel like I was a weight or a burden to obviously the GP, because I had to be taught everything from the beginning, but as time went on, I do feel like I’m being invited more, and people are coming to me more and asking me, can you help with this, or can you help with that issue.”. (Site 6, community pharmacy, trainee – split day) Most tutors felt that time commitment and procedure to run a cross‐sector placement was similar to single‐sector training. Trainees were supernumerary, so impact on day‐to‐day practice was minimal. What created some difficulty was a lack of flexibility in delivery/organisation of hospital/GP placements, which meant that hospital/GP trainees were expected to complete the same logs, assessments, etc. as those undertaking single‐sector training: “But I found that in hospital mainly I would be behind in a lot of things. So, for example, my dispensing competencies. Because I had GP in it the way that they structured my pre reg year they kind of cut certain rotations that normally for the other previous years would be two weeks, now mine is one week, or it would be four weeks, now mine is three weeks. But they’ve kind of kept the same expectations as if I was there the whole time. So because of that I found that I struggled in hospital because I have limited time to do something. (Site 4, hospital, trainee – multiple blocks) Split community pharmacy/GP trainees and tutors were more concerned about trainees missing opportunities to learn the management side of community pharmacy (i.e. how to run a branch and manage people). 
DISCUSSION This study explored views and experiences of cross‐sector GP/community and GP/hospital pre‐registration pharmacy placements, with a view to make recommendations for how to design and deliver multi‐sector learning. The study used a qualitative triad (dyad) approach involving 11 study sites, whereby the trainee and their tutor(s) were interviewed. Findings from this study have been applied to design a model (Figure ) to inform policy makers in relation to implementation of cross‐sector pre‐registration trainee placements in GP. Key factors to consider when rolling out this type of placement more widely include: good operational planning of GP placements and appropriate induction; collaborative supervision grounded in effective communication and working relationship between base and GP tutors; learner‐centred and well‐supervised workplace learning in a supportive GP environment with appropriate opportunities for trainees to learn and harness skills; and clear integration of GP placements and intended learning outcomes/competencies across the whole training year. Our findings indicate that GP placements should be progressive, increasing in complexity from shadowing and observation, onto simple tasks to application of consultation and clinical skills. This is consistent with medical supervision (Merritt et al., ), whereby learning should start with shadowing and observing, and be followed by incremental increases in complexity and responsibility/autonomy in practice. Tutor supervision needs to align with such gradual and incremental progression, being very direct initially and gradually moving to a model of pre‐ and de‐briefing. In our study, we have shown how this then enabled trainees to gradually, safely, and confidently take an increasingly independent (yet supported) approach to their clinical, patient‐facing (and eventually autonomous) practice. Regular contact and meaningful feedback by GP tutors along with both planned/formal and opportunistic/informal learning were found to be essential to support this progression (Haynes et al., ). In this study, whilst tutors provided informal feedback to trainees, formative assessment tools were used minimally. Drawing on evidence from medical education, formative assessment tools promote active, learner‐centred learning, accompanied by feedback from supervisors, and are perceived as having a positive effect on practice (Gooding et al., ; Preston et al., ; Thistlethwaite, ). Furthermore, incorporating such assessment tools into a more structured training programme in future would allow for formal assessment of trainees’ competencies to undertake patient‐facing activities in GP. Clear requirements will also ensure the expectations are well defined for both trainees and their tutors, and that set standards ensure all trainees experience equal and equitable access to a high‐quality learning experience. Similar to previous research (Christou et al., ; Gray, ), GP placements involved gradual progression, which started with an effective induction period to ease the transition into (and understanding of) GP. This study confirmed the importance of GP tutors being pharmacists, to role‐model and support, and bridge the understanding of a clinical pharmacist's scope of practice amongst the GP team (Christou et al., ; Gray, ). The reasons for trainees starting with observing non‐clinical staff need to be explained, so that trainees understand their relevance. 
A joined-up approach is important, recognising that the GP placement is part of 12 months’ pre-registration training, with GPhC standards/competences needing to be achieved over the total duration. Each partner in the base–GP pairing needs to recognise the transferability of the skills developed rather than being concerned about trainees spending less time in any one sector. To facilitate such a joined-up approach, good and regular communication and handover between the base and GP tutors, and indeed a co-ordinated approach to supervision, are important. The importance of effective engagement and support between both tutors as a catalyst for better trainee integration within GP teams has been highlighted in previous evaluations (Christou et al., ; NHS Health Education England, ). Previous studies suggest that 26-week GP placements are beneficial in developing pre-registration trainees’ clinical knowledge and confidence (NHS Health Education England, ), whereas 4–8-week GP placements limit trainees’ opportunities to conduct supervised patient-facing activities (Christou et al., ). Evidence from medical education suggests that placements longer than 8 weeks enable learners to better integrate into multidisciplinary teams, develop more autonomy, and undertake more complex tasks (Thistlethwaite et al., ). Our findings suggest that 13 weeks in GP is an appropriate minimum duration, whilst 26 weeks provided more opportunities for potentially more complex clinical and consultation skills learning. A longer duration was considered particularly welcome for community pharmacy. By purposively sampling study sites on the basis of key situational variables, our study demonstrated that all models of placement structure (block/split week) supported trainees’ learning and development. This was because flexibility in set-up allowed pairings to establish what was most suitable for their local situation, and it enabled placement sites to create a learning environment that was learner-centred. There did, however, appear to be a preference for block placements in hospital pairings, in effect turning a GP placement into one rotation. Split days/weeks appeared to be favoured in community pharmacy pairings, particularly if the pharmacy and GP practice were located in relatively close proximity. One of the main benefits of GP placements appeared to be trainees’ development of consultation and clinical assessment skills, which appear more difficult to achieve in both hospital and community pharmacy settings (Bullen et al., ; Jee et al., ). The GPhC introduced changes to the initial education and training of pharmacists in January 2021, which include replacing the pre-registration year with a foundation training year that will follow a revised 4-year MPharm programme. The intention is to ensure pharmacists are equipped for their future roles, with revised learning outcomes ensuring they gain the skills, knowledge, and attributes needed to be independent prescriber-ready at the point of registration. In light of these new GPhC standards and their focus on clinical and patient-centred skills (General Pharmaceutical Council, ), GP placements will be critical to support the development of these competences and capabilities. Clarity on which pre-registration trainee competencies should be achieved during GP placements is also important, particularly as previous research indicates that pre-registration learning in community pharmacy and hospital settings differs (Bullen et al., ; Jee et al., , ).
We suggest a broader governance framework with minimum expectations to ensure consistency across the whole 12 months of foundation training, with the need for standardised processes across different placements also recognised internationally (Lucas et al., ). Policy makers may also consider placements in all three sectors (hospital, community pharmacy, and GP).

4.1 Strengths and limitations

Using a dyad/triad sampling approach enabled data triangulation and generated a multi-faceted understanding of factors impacting implementation of cross-sector GP placements. To the authors’ knowledge, this is the first national evaluation of cross-sector pre-registration pharmacist GP placements in England. Findings from this study are of particular current interest and importance due to the changes in primary care service provision that have taken place, the resultant greater opportunities for pharmacists in primary care, and the upcoming changes to undergraduate and foundation education and training. A key limitation was potential self-selection bias, which means findings may be more positive than they would otherwise have been. Furthermore, this qualitative study only represents the views of those who participated, and findings may be somewhat limited in their generalisability. It is also important to acknowledge that other countries will have differences in models of primary care service delivery and training of pharmacists. Therefore, further research is needed to determine the feasibility of implementing our cross-sector training model within different countries and contexts.

CONCLUSION

This study evaluated the implementation of cross-sector pre-registration placements in GP, and identified barriers to, and enablers of, ‘successful’ implementation. Key attributes of a successful pre-registration cross-sector training experience were identified and framed according to an implementation model which can inform policy reforms, including the new GPhC standards for the initial education and training of pharmacists and their focus on clinical and patient-centred skills. After first piloting our implementation model through a feasibility study, it could also be applied by countries with similar advancements in pharmacy education and training. This study received ethics approval from The University of Manchester Research Ethics Committee (Ref no.
2020–7914–16794) and NHS Health Research Authority (Ref no. NHS001659). Written or verbal consent to participate in the study was obtained from each participant prior to starting data collection.
Effect of binocular disparity on learning anatomy with stereoscopic augmented reality visualization: A double center randomized controlled trial
Anatomical knowledge has been reported to be insufficient among medical students and junior doctors, who still experience difficulties in translating the acquired anatomical knowledge into clinical practice (McKeown et al., ; Spielman & Oliver, ; Prince et al., ; Bergman et al., ). The ability to translate this knowledge highly depends on their level of visual‐spatial abilities. It is defined as the ability to construct visual‐spatial (three‐dimensional [3D]) mental representations and mentally manipulate them (Gordon, ). Previous research has shown that visual‐spatial abilities are associated with anatomy knowledge assessment and technical skills assessment in the early phases of surgical training (Langlois et al., , ). Three‐dimensional visualization technology (3DVT) has a great potential to fill this gap, especially in students with lower visual‐spatial abilities (Yammine & Violato, ; Peterson & Mlynarczyk, ). Its contribution is becoming even more necessary in times of decreased teaching hours of anatomy and exposure to traditional teaching methods, such as cadaveric dissections (Drake et al., ; Bergman et al., ; Bergman et al., ; Drake et al., ; McBride & Drake, ; Holda et al., ; Rockarts et al., ). However, to know whether 3DVT is effective, is currently not enough. There is a need to know how this technology works to be able to implement it in everyone's unique educational setting (Cook, ). Stereoscopic versus monoscopic three‐dimensional visualization technology In real life, stereoscopic vision is obtained due to positioning of the human eyes in a way that generates two slightly different retinal images of an object, also referred to as binocular disparity (Cutting & Vishton, ). The same effect can be mimicked within 3DVT by presenting a slightly shifted and rotated image to the right and left eye. Stereoscopic vision can be obtained by supportive devices such as autostereoscopic displays e.g., Alioscopy 3D Display (Alioscopy, Paris, France), anaglyphic or polarized glasses, or by head‐mounted displays e.g., HoloLens™ (Microsoft Corp., Redmond, WA), Oculus Rift™ (Oculus VR, Menlo Park, CA), or HTC VIVE™ (High Tech Computer Corp., New Taipei City, Taiwan). Hololens™ is used to create interactive augmented reality (AR), also referred to as mixed reality. Oculus Rift™ and HTC VIVE™ are predominantly used to create virtual reality environments. A binocular vision of the viewer, though, is required to perceive the obtained visual depth. In the absence of stereoscopic vision, 3D effect is mimicked by monocular cues, such as shading, coloring, relative size and motion parallax (Johnston et al., ). The examples of monoscopic 3DVT include 3D anatomical models that can be explored from different angles on a computer, tablet or phone (Moro et al., ). Distinction between stereoscopic and monoscopic modalities within 3DVT is essential to make since different processes are involved. Research has shown that recognition of digital 3D objects appears to be greater when objects are presented stereoscopically (Kytö et al., ; Martinez et al., ; Railo et al., ; Anderson et al., ). More importantly, the type of modality can significantly affect learning. Monoscopic 3DVT has been demonstrated to have disadvantages for students with lower visual‐spatial abilities (Garg et al., , ; ; Levinson et al., ; Naaz, ; Bogomolova et al., ). The disadvantages are explained by the ability‐as‐enhancer hypothesis within the cognitive load theory (Hegarty & Sims, ; Mayer, ). 
Initially, it has been hypothesized that 3D objects are remembered as key view-based two-dimensional (2D) images (Garg et al., ; Huk, ; Levinson et al., ; Khot et al., ). Consequently, when an unfamiliar 3D object is viewed from multiple angles, an increase in cognitive load occurs while generating a proper mental representation of the 3D object. During this process, individuals with higher visual-spatial abilities are able to devote more cognitive resources to building mental connections, while students with lower visual-spatial abilities become cognitively overloaded (Garg et al., ; Huk, ). The latter leads to underperformance among students with lower visual-spatial abilities. However, as research has shown, with stereoscopic 3DVT, students with lower visual-spatial abilities are able to achieve levels of performance comparable to those of students with higher visual-spatial abilities (Cui et al., ). This can be explained by the fact that the mental 3D representations of the object are already built and provided by the stereoscopic projection and perception. Consequently, the mental steps required to build a 3D representation can be skipped, leaving a sufficient amount of cognitive resources available. In this way, students with lower visual-spatial abilities are able to allocate these resources to learning.

The role of stereopsis

In health care, the benefits of stereoscopic visualization within 3D technologies have been recognized for years (Kang et al., ; Cutolo et al., , ; Sommer et al., ; Birt et al., ). Development and utilization of stereoscopic 3DVT are still growing, especially in the surgical field. Examples include preoperative planning and identification of tumors with stereoscopic AR (Cutolo et al., ; Kumara et al., ; Checcucci et al., ). Other examples include stereoscopic visualization during minimally invasive surgery, where a stereoscopic view of the surgical field can improve spatial understanding and orientation during laparoscopic procedures (Kang et al., ; Schwab et al., ). Stereoscopic visualization has even been shown to shorten the operative time of laparoscopic gastrectomy by reducing the intracorporeal dissection time (Itanini et al., ). The beneficial effect of stereopsis on learning anatomy has recently been demonstrated in a comprehensive systematic review and meta-analysis (Bogomolova et al., ). In that meta-analysis, comparisons between studies were made within a single level of instructional design, i.e., stereopsis was isolated as the only true manipulated element in the experimental design. The positive effect of stereopsis was demonstrated across different types of 3D technologies combined, predominantly VR headsets and 3D shutter glasses for desktop applications. How the learning experience is affected by a particular type of stereoscopic 3DVT remains a topic for further exploration.

Stereoscopic augmented reality in anatomy education

Stereoscopic AR is a new generation of 3DVT that combines stereoscopic visualization of 3D computer-generated objects with the physical environment. The main distinguishing feature from other types of AR is the ability to provide stereoscopic vision, i.e., to perceive the anatomical model in real 3D. Additionally, it provides the ability to walk around the model and explore it from all possible angles without losing the sense of the user's own environment. This view can be obtained with, for example, the HoloLens®, a head-mounted display from Microsoft (Supporting Information ).
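To make the distinction between stereoscopic and monoscopic presentation concrete, the sketch below illustrates in simplified form how a head-mounted display derives two eye images from one virtual scene. It is a generic pinhole-camera illustration rather than HoloLens code, and the interpupillary distance, viewing geometry and example numbers are assumptions chosen for the illustration; setting the interpupillary distance to zero yields the identical left and right images that characterize a monoscopic presentation.

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd_m=0.063):
    """Left and right eye positions around the head midpoint.

    ipd_m is the assumed interpupillary distance in metres; ipd_m = 0
    collapses both eyes onto the same point, i.e. a monoscopic view.
    """
    half = 0.5 * ipd_m * np.asarray(right_dir, dtype=float)
    return head_pos - half, head_pos + half

def project(point, eye, focal=1.0):
    """Pinhole projection of a world point onto an eye's image plane.

    For brevity the viewing direction is fixed along the world -z axis.
    """
    rel = np.asarray(point, dtype=float) - eye
    depth = -rel[2]                          # distance along the view axis
    return focal * rel[0] / depth, focal * rel[1] / depth

head = np.array([0.0, 1.6, 0.0])             # viewer's head midpoint
right = np.array([1.0, 0.0, 0.0])            # viewer's right direction
point = np.array([0.05, 1.5, -1.0])          # a point on a virtual model ~1 m away

for ipd in (0.063, 0.0):                     # stereoscopic vs. monoscopic presentation
    left_eye, right_eye = eye_positions(head, right, ipd)
    print(ipd, project(point, left_eye), project(point, right_eye))
```

The horizontal difference between the two projected positions is the binocular disparity that the visual system interprets as depth; with the offset set to zero it vanishes, while monocular cues such as occlusion, shading and motion parallax are unaffected.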
In a previous study, the authors evaluated the effectiveness of stereoscopic 3D AR visualization in learning anatomy of the lower leg among medical undergraduates (Bogomolova et al., ). Learning with a stereoscopic 3D AR model was more effective than learning with a monoscopic 3D desktop model. Interestingly, the observed positive learning effect was only present among students with lower visual-spatial abilities. Stereoscopic vision was hypothesized to be one of the distinguishing features of the intervention modality that could explain these differences. However, since the comparisons were made within different levels of instructional design, i.e., stereoscopic vision was not isolated as the only true manipulated element, the actual effect of stereoscopic vision remained unclear. A similar study design approach was used by Moro and colleagues ( ), who compared the effectiveness of the HoloLens with a mobile-based AR environment. Although both learning modes were effective in terms of acquired anatomical knowledge, comparisons were still made within different levels of instructional design. In another recent study of the role of stereopsis in 3DVT, Wainman and colleagues isolated binocular disparity by covering the non-dominant eye of participants. The authors reported a positive effect of stereoscopic vision in VR, but not in AR (Wainman et al., ). Although this was a simple and vivid way of isolating stereopsis, participants in the control group remained aware of their condition, which could have influenced the outcomes. Additionally, the different effect sizes of stereopsis in VR and AR suggest that the type of technology is decisive for the learning effect caused by stereoscopic vision.

Objectives and aims

Based on the considerations described above and lessons learned from previous research, this study aimed to evaluate the role of binocular disparity in a stereoscopic 3D AR environment within a single level of instructional design. Therefore, the primary objective was to evaluate whether learning with a stereoscopic view of a 3D anatomical model of the lower extremity was more effective than learning with a monoscopic 3D view of the same model among medical undergraduates. The secondary objectives were to compare the perceived cognitive load among groups and to evaluate whether visual-spatial abilities would modify the outcomes. The authors hypothesized that learning within a stereoscopic 3D AR environment would be more effective than learning within a monoscopic 3D AR environment. They also hypothesized that the perceived cognitive load in the stereoscopic 3D view group would be lower than in the monoscopic 3D view group, and that students with lower visual-spatial abilities would benefit most from the stereoscopic 3D view of the model.
Study design

A single-blinded, double-center randomized controlled trial was conducted at the Leiden University Medical Center (LUMC) and the Radboud University Medical Center (Radboudumc), the Netherlands. The study was conducted within a single level of instructional design, i.e., isolating binocular disparity as the only true manipulated element (Figure ). The study was approved by the Netherlands Association for Medical Education Ethical Review Board (NERB case number: 2019.5.8).

Study population

Participants were first-year undergraduate students of Medicine and Biomedical Sciences with no prior knowledge of lower extremity anatomy. Baseline knowledge was not assessed, to avoid extra burden for students and possible influence on learning during the intervention and performance on the post-tests (Cook & Beckman, ). Participation was voluntary and informed written consent was obtained from all participants. Participation did not interfere with the curriculum and the assessment results did not affect students' academic grades. Participants received financial compensation on completion of the experiment.

Randomization and blinding of participants

Participants were randomly allocated to either the stereoscopic 3D view or the monoscopic 3D view group using an Excel Random Group Generator (Microsoft Excel for Office 365 MSO, version 2012). Participants were not aware of the distinction between stereoscopic and monoscopic 3D views and remained blinded to the type of condition during the entire experiment. The intended goal of the study and the individual allocation to study arms were clarified and debriefed directly after the experiment.

Educational interventions

An interactive AR application, DynamicAnatomy for Microsoft HoloLens® version 1.0 (Microsoft Corp., Redmond, WA), was developed in the Department of Anatomy at LUMC and the Centre for Innovation of Leiden University. The application represented a dynamic and fully interactive stereoscopic 3D model of the lower extremity. Users perceived the 3D model as a virtual object in their physical space without losing the sense of their own physical environment. The object-centered view, i.e., dynamic exploration, enabled learners to walk around the model and explore it from all possible angles. Active interaction included size adjustments, showing or hiding anatomical structures by group or individually, visual and auditory feedback on structures and anatomical layers, and animation of the ankle movements. The anatomical layers included the musculoskeletal, connective tissue, and neuro-vascular systems. During this experiment, study participants studied the musculoskeletal system. Prior to the experiment, participants completed a ten-minute training module (without anatomical content) to become familiar with the use of the application and device.
The module consisted of a practical exploration of a house where students needed to remove and add various content including roof, walls and doors. In the intervention group, the 3D model of the lower extremity was presented and perceived stereoscopically as intended by the supportive AR device. In the control group, binocular disparity was eliminated technically by projecting an identical, i.e., non‐shifted and non‐rotated, image to both eyes. This adjustment resulted in a monoscopic view of the identical 3D anatomical model. Students observed identical model within an identical interface, as they would on a 2D screen of a computer. The only difference with the computer modality is that students were able to walk around the monoscopic model and still perceive it from different angles. Therefore, binocular disparity was isolated as the only true manipulated element in this experimental design. All other features of the AR application described above remained available and identical in both conditions. Baseline characteristics Informed consent and baseline questionnaire were administered prior to the start of the experiment. Stereovision of participants was measured by a Random Dot 3 ‐ LEA Symbols ® Stereoacuity Test (Vision Assessment Corp., Elk Grove Village, IL) prior to the experiment to identify individuals with absent stereovision. Students were asked to identify four symbols in four text boxes while wearing polarization glasses. Visual‐spatial abilities assessment Visual‐spatial abilities were assessed prior to the start of the learning session. Mental visualization and rotation, as the main components of visual‐spatial abilities, were assessed by the 24‐item mental rotation test (MRT), previously validated by Vandenberg and Kuse ( ) and redrawn by Peters and colleagues ( ). This psychometric test is being widely used in the assessment of visual‐spatial abilities and has repeatedly shown its positive association with anatomy learning and assessment (Guillot et al., ; Langlois et al., ). The post‐hoc level of internal consistency (Cronbach's alpha) of the MRT test in this study was 0.94. The duration of this test was ten minutes without intervals. Learning session Participants received a handout with a description of the learning goals and instructional activities. The development of learning goals and instructions was based on the constructive alignment theory to ensure alignment between the intended learning outcomes, instructional activities and knowledge assessment (Bogomolova et al., ) (Supporting Information ). Learning goals were formulated and organized according to Bloom's Taxonomy of Learning Objectives (Bloom et al., ). An independent expert verified the alignment between the learning goals and the assessment according to the constructive alignment theory and Bloom's Taxonomy of Learning Objectives. Learning goals included memorization of the names of bones and muscles, understanding the function of muscles based on their origin and insertion, and location and organization of these structures in relation to each other. Duration of the learning session was 45 minutes. Cognitive load assessment Cognitive load was measured by the validated NASA‐TLX questionnaire immediately after the session (Hart & Staveland, ) (Supporting Information ). The NASA‐TLX questionnaire is a subjective, multidimensional assessment instrument for perceived workload of task, in this case the workload required to study the anatomy of lower extremity (Hart & Staveland, ). 
The items included mental demand, physical demand, temporal demand, performance, effort and frustration level. Response options ranged from low (0 points) to high (10 points). The total score was calculated according to the prescriptions of the questionnaire and also ranged between 0 and 10 points.

Written anatomy knowledge test

A previously validated 30-item knowledge test consisted of a combination of 20 extended matching questions and ten open-ended questions (Bogomolova et al., ) (Supporting Information ). Anatomical knowledge was assessed in the factual (i.e., memorization/identification of the names of bones and muscles), functional (i.e., understanding the function of the muscles based on their course, origin and insertion) and spatial (i.e., location and organization of structures in relation to each other) knowledge domains. Content validity was assessed by two experts in the field of anatomy and plastic and reconstructive surgery. The test was piloted among 12 medical students for item clarity. The level of internal consistency (Cronbach's alpha) was 0.78. Duration of the assessment was 30 minutes.

Specimen knowledge test

The plastinated specimen test covered a total of 30 anatomical structures on 12 specimens distributed over ten stations (Supporting Information ). Content validity was assessed by one expert in the field of anatomy. Each station included 3–4 structures that were labeled on one or more specimens. Participants were asked to provide the name of the labeled structures or the type of movement that is initiated by a particular structure. The post-hoc level of internal consistency (Cronbach's alpha) of the test was 0.90. Duration of this assessment was 20 minutes, with a maximum of two minutes per station.

Evaluation of learning experience

Participants' learning experience was evaluated by a standardized self-reported questionnaire (Supporting Information ). The evaluation included items on study time, perceived representativeness of the test questions, perceived knowledge gain, and usability of and satisfaction with the provided study materials. Response options ranged from “very dissatisfied” (1 point) to “very satisfied” (5 points) on a five-point Likert scale.

Statistical analysis

Participants' baseline characteristics were summarized using descriptive statistics. Differences in baseline measurements were assessed with an independent t-test for differences in means and a chi-square test for differences in proportions. Anatomical knowledge was defined as the mean percentage of correct answers on the written knowledge test and the specimen test. Cognitive load was defined as the mean score on the NASA-TLX questionnaire. Differences in outcome measures between groups were assessed with an independent t-test. Additionally, an ANCOVA was performed to measure the effect of the intervention for different levels of visual-spatial abilities by including the interaction term “MRT score × intervention” in the model. MRT score was also included as a covariate to measure its effect on outcomes regardless of intervention. Additional analyses were performed for sex differences. Analyses were performed using the SPSS statistical software package, version 23.0 for Windows (IBM Corp., Armonk, NY). Statistical significance was determined at the level of P < 0.05.
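As an illustration of the analyses just described, the sketch below fits the same kind of models on a simulated data set. The variable names and data are hypothetical stand-ins rather than the study data, and the study itself used SPSS rather than Python; the t-test compares the two conditions directly, and the ordinary least-squares model includes the MRT score, the condition and their product, mirroring the “MRT score × intervention” interaction term.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical stand-in for the trial data: 66 participants, two conditions.
rng = np.random.default_rng(1)
n = 66
df = pd.DataFrame({
    "mrt": rng.integers(0, 25, size=n),              # mental rotation test score (0-24)
    "group": np.repeat(["stereo", "mono"], n // 2),  # allocated condition
})
df["test_score"] = 30 + 1.2 * df["mrt"] + rng.normal(scale=12, size=n)  # % correct

# Independent-samples t-test for the primary group comparison.
stereo = df.loc[df["group"] == "stereo", "test_score"]
mono = df.loc[df["group"] == "mono", "test_score"]
print(stats.ttest_ind(stereo, mono))

# ANCOVA-style OLS model with the MRT x condition interaction term:
# a significant interaction would mean the effect of the condition
# differs across levels of visual-spatial ability.
model = smf.ols("test_score ~ mrt * C(group)", data=df).fit()
print(model.summary().tables[1])
```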
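The internal-consistency coefficients reported for the MRT and the two knowledge tests can be computed from a respondents-by-items score matrix with a few lines; the sketch below uses simulated item scores purely for illustration and is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Simulated example: 66 respondents answering 30 dichotomously scored items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(66, 1))
items = (ability + rng.normal(scale=1.0, size=(66, 30)) > 0).astype(int)
print(round(cronbach_alpha(items), 2))
```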
A total of 66 students were included (Table ). All participants were able to perceive spatial visual depth as measured by the stereoacuity test. MRT scores did not differ between intervention groups. As shown in Figure , participants in the stereoscopic 3D view group performed as well as participants in the monoscopic 3D view group on the written knowledge test (47.9 ± 15.8 vs. 49.1 ± 18.3; P = 0.635). Likewise, no differences were found for each knowledge domain separately (factual: 34.1 ± 19.5 vs. 34.3 ± 19.0; P = 0.970; functional: 33.4 ± 16.4 vs. 31.5 ± 13.7; P = 0.611; spatial: 50.4 ± 15.2 vs. 47.3 ± 13.5; P = 0.384). The percentages of correct answers on the specimen test were not significantly different between groups (43.0 ± 17.9 vs. 46.3 ± 15.1; P = 0.429) (Figure ). The observed similarities between groups on the knowledge tests were reflected in the cognitive load scores, which were similar in both groups (6.2 ± 1.0 vs. 6.2 ± 1.3; P = 0.992) (Figure ). As shown in Table , there were no significant differences in learning experience between the stereoscopic 3D view and monoscopic 3D view groups. All participants enjoyed studying (4.4 ± 0.7 vs. 4.3 ± 0.8; P = 0.492) and reported improved anatomical knowledge of the lower extremity (4.2 ± 0.9 vs. 4.1 ± 0.7; P = 0.502).
Five participants in the stereoscopic 3D group and four in the monoscopic 3D group reported that the device felt heavy on the nose after a longer period of study time (P = 0.794). Headache and nausea were reported by one participant in the stereoscopic 3D group.

The effect of visual-spatial abilities

In both study groups, mean scores on the written knowledge test and for each knowledge domain separately remained similar for all levels of MRT scores, as measured by the interaction term in the ANCOVA analysis (written knowledge test: F(1,62) = 0.51, P = 0.393; factual: F(3,62) = 0.15, P = 0.925; functional: F(3,62) = 1.04, P = 0.381; spatial: F(3,62) = 0.92, P = 0.435). Similar effects were found for the specimen knowledge test (F(1,62) = 0.00, P = 0.998). However, regardless of intervention, MRT scores were significantly and positively associated with the specimen test scores, as shown in Figure (F(1,62) = 9.37, partial η2 = 0.13, P = 0.003). The perceived cognitive load scores remained similar for all levels of visual-spatial abilities in both study groups (F(1,62) = 2.26, P = 0.138). Regardless of intervention, MRT scores were not associated with the perceived cognitive load scores.

ANCOVA analysis for learning experience revealed that participants in the monoscopic 3D view group found the anatomy test questions significantly less representative of the studied material than participants in the stereoscopic 3D view group. This difference was only present among individuals with lower visual-spatial abilities scores (F(1,62) = 2.26, P = 0.044). As an independent variable, visual-spatial abilities scores were significantly and positively associated with the perceived representativeness of the anatomy test questions (P = 0.010) and the subjective improvement in anatomy knowledge of the lower extremity (P < 0.001).

Sex differences

At baseline, males achieved significantly higher MRT scores than females (17.5 ± 4.9 vs. 13.2 ± 4.9; P = 0.001). Both sexes performed equally well on the written anatomical knowledge test (written knowledge test: 52.4 ± 18.9, P = 0.96; factual: 37.9 ± 20.4 vs. 31.5 ± 17.9, P = 0.180; functional: 36.3 ± 17.4 vs. 29.9 ± 12.7, P = 0.091; spatial: 51.4 ± 14.6 vs. 47.1 ± 14.1, P = 0.242). However, males achieved significantly higher scores on the specimen test (51.5 ± 15.8 vs. 40.0 ± 15.6; P = 0.005). Perceived cognitive load remained similar for both sexes (6.2 ± 1.2 vs. 6.1 ± 1.1, P = 0.915).
This study evaluated the effect of binocular disparity on learning anatomy in a stereoscopic 3D AR environment. Against the authors' expectations, no differences were found between the stereoscopic 3D and monoscopic 3D view groups in terms of acquired anatomical knowledge and perceived cognitive load during learning. Visual-spatial abilities, however, were significantly and positively associated with practical anatomical knowledge regardless of intervention. Additionally, visual-spatial abilities were positively associated with the perceived representativeness of anatomy test questions and the subjective improvement in anatomy knowledge of the lower extremity.

Although binocular disparity is generally considered to provide one of the important depth cues in 3D visualization, its exclusive effect on learning and cognitive load was not significant in a stereoscopic 3D AR environment. To the authors' knowledge, only one study, performed by Wainman and colleagues ( ), has evaluated the role of binocular disparity within the same type of technology. Likewise, Wainman and colleagues found no beneficial effect of stereopsis on learning. The only difference between the two studies was the way binocular disparity was eliminated. While in the current study a monoscopic view was obtained technically, by presenting identical images to both eyes, Wainman and colleagues achieved a monocular view by covering the dominant eye of participants with a patch. In addition, Wainman and colleagues compared the effect of binocular disparity in AR to its effect in VR (Wainman et al., 2020). The effect of stereoscopic vision in VR appeared to be significantly greater than in AR. In fact, learning with a stereoscopic 3D model in AR was less effective than in VR. This effect was explained by the various degrees of stereopsis that different types of technologies can generate. On the other hand, the findings suggest that other important depth cues could have compensated for the absence of stereopsis. During the experiment, participants were able to walk around the 3D anatomical model and explore the model from all possible angles, which is unique to a stereoscopic AR environment.
This type of dynamic exploration, also referred to as motion parallax, can provide strong depth information (Rogers & Graham, ). Additional literature searches in the field of neurosciences education revealed that motion parallax can, in some cases, be even more effective than binocular disparity alone (Bradshaw & Rogers, ; Naepflin & Menozzi, ; Aygar et al., ). More interestingly, an interaction between both depth cues can exist (Lankheet & Palmen, ). For instance, subjects have been asked to perform a series of explorative tasks under three depth cue conditions: binocular disparity, motion parallax and a combination of both depth cues (Naepflin & Menozzi, ). The combination of binocular disparity and motion parallax resulted in a proportion of correct answers equal to that of the motion parallax condition (84% vs. 80%; P = 0.231). However, in the absence of motion parallax, the binocular disparity condition yielded significantly fewer correct answers (60% vs. 80%; P < 0.001). Another study found that motion parallax improved performance in recovering the 3D shape of objects in a monoscopic view, but not in a stereoscopic view (Sherman et al., ). Therefore, motion parallax could reasonably have compensated for the absence of binocular disparity and generated a sufficient 3D perception of the monoscopically projected 3D model. Further research is needed to evaluate to what extent motion parallax, alone and in combination with binocular disparity, affects learning.

Another effect of dynamic exploration that could have occurred during this experiment is that of embodied cognition on learning (Oh et al., ; Dickson & Stephens, ; Cherdieu et al., ). Previous research has shown that using gestures and body movements helps students acquire anatomical knowledge. For instance, students who engaged in miming, using representational and metaphorical gestures while learning the functions of the central nervous system, improved their marks by 42% in comparison with didactic learning (Dickson & Stephens, ). A similar concept applies to mimicking specific joint movements in order to memorize them, recall the structures' names and localize them on a visual representation (Cherdieu et al., ). Students in the current experiment also used gestures while dissecting the anatomical layers and structures, which could have helped them memorize structures by repeating similar gestures. Additionally, students tended to move their own leg in a synchronized manner with the animated 3D model. Such engagement could have resulted in embodied learning and contributed to better learning within both modalities.

The effect of visual-spatial abilities

In the current study, anatomical knowledge was tested by both written and practical examinations. Both assessment methods were chosen to ensure a better alignment between learning and assessment of spatial knowledge. Consistent with previous research, visual-spatial abilities were positively associated with anatomical knowledge as measured by the practical specimen test (Langlois et al., ; Roach et al., ). However, visual-spatial abilities did not modify the observed outcomes as expected. Individuals with lower visual-spatial abilities did not show a different learning trajectory with either monoscopic or stereoscopic 3D views of the model. Also, they did not experience significant differences in perceived cognitive load.
This contrasts with the previous body of evidence on an aptitude-treatment effect caused by visual-spatial abilities when learning with different types of 3DVT (Luursema et al., , ; Cui et al., ; Bogomolova et al., ). If motion parallax reasonably compensated for the absence of binocular disparity, as discussed above, this would explain why students with lower visual-spatial abilities performed equally well in both conditions. These individuals were still able to generate proper 3D mental representations of the model within the monoscopic 3D view group and experienced an equal amount of cognitive load during learning. Although the modifying effect of visual-spatial abilities on objective outcomes was not observed in the current study, it did affect the subjective outcomes regarding learning experience. This is particularly interesting, since the monoscopic 3D group with low visual-spatial abilities found the practical assessment items to be less representative of their learning environment than the stereoscopic 3D group did.

Another explanation for the absence of a modifying effect of visual-spatial abilities could lie within the scale of spatial abilities needed for the task at hand. For spatial abilities, a division between small- and large-scale space can be made, with small scale referring to space within arm's length, e.g., tabletop tasks. Large-scale space refers to situations in which locomotion is needed to interact with the spatial environment. As participants were walking around the model, large-scale spatial processing took place. As previously shown, a partial dissociation exists between small- and large-scale spatial abilities (Hegarty et al., ). It could therefore be that the small-scale task of mental rotation used here may not substantially relate to the large-scale spatial task of interacting with the model. Alternatively, large-scale spatial tests, especially those relying on perspective taking, could show the interaction with task performance as hypothesized here.

Lastly, the observed sex differences in visual-spatial abilities scores in favor of males are in line with previous research (Baenninger & Newcombe, ; Langlois et al., ; Uttal et al., ; Nguyen et al., ; Guimarães et al., ). More interestingly, males significantly outperformed females on the specimen test, but not on the written knowledge test. Again, these findings suggest that the practical examination questions rely on visual-spatial abilities more than the written knowledge test questions do. This is further supported by the work of Langlois and colleagues, who reviewed the relationship between visual-spatial abilities tests and anatomy knowledge assessment (Langlois et al., ). The authors found a significant relationship between spatial abilities tests and anatomy knowledge assessed by practical examination, while the relationship between spatial abilities and spatial multiple-choice questions remained unclear. Therefore, both findings suggest that practical examination questions are more reliable in testing spatial anatomical knowledge than multiple-choice questions, even when designed properly. Further research is needed to explore how spatial multiple-choice questions are mentally processed during examination in comparison to practical examination questions.
Limitations of the study

To the authors' knowledge, this was the first single-blinded randomized controlled trial to evaluate the effect of binocular disparity on learning in a 3D AR environment across two academic centers and within a single level of instructional design. Along with the validated measurement instruments, this maximized the internal and external validity of the results. On the other hand, participation was voluntary, and a selection bias could have occurred. The results could have been different if measured within the entire student population. However, the baseline visual-spatial abilities scores among the current study sample bear a strong resemblance to the visual-spatial abilities scores of the entire cohort of first-year medical undergraduates (14.9 vs. 14.4), as measured previously by Vorstenbosch and colleagues ( ). Another limitation was the relatively small sample size. Due to the limited availability of devices, the authors were restricted to a maximum number of participants. It is possible that a much larger sample size could have revealed significant differences between interventions. The possible compensating effect of motion parallax and the effect of large-scale spatial abilities can also be considered potential confounders that may have influenced the internal validity. These new insights can help reveal the exact effect of both factors on learning. It is also important to note that the authors chose not to assess baseline knowledge, to avoid an extra burden for students and a possible influence on learning during the intervention and on performance on the post-tests. In this way, any differences in prior knowledge that could have been present among students were not taken into account. Lastly, the spatial knowledge questions in this study were carefully designed to stimulate mental visualization skills. However, these questions can still be processed without spatial reasoning, or simply guessed when they become too difficult to answer. Consequently, stereoscopic visualization of anatomy would not be that helpful in processing these types of questions.

Future implications

The findings of this study have implications for both research and education. As stated previously, the aptitude-treatment interaction caused by visual-spatial abilities should be taken into account when designing new research, especially when evaluating 3D technologies and their effect on learning. Additionally, the results of this study suggest that stereoscopic visualization can be differently effective depending on the type of technology used. More importantly, the findings suggest that other possible mechanisms are responsible for the acquired 3D effect and the positive effect on learning. Future research should focus on the working mechanisms that explain the effectiveness of stereoscopic 3DVT. Only by knowing why a particular 3D technology works will educators and researchers be able to properly design and implement this tool in medical education.
In summary, binocular disparity alone does not contribute to better learning of anatomy in a stereoscopic 3D AR environment. Motion parallax, enabled by dynamic exploration, should be considered a potentially strong depth cue, alone or in combination with binocular disparity. Regardless of intervention, visual-spatial abilities were significantly and positively associated with the specimen test scores.

The authors have no conflicts of interest to disclose.
Fostering uncertainty tolerance in anatomy education: Lessons learned from how humanities, arts and social science (HASS) educators develop learners' uncertainty tolerance
The Covid-19 pandemic ignited a global collective uncertainty, demonstrating the extant and omnipresent nature of healthcare unknowns. Healthcare-related uncertainties also exist outside of the pandemic context. From clinical presentations to diagnostic interpretation to treatment responses and outcomes, healthcare uncertainties are ubiquitous. How healthcare professionals manage these uncertainties, known as uncertainty tolerance, becomes an essential clinical skill in dynamic, ever-changing healthcare environments. In recognition of this, there is an increasing focus on uncertainty tolerance as a healthcare graduate 'competency' (Osler, ; Geller et al., ; Harden et al., ; Simpson et al., ; Toohey et al., ; Englander et al., ; ACGME, ; GMC, ; Cumming & Ross, ; AAMC, , ), with many calling for evaluation of uncertainty tolerance as part of entrance into healthcare education programs and/or with program progression (Albanese et al., ; Geller, ; ACGME, ; AAMC, ). Despite this desire to foster uncertainty tolerance in healthcare education, understanding of the impact of teaching practices on students' uncertainty tolerance remains embryonic (Moffett et al., ).

Uncertainty tolerance is a psychological construct referring to the way an individual perceives and processes information about ambiguous situations (i.e., stimuli) when confronted by an array of unfamiliar, complex, or incongruent clues (Budner, ; Furnham & Ribchester, ). The increasing desire to integrate uncertainty tolerance teaching practices across healthcare degrees (Luther & Crandall, ; Simpkin & Schwartzstein, ; Cooke & Lemay, ) can be challenging to act on, as guidance on operationalizing and executing teaching practices supportive of uncertainty tolerance development remains limited (Rieckmann, ; Kim & Lee, ; Moffett et al., ). There is some recent research, though, that supports the relationship between uncertainty tolerance and education. In the context of Covid-19 pandemic teaching, university students' uncertainty tolerance was critical for their reported satisfaction during pandemic-related educational changes (Grace et al., ), suggesting that (at minimum) uncertainty tolerance impacts learners' capacity to learn.

Anatomy is often one of the first foundational healthcare sciences students encounter in their professional education, and remains a science topic that students are invested in (Older, ; Moxham & Plaisant, ; Nabil et al., ; Triepels et al., ). Students' preparedness for transitioning into healthcare education, where uncertainty is present, varies widely (Strout et al., ). Some students commencing their healthcare professional degrees are identified as markedly intolerant of uncertainty (Han et al., ). Indeed, students largely appear to respond negatively to the initial phases of anatomy teaching when uncertainties are present (Stephens et al., ), supporting the notion that students entering healthcare education may not yet be prepared for the uncertainties facing them in their future careers. The anatomy education learning environment stimulates uncertainty through human anatomy variations (Willan & Humpherson, ; Wheble & Channon, ; Cullinane & Barry, ), the breadth of anatomical knowledge (Swick, ), and the socio-cultural threshold that students experience through their first anatomy dissections (Stephens et al., ).
Sources of uncertainty are not unique to anatomy education, however, as similar uncertainty stimuli exist across the entirety of healthcare education and future clinical practice (Hillen et al., ; Strout et al., ), justifying uncertainty tolerance as a core healthcare graduate competency (Osler, ; Geller et al., ; Harden et al., ; Simpson et al., ; Toohey et al., ; Englander et al., ; ACGME, ; GMC, ; Cumming & Ross, ; AAMC, , ). Debates abound about the role of healthcare curricula in preparing healthcare students for real-world uncertainties versus teaching "certain" discipline content (White & Williams, ; Ilgen et al., ). For example, healthcare professional course selection processes typically favor those who excel at "single-best-answer" examinations (Sladek et al., ), and anatomy summative assessments mimic this selection by focusing on "rightness" and "wrongness" (Harrison et al., ; Bird et al., ), with the predominant form of anatomy examinations being multiple-choice and "spot" assessments (where students identify tagged structures on images, specimens or models) (Pandey & Zimitat, ). This failure to support and assess learners' uncertainty tolerance may be negatively impacting their transitions to the clinic and healthcare practice. Upon entering clinical rotations, students are confronted with a plethora of ambiguous stimuli and dynamic clinical contexts for which they appear underprepared (Fox, , ; Han et al., ; Gheihman et al., ). While there are calls to improve healthcare learner uncertainty tolerance, a gap still remains in actioning this call (Luther & Crandall, ; Domen, ). This partition between healthcare teaching practices (e.g., content vs. uncertainty) and the realities of future careers filled with uncertainty appears to have detrimental effects on students' wellbeing (Hancock & Mattick, ).

There appear to be many relationships between healthcare practice and healthcare providers' uncertainty tolerance, with medical doctors being the primary focus of much of this research (Strout et al., ). Evidence suggests that healthcare providers' uncertainty tolerance impacts their approaches to ordering diagnostic tests and their use of resources (Lysdahl & Hofmann, ; Strout et al., ), as well as influencing their decision-making processes (Ghosh, ; Lysdahl & Hofmann, ; Burman et al., ). Furthermore, many uncertainty tolerance studies link low uncertainty tolerance to burnout and emotional distress (Lally & Cantillon, ; Kimo Takayesu et al., ; Hancock & Mattick, ), and higher uncertainty tolerance to well-being (Kuhn et al., ; Cooke et al., ). Uncertainty tolerance may also be related to future medical speciality choice (Borracci et al., ), further supporting a potentially important role of uncertainty tolerance in healthcare education and preparation of the future healthcare workforce.

The modern healthcare uncertainty tolerance conceptual model (Hillen et al., ) proposes that an uncertain stimulus is perceived and responded to across three domains (cognitive, emotional, behavioral), with responses ranging along a spectrum from negative to positive. This model includes a step in which the perception, and thus the related responses, can be modulated through so-called "moderators." These moderators are only generally described in the conceptual model (Hillen et al., ), and include factors such as age and prior experiences.
While education was not originally included in the modern conceptual uncertainty tolerance model, there is increasing evidence that education, including anatomy education, moderates uncertainty tolerance. Some studies suggest that learners' educational progression improves uncertainty tolerance (Han et al., ; Strout et al., ), while other research is beginning to elucidate how different types of educational styles impact uncertainty tolerance development (either fostering or hindering it) (Nevalainen et al., ; Gowda et al., ; Moffett et al., ; Stephens et al., ). Findings across healthcare education suggest that teaching practices such as team-focused learning activities (Stephens et al., ) and creating opportunities for reflective practice (Nevalainen et al., ; Gowda et al., ) foster learner uncertainty tolerance. In contrast, didactic stand-alone approaches appear to have the opposite effect, hindering learner uncertainty tolerance (Stephens et al., ).

There is evidence that HASS (humanities, arts and social sciences) disciplines and sub-disciplines foster uncertainty tolerance effectively through their teaching practices (García Ochoa et al., ; Haidet et al., ; Bentwich & Gilbey, ; Richardson, ; Felsman et al., ; García Ochoa & McDonald, ). The use of arts and humanities-based teaching methodologies for effectively fostering healthcare students' uncertainty tolerance is gaining momentum in medical education, with a systematic review of 49 separate articles finding that arts-based pedagogy challenges concrete thinking, fosters reflection and improves uncertainty tolerance (Haidet et al., ). Furthermore, a recent scoping review found that arts-based teaching was repeatedly linked to helping healthcare students engage with uncertainty (Moffett et al., ). This study also concluded that a large gap remains in the understanding of specific teaching practices impacting learner uncertainty tolerance, suggesting that the solution may be research focusing on "cross-cultural studies" (i.e., outside healthcare education) to help address this gap.

Therefore, the aim of this research was to explore, in greater detail, HASS teaching practices moderating learner uncertainty tolerance, in an effort to develop an uncertainty tolerance pedagogical framework for application to anatomy and healthcare education. In addition, this research served to build upon the previously identified natural uncertainties present in the anatomy learning environment (Stephens et al., ) and to learn from HASS academics' teaching practices that successfully develop learner uncertainty tolerance, particularly in relation to healthcare curricula (DeForge & Sobal, ; Haidet et al., ; Gowda et al., ; Moffett et al., ). A 2014 Australian university sector review of HASS disciplines found that these degrees make up the largest component of the university system (~65% of all undergraduate and postgraduate student enrollments), and that student satisfaction and job placements remain high (Turner & Brass, ). As healthcare education in some Australian universities remains undergraduate entry, these students often have little exposure to HASS education prior to their healthcare professional degree. Based on this collective evidence, this study sought to purposively explore Australian HASS educators' perspectives on uncertainty tolerance teaching practices.
Through semi-structured focus groups and interviews, and purposive sampling, this study explored the following research questions: (1) What teaching practices do HASS academics perceive as impacting learners' uncertainty tolerance, and (2) How do HASS academics execute these teaching practices? From this, recommendations are made for anatomy educators interested in exploring and fostering learners' uncertainty tolerance development in their own learning environments.

Site and participant selection

The Faculty of Medicine, Nursing and Health Sciences within Monash University teaches ~16,680 students per year as part of the health professions degrees, including biomedical degrees, undergraduate and graduate entry medicine, nursing, paramedicine, physiotherapy, psychology, and nutrition, with HASS academics contributing to the healthcare humanities components of these degrees (details below). To explore HASS academics' teaching practices that foster learner uncertainty tolerance, the research team purposefully sampled HASS educators who deliberately designed and delivered teaching to foster students' uncertainty tolerance at an Australian university. In addition to email invitations, snowball sampling facilitated the identification of appropriate educators. A total of 14 HASS educators across two campuses from five different faculties (ten teaching areas) agreed to participate in face-to-face focus groups or interviews over two months in 2019. Although participants were from HASS faculties, seven participants taught students across both HASS and STEMM (science, technology, engineering, mathematics and medicine) degrees (all participants in focus group one (FG-1), and one participant in focus groups two and three). Together, the academics' varied disciplines and faculties, along with purposeful sampling of HASS educators, helped achieve information power (Malterud et al., ). To be included in the study, participants' teaching area (not necessarily related to their faculty) needed to be related to the Australian definition of HASS fields of research and education. These fields include: Architecture and Building, Education, Management and Commerce, Society and Culture, and Creative Arts (Turner & Brass, ).

Data collection

All participants completed a demographic survey (see Table ), with 56% of participants self-reporting female gender, and no representation from gender-diverse participants. The survey was followed by audio-recorded, semi-structured interviews (four in total) or focus group discussions (three in total). Authors were facilitators (M.D.L., G.B., A.Z.). The semi-structured protocol for both the focus groups and interviews was the same. The difference between these two data collection strategies was related to participant characteristics. The larger focus group (FG-1) consisted of academics who taught into the same degree, and thus depth of discussion was based on a shared context fostering interactive discussion (Davidson, et al., ; Ng et al., ), whereas the smaller focus groups consisted of academics who taught into different degrees, and the smaller participant number enhanced the depth of data from these diverse experiences (Davidson, et al., ). Finally, those engaging in interviews provided further depth of discussion of the research topic (DiCicco-Bloom & Crabtree, ). Prior to commencement, participants were read the uncertainty tolerance definition (see introduction) and asked how they (within the classroom) prepare students for managing uncertainty.
Semi-structured questions were designed to elicit discussion related to the domains of the uncertainty tolerance model (Hillen et al., ). Herein, questions explored how educators introduce, teach, integrate and foster uncertainty tolerance across units, courses and/or curricula, and included questions focused on classroom stimuli, moderators and educators' perceptions of their students' classroom responses. Finally, participants were asked about what support academics need to consider when implementing uncertainty tolerance teaching practices.

Team reflexivity

Prior to data analysis, all authors participated in a team reflexive exercise to improve team communication, function and research rigor (Barry et al., ). The team shared experiences and interest in the uncertainty tolerance topic, and this collectively drove the team's research focus. The team were involved with teaching, though their learner populations were diverse (undergraduate, graduate and professional learner populations). The research team had a variety of methodological research experiences and worldviews, ranging from positivistic and quantitative to interpretivist with extensive qualitative experience. All team members, however, were positively oriented toward qualitative research for exploring the uncertainty tolerance construct in this HASS context.

Data analysis

Discussions were analyzed using an abductive approach (Lingard, ), with the uncertainty tolerance model as the theoretical lens (Hillen et al., ) and framework analysis as the methodology (Ritchie & Spencer, ). Framework analysis consists of five phases: (1) familiarization; (2) thematic framework identification; (3) indexing; (4) charting; and (5) mapping and interpretation. Audio files were uploaded to the Otter transcription tool (Otter.ai, Los Altos, CA), and facilitators (M.D.L., G.B., A.Z.) listened to and edited transcriptions in this platform (Phase 1, familiarization). Familiarization was further enhanced with team (M.D.L., G.B., A.Z.) discussions about broad areas of alignment with, or extension of, the uncertainty tolerance model (Hillen et al., ). Later, A.G.V. read the discussion files several times over to gain a broad understanding of themes. Phase 2 was led by M.D.L. and A.G.V., whereby the data were coded (led by A.G.V.) and regularly discussed between A.G.V., G.B., and M.D.L. until a final codebook, inclusive of definitions and quotes, was reached. Qualitative data analysis software, NVivo 12 (QSR International, Melbourne, Australia), was used for data management. Code associations (i.e., when one distinct code occurred in concurrence with another distinct code and the two were interrelated within the participant's narrative) were identified in Phases 3 and 4. Once all interviews were completely coded, matrix maps were constructed to evaluate two-way stimulus-response associations/pathways. To explore more complex pathways/associations of three or more codes (including positive/negative sentiment), project maps were later constructed (Phase 5).
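The matrix maps described above amount to counting how often pairs of codes co-occur within the same coded segment. The fragment below is purely illustrative of that idea (the study itself used NVivo 12 for this step); the segment numbers and code labels are hypothetical stand-ins for the study's codebook entries.

```python
import pandas as pd

# Hypothetical coded segments: each row records one code applied to one transcript segment.
coded = pd.DataFrame({
    "segment": [1, 1, 2, 2, 3, 3],
    "code": [
        "grey_case_stimulus", "entitled_information_seeking",
        "grey_case_stimulus", "accepting_uncertainty",
        "intellectual_candor", "confidence_managing_uncertainty",
    ],
})

# One-hot encode codes per segment, then count pairwise co-occurrences:
# off-diagonal cells give two-way stimulus-response style associations,
# the diagonal gives how often each code was applied overall.
onehot = pd.crosstab(coded["segment"], coded["code"])
co_occurrence = onehot.T.dot(onehot)
print(co_occurrence)
```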
Across the focus groups and interviews, 386 minutes of data were analyzed (Table ), resulting in a robust and in-depth coding hierarchy (Supplementary Material Appendices 1–3). From this, it was discovered that the participants conceptualized healthcare education-related uncertainty in different ways. Educators appeared to define uncertainty as either complexity or unknowns, but not synonymously:

I think, in the way I've been teaching it, ambiguity is not necessarily associated with complexity, because you can have a simple situation that is still ambiguous. For example, I may work very hard on my essay, but I still don't know what I'm going to get … the standard is different. So, when students submit their first assignment, their first essay, they're really anxious because they don't know how good their best is. So, there's no great complexity involved in that. There's just uncertainty. (FG-2, Global Studies)

Other participants described learner uncertainty as "blind-spots" and/or "bias", and thus appeared to conceptualize uncertainty both by what it is and by what it is not:

You know what you know, but then you don't know what you don't know. There's the blank spots in the blind spot; The blind spots that you know, we don't really want to pay much attention to.
We know that if we don't pay any attention [to] 'the blind spots', 'we' don't even know that we don't even know it. And that kind of adds to the complexity of complexity, because complexity says that I can describe a system that's complex, but I still know all the elements and their main interactions. But the blind spots are when we don't even know elements that are there. We don't even know what's there. So, there's kind of layers of uncertainty, there's a cascade of chasms between what we think we know and what we actually live in. (FG-1, Sustainability)

Extending these broad conceptualizations of uncertainty, classroom teaching themes relating to the uncertainty tolerance model components are described in more detail in the following sections.

Stimuli

Identified uncertainty stimuli spanned four broad teaching strategies (Supplementary Material Appendix ): (1) Questioning student pre-conceptions, (2) Learning transfer to different contexts, (3) Purposeful design and implementation of authentic "grey" case scenarios, and (4) Content presented from multi-disciplinary/faceted perspectives. Embedded across all stimulus-related participant discussion was the point that these teaching practices are both purposeful and integrated at a broad curricular level (across the entire semester and/or year and/or degree), as opposed to being one-off, ad hoc teaching practices.

Stimulus—Questioning student pre-conceptions

Educators described designing learning activities and/or assignments which purposefully challenged learner views, beliefs, and assumptions. Here, an educator describes challenging students to rethink what a chair represents:

… we just use the chair…, we say 'okay, this is a chair. So, we have socially determined this is an article for sitting on. However, someone else could, you know, come into this, and it could be a cupboard … and you can see their minds being blown. (FG-3, Sustainability)

Stimulus—Learning transfer to different contexts

For some educators, classroom practices encouraging "multiple tools for multiple contexts" (I-4, Primary and secondary health and physical education) stimulated uncertainty by challenging learners to transfer knowledge between contexts:

… students need to transpose the analytical skills that they develop when they read a text to real life. So, in the same way that they read a scene, they must learn how to read a situation. So, they start … with short stories, then they move on to film, and then they move on to a real-life scenario. (FG-2, Global Studies)

Stimulus—Purposeful design and implementation of authentic "grey" cases/scenarios

Educators also described deliberate presentations of ambiguous or complex scenarios (i.e., grey cases), wherein they challenge students to consider the "grey" areas of discipline content. Examples of these included future-focused, complex, and/or ambiguous workplace scenarios:

But then you might say, 'but what about if we move to this particular model of powering cities? What does that do in terms of the economy?' And students will say, 'but, you know, it's really good for the environment.' I said 'yeah, but what happens to those people who lose their jobs because the type of power in cities changes?' And so, they have to actually learn to live with a whole range of factors, and not just consider the right answer, because the right answer is we need to do something about climate change. But there's complexities within those right answers … that can be very challenging.
(FG-3, Business and Economics)

Stimulus—Content presented from multi-disciplinary/faceted perspectives

Educators described encouraging learners to expand their worldviews by presenting a variety of viewpoints about a given topic.

They look at Indigeneity, radicalization and genocide … So, you'll have an economist talking about genocide, a social scientist talking about genocide, a medical doctor talking about radicalization … because when we're talking about global studies and addressing global issues, it can't be done from a single discipline. So, this allows them to see that these problems can be approached from, you know, a myriad of perspectives. (FG-2, Global Studies)

Moderators

Moderators, within the higher education context, refer to factors impacting learner responses to educational uncertainty stimuli, including "situational characteristics as well as cultural and social factors" (Hillen et al., ). Within this current study, three moderator themes were identified (Supplementary Material Appendix ; Table ): (1) Knowledge and experience relative to uncertainty; (2) Educator approach; and (3) Learners' personal attributes. Each moderator theme had multiple subthemes and codes, described below.

Moderator—Knowledge and experience relative to uncertainty

The interrelated nature of learners' prior knowledge and prior exposure (or lack thereof) to uncertainty stimuli was described as: (1) Subject mastery and/or experiences (high and low) and (2) Discipline background. High mastery included learners with prior uncertainty experiences, often through experiential learning opportunities, and/or previously acquired discipline knowledge. Both types of prior uncertainty experience were perceived as fostering learner uncertainty tolerance. Low mastery predominantly related to educators describing learners new to university or new to the discipline content. Participants discussed investing effort in scaffolding uncertainty stimulus exposure by developing ways to support "low mastery" learners, as these students were perceived as struggling with uncertainty (see related quotes in Table ).

The moderator of discipline background (Table ) refers to educators' perceptions of learners' worldviews as they relate to their knowledge (not as an individual characteristic). A subjective worldview was predominantly linked to HASS students, while an objective worldview related to STEMM students. There were also some educators who acknowledged a spectrum of learner worldviews, but noted that the location of learners on this spectrum appeared dependent on their study field, which is described below as discipline tension:

There's a big range in terms of how students respond, because I don't think there's necessarily as clear a tradition of how you would approach a problem solving or a research question [in sustainability] as there would be in the physical sciences, or what a lawyer would do. You know, with lawyers, they've been trained in a very specific way of attacking of a problem, and physical scientists another way, whereas there's a few degrees that kind of sit in a space that's a bit more flexible. And then there's a few that are sitting in the space of just like, 'everything is contested', and let's just have lots of discourse and arguments, and they're the ones who can often thrive in the context of discussing worldviews and uncertainty, but maybe be less useful on the sharp end of sustainability in terms of what do you do.
(FG-2, Sustainability)

Moderator—Educator approach

The second identified moderator encompassed the educators' teaching approaches and practices, ranging from practical classroom methods to purposeful pedagogical design. Subthemes included: (1) Challenging student assumptions and worldviews; (2) Uncertainty management tools; (3) Pedagogical instruction (open or closed) through scaffolding uncertainty; (4) Exposure to diversity through teamwork and collaboration; (5) Exposure to unease and/or discomfort; and (6) Intellectual streaking or candor; these are defined further below.

Learners' assumptions and worldviews were challenged by embedding uncertainty within the curriculum, including cultural immersion in global overseas programs or educators intentionally designing experiential learning exposures to "help them experience it and learn to live with it" (FG-3, Sociology). Many participants discussed weaving a variety of uncertainty management tools into the curriculum to foster learners' uncertainty tolerance (Table ). While some participants did not provide explicit details about these tools (categorized as general tools), others discussed specific approaches including: (1) Self-reflection; (2) Strategies for managing risk and accepting error; and (3) Providing uncertainty dress-rehearsals. Self-reflection, in particular, was a dominant tool described in the data (Table ).

Educators also described moderating uncertainty tolerance through open and closed pedagogy. Open pedagogical instruction referred to less prescriptive guidelines, which were often not attached to formal assessment, or to providing choice for assessed components. Closed pedagogy included educators' descriptions of 'bounding' the classroom uncertainty through calculated steps, especially for students with no or limited uncertainty experiences (e.g., low subject mastery). Exposure to diversity through teamwork and collaboration was another educational moderator used to expose learners to alternate ways of thinking and doing, by taking deliberate steps to assemble teams from diverse cultural, socioeconomic, gender (etc.) backgrounds.

Moderating learner uncertainty tolerance development was also seen in educators' setting clear expectations that discomfort and/or unease is implicit to deep learning, helping learners become aware of, and explore 'sitting with', this uneasiness. In this way, the theme of exposure to unease and discomfort differed from the theme of uncertainty management tools because the former was not a tool, but a learned practice:

You have to set the expectations that these WILL be uncomfortable, you WILL feel very … [pause] … it could feel painful. But that [is the] very point of learning for you. (FG-2, Sustainability)

Intellectual streaking and/or candor focuses on the educator embracing their own vulnerabilities around uncertainty, and being transparent in order to help normalize the learner's uncertainty experiences (Bearman & Molloy, ; Molloy & Bearman, ). Intellectual streaking included examples where the educator was "fully exposed" in these vulnerabilities, whereas intellectual candor is relevant to the learners' assigned tasks, and thus becomes a bounded exposure of educator vulnerability.

Moderator—Learners' personal attributes

Educators perceived that learners' personal attributes influenced learner uncertainty tolerance, including the subthemes: (1) Extrinsically merit-minded; (2) Humility; and (3) Cognitive flexibility.
The moderator of extrinsically merit‐minded described learners who were hyper‐focused on assessment and/or class performance and, as a consequence, appeared to struggle to develop uncertainty tolerance. This contrasted with learner humility (e.g., permitting space for “others to be right”), which appeared to positively moderate learners' responses to educational uncertainty. This was mirrored in learners described as cognitively flexible, as they were able to “focus on the right things at the right time” (I‐4, Primary and secondary health and physical education). Those described as cognitively inflexible were perceived as less tolerant of educational uncertainty. In this study, HASS educators typically linked this subtheme with students in STEMM disciplines:
it's amazing how many science students I've worked with think, if you do a statistical test and its significant, then that's the truth. They might have asked the silliest question that doesn't make biological sense in ANY way. But if they get a positive stat … (FG‐2, Sustainability)

Participants described their perceptions of learners' responses to educational uncertainty in the context of the described moderators. Perceived learner responses, and the links between moderators and responses, are described in more detail below.

Responses
Across the data, educators' perceptions of their students' cognitive, emotional, and behavioral responses were discussed (Table , Supplementary Material Appendix ). Within each domain, participants' perceptions of learner responses represented a spectrum from positive (+) to negative (‐). Described positive cognitive responses included: understanding/accepting uncertainty; receptiveness; and confidence in managing uncertainty. These described learner responses appeared to result from longitudinal and developmental educational processes wherein learners were continually exposed to uncertainty, either through real‐world experiences or classroom teaching practices. This progressive approach to developing learner uncertainty tolerance was often described as transformational, with responses indicating permanent changes to learners' mindsets. In contrast, the negative cognitive response included being resistant to and avoiding uncertainty and was predominantly linked to novice students starting university. Behaviorally, participants described predominantly negative perceptions of learner responses, including themes such as non‐ or avoidant participation and entitled information seeking, which were often associated with perceptions of negative emotional responses (e.g., feelings of stress, anxiety, or being overwhelmed). While academics' perceptions of learner responses to uncertain stimuli were usually situated at one end of the spectrum (negative or positive), vulnerability (an emotional response) had an indeterminate valency.

Pastoral care
A theme identified across the dataset was the perceived importance of pastoral care when executing uncertainty tolerance teaching practices. This theme referred to the emotional support, leadership, and mentorship required when engaging uncertainty tolerance teaching practices. Participants expressed the need to support students with low subject mastery and/or students from disciplines typically linked to objective worldviews (e.g., STEMM), illustrated by the quote below drawing upon a boat metaphor:
But you approach a kid who's doing science and has no notion of this with that. And it's just too unmooring.
And the point is not to unmoor them, but to give them a sense that from this unmooring, they can find, ah, they can find direction and that to empower them, to understand that there is a process of unmooring, of course, but from that comes direction. And from self‐reflection and cogitation, comes a new understanding. And I didn't understand that at the beginning when I started teaching. I think I just threw them into the deep end of the pool, and many of them tanked. … So that's been a learning curve for me. Understanding that not everyone approaches ambiguity with the ease that certain disciplines do. (FG‐2, Global Studies)

Stimuli, moderator and response interactions
The depth and richness of the data allowed exploration and analysis of linkages between, and across, different parts of the uncertainty tolerance conceptual model (Hillen et al., ). Here, the study identified educators' perceptions of how certain educational uncertainty stimuli were perceived by learners, and how different moderators were perceived as impacting learner responses. Educator‐sourced moderators are those described as originating from the teacher (e.g., pedagogy, teaching practices), while learner‐sourced moderators are student‐derived (e.g., traits or worldviews). Figure illustrates “grey cases” as an exemplar of uncertainty tolerance model interaction, as the moderator interactions herein were complex and nuanced.

Interactions: Grey case stimulus
The educational uncertainty stimulus of “grey cases” was perceived as eliciting a variety of learner responses, depending on the classroom moderators at play. If students had low subject knowledge mastery and a subjective worldview (moderators), learners were perceived as having resistance (negative cognitive response) and being disengaged (negative behavioral response). However, if learners were perceived as having a subjective worldview (moderator), regardless of discipline knowledge level, educators linked this to entitled information seeking (negative behavioral response). Similarly, if grey cases were introduced and students were reported as cognitively inflexible, then learners appeared to respond with negative emotional appraisals (stress, anxiety and feeling overwhelmed). On the positive end, when educators engaged in intellectual candor (moderator) or designed their teaching approach to allow for purposeful learner exposure to discomfort (moderator), students appeared to respond with confidence to manage this uncertainty (positive cognitive response). If learners had an objective worldview, but educators challenged student assumptions through multi‐disciplinary educational environments (both moderators), students appeared to be accepting of this uncertainty (positive cognitive response). If, on the other hand, students had a subjective worldview (moderator) and educators designed learning activities to include purposeful exposure to uncertainty, multi‐disciplinary approaches, and helped students manage risk and accept error (moderators), then students accepted uncertainty (positive cognitive response) arising from grey cases.

Interactions: Questioning student pre‐conceptions stimulus
When educators questioned student pre‐conceptions (stimulus), learners' responses were perceived as mostly positive. Moderators that appeared to temper this uncertainty stimulus positively included the student attribute of humility and the educator approaches of exposure to discomfort and exposing students to strategies for managing risk and/or accepting error.
Conversely, when this same stimulus was moderated by the educator engaging intellectual candor (moderator) with students who were perceived as cognitively inflexible (moderator), it appeared to result in negative emotional responses of anxiety.

Interactions: Transferring learning to new contexts stimulus
Transferring learning to new contexts (uncertainty stimulus) included a wide variety of moderator interactions. If educators used open pedagogical approaches (moderator) with students who held objective worldviews (moderator) and were relatively cognitively inflexible (moderator), learners were described as responding with resistance to uncertainty (negative cognitive response). However, if educators moderated the classroom by teaching students general tools to manage uncertainty (moderator), using this same stimulus, learners responded with a positive cognitive response of receptiveness.

Interactions: Multidisciplinary, faceted perspectives stimulus
All moderators associated with the pedagogical uncertainty stimulus of multidisciplinary, faceted perspectives appeared to modulate learner responses toward the positive end of the appraisal and response spectrum (i.e., more tolerant of uncertainty). If educators described providing uncertainty dress rehearsals alongside intellectual candor, or challenged student assumptions in a multidisciplinary environment while scaffolding uncertainty (all moderators) with this uncertainty stimulus, learners appeared to respond positively with receptiveness (cognitive response). If educators introduced general tools for managing uncertainty and students were cognitively flexible (both moderators), students appeared to accept uncertainty (positive cognitive response).

Interactions: Moderators and learner responses
Participant discourse did not always include an uncertainty stimulus. However, participants often described moderators relating to perceived learner responses, allowing for exploration of linkages between moderators and responses (Table ). Some moderators appeared to work in concert, influencing students' responses to uncertainty stimuli.
This research serves to advance understandings of how teaching practices can purposefully foster learner uncertainty tolerance, particularly in foundational anatomy education where there is an already identified implicit link with uncertainty and uncertainty tolerance (Willan & Humpherson, ; Stephens et al., ; Wheble & Channon, ; Cullinane & Barry, ).
The results of this study suggest that HASS teaching practices designed to foster learner uncertainty tolerance broadly align with the prevailing uncertainty tolerance model (Hillen et al., ), with identified themes mapping to each conceptual model domain (stimuli, moderators, and responses), suggesting transferability of HASS uncertainty tolerance teaching practices (Firestone, ) to other educational contexts. Given that uncertainty is intrinsic to healthcare, evidenced in part by the Covid‐19 pandemic, where uncertainties stemmed from the biomedical nature of the virus and from the psychosocial aspects of healthcare, including public health communication (Finset et al., ), care of patients (Young et al., ; Lin et al., ) and healthcare provider well‐being (Rolland, ; Valeras, ; Zerbini et al., ; Di Trani et al., ), helping students develop uncertainty tolerance becomes increasingly important. This study suggests that anatomy educators can potentially foster uncertainty tolerance early in the healthcare education pathway by purposefully designing curricula that stimulate uncertainty (e.g., grey cases), and can then help students learn to manage this classroom uncertainty by selectively timing identified moderators (e.g., reflective practice) to support learners in their unique contexts (e.g., novice vs. experienced learners). Uncertainty tolerance remains a valuable attribute in everyday healthcare practice. There is growing evidence that doctors with lower uncertainty tolerance are more likely to over‐order tests (Rao & Levin, ), increase healthcare costs (Bhise et al., ), have dogmatic tendencies (Iannello et al., ; Geller et al., ), are more likely to suffer from psychological distress (Hancock & Mattick, ), and contribute to healthcare disparities (Balsa et al., ). Those with higher uncertainty tolerance appear to be more open to diversity, have improved attitudes toward the underserved (Kvale et al., ; Wayne et al., ), and engage in patient‐centered care (Portnoy et al., ; Berger et al., ). In this way, designing healthcare education which serves to longitudinally develop learner uncertainty tolerance is both timely and relevant, and this research provides practical recommendations which serve to accomplish this (described in more detail below).

Stimuli
Data analysis identified key pedagogical stimuli which can be applied across healthcare classrooms, as many of these stimuli were generic (not explicitly tied to HASS content) in nature and have been implemented in healthcare education previously. For example, integrating grey cases (e.g., complex discipline‐focused case problem solving) has been shown to provide uncertainty tolerance practice opportunities in multiple disciplines including clinical anatomy (Stephens et al., ), healthcare education (Khatri et al., ), business (Rippin et al., ), and mathematics (Voskoglou, ), suggesting transferability of this theme (grey cases) from HASS education to education more broadly. Efforts toward inclusion of identified uncertainty stimuli in disciplines such as anatomy may be particularly relevant to address the predominance of reported lower uncertainty tolerance of students starting medical school (Strout et al., ; Geller et al., ), and given that anatomy is often a cornerstone of healthcare curricula (Sugand et al., ). Anatomy educators may work toward stimulating uncertainty by engaging multifaceted points of view.
When teaching shoulder anatomy, for instance, anatomists could present the anatomical structures associated with shoulder anatomy and movement and invite a multi‐discipline panel to discuss their diverse perspectives of shoulder anatomy. This panel could include: a general practitioner to discuss shoulder examination considerations, a surgeon who focuses on shoulder repair surgical approaches (including relevant anatomical variations), a physical therapist outlining shoulder rehabilitation strategies, a radiologist debating evaluation approaches when viewing medical imaging of the shoulder, and a patient who has lived experience of shoulder pain. This approach has already been shown to be of value in improving anatomical learning (Lazarus et al., ; Stott et al., ), and this study suggests that this same approach could prove useful in fostering learner uncertainty tolerance when purposefully designed to do so. Educators can harness existing anatomy education teaching practices to foster uncertainty tolerance development by expanding the focus of such panels from exclusively emphasizing relevant knowns (i.e., shoulder anatomy) toward an approach which includes discussing “unknowns” or points of contention between these diverse panel members. Results herein suggest that engaging this multi‐disciplinary panel to not only cover the anatomy, but also review the points of ambiguity, will help students understand that while shoulder anatomy knowledge is relatively stable, the relevancy, focus and application of shoulder anatomy is highly variable (i.e., uncertain and complex) in clinical practice.

Moderators
This research broadens the field's understanding of the complexity of an individual learner's uncertainty tolerance, particularly around the concept of moderators. Prior to this study, educational moderators were predominantly listed and described as independent, singular factors modulating uncertainty tolerance (Hillen et al., ; Strout et al., ). However, in this study, moderators originated from two sources, both the educator and the learner. Each moderator, and its source, interacted across the learning environment in numerous circumstances, suggesting a complex interplay of educator‐ and learner‐sourced moderators which may, in turn, be impacting students' responses to educational uncertainty stimuli. This interaction of moderators with each other could also explain why a systematic review found that moderators such as age and learning stage had diverse impacts on uncertainty tolerance, with these moderators reported as having negative, positive, or neutral impacts on learner uncertainty tolerance (Strout et al., ). If moderators do, indeed, interact and work together to modulate learner uncertainty tolerance, then isolating the impacts of a single moderator (as many of the included studies attempted) may be challenging and lead to the observed inconsistent results. Indeed, this current study's analysis revealed a pattern of moderator interdependency (Figure ). This pattern suggests that educators have agency and opportunities to manage learner uncertainty tolerance. The choices educators make will depend on the educational stimulus chosen, the learning outcomes planned, and the educators' desired learner response, as well as the consideration of learner‐sourced moderators.
Therefore, this study found educators are able to purposefully select educator‐sourced, modifiable moderators (e.g., diverse teamwork, intellectual candor) to counteract more static learner‐sourced ones (e.g., year level, discipline background) to develop the most effective curriculum to foster uncertainty tolerance in a given educational context. Educator awareness of the moderating factors ‘at play’ in their unique learning context allows for opportunities to be more responsive to learners' needs at a particular time in their learning journey (Figure ). In considering the anatomy learning environment, anatomists often teach first‐year medical students (Drake et al., ; Sugand et al., ), wherein students typically have a ‘low subject mastery’ of anatomy knowledge. This student‐sourced moderator, based on study results, appears to shift learners' uncertainty tolerance towards being less tolerant. Armed with this knowledge, anatomists can build in educator‐sourced moderators which counteract this negative moderator, by engaging uncertainty management tools such as formative self‐reflection activities (an educator‐driven moderator). Reflective practice, in particular, is a moderator that appears to improve both medical students' (Nevalainen et al., ) and educators' (Attard, ) uncertainty tolerance, further underscoring that findings in this study likely have broad application to multiple learner contexts. Key to an educator's capacity to moderate learners' uncertainty tolerance is an awareness of which uncertainty tolerance moderators are present in their classrooms. Anatomists, given the time they spend with students in the anatomy laboratories (Drake et al., ), are well placed to have a holistic knowledge of their learner population and classroom dynamics, reinforcing the anatomy learning environment as an ideal context for fostering learners' uncertainty tolerance early in the students' learning journey. As described in this study, anatomy educators may consider stimulating learner uncertainty via “grey cases” or case‐based studies, which are a frequently used tool in the anatomy classroom as well as other disciplines (Hutchings, ; Kim et al., ). If an educator chooses to use this approach to stimulate uncertainty in learners who have more objective worldviews (i.e., clinical anatomy students, based on study results), this study suggests that inclusion of an educator‐sourced moderator (e.g., the uncertainty management tool of managing risk and accepting error) may counteract the learners' negative response(s) perceived when this moderator is left in isolation. While many of these educator‐driven moderators have already been shown to be effective teaching strategies, either for managing uncertainty in the anatomy learning environment (Rippin et al., ; Stephens et al., ) or for learning more generally (Gijbels et al., ; Baeten et al., ; McLean, ), this study illustrates that anatomy educators may have the power and agency to purposefully select a number of moderators at different points in the learning journey in order to direct learner responses to uncertainty stimuli positively, potentially preparing them for implicit uncertainties in their future healthcare practice.
Responses
Congruent with findings from previous studies exploring medical anatomy students' reports of their own experiences of uncertainty tolerance (Stephens et al., ), with learners' responses to uncertainty in the broader context of education (Weurlander et al., ; Grace et al., ), and with the healthcare literature (Lee et al., ; Moffett et al., ), this study found academics' perceptions of their learners' emotional responses to uncertain stimuli were predominantly negative. Despite the uncertainty tolerance conceptual model depicting positive emotional responses to uncertainty for healthcare providers (e.g., courage, curiosity, hope), within the learner context there appears to be exclusive reporting of negative responses (Weurlander et al., ; Grace et al., ; Lee et al., ; Moffett et al., ). Thus, HASS academics' perceptions of their non‐healthcare students' responses to uncertainty overlap with those identified in the healthcare student population, further substantiating that the identified HASS educators' perceptions of uncertainty tolerance teaching practices may align with the healthcare learning context. In this study, thematic analysis identified educator perceptions of some positive cognitive and behavioral learner responses, despite educator reports of learners' negative emotional responses. This is consistent with recent data elicited from medical students, wherein students self‐reported negative emotional responses to uncertainty alongside reports of positive behavioral and cognitive responses, in the anatomy laboratory (Stephens et al., ) as well as medical school more broadly (Nevalainen et al., ). This discrepancy between emotional responses and cognitive and behavioral responses may be valuable for anatomy educators to consider when designing uncertainty tolerance‐inculcated curricula (e.g., students may not “like” it, but their cognition and behavior suggest improved uncertainty tolerance). This is also an important consideration for universities when they contemplate the timing and approach of student evaluations of teaching, as the strong emotional responses students have to uncertainty tolerance curricula could affect their rating of teachers and teaching.

State versus trait
This work adds to the mounting evidence that uncertainty tolerance is complex and nuanced, including ongoing discourse about whether uncertainty tolerance is a static personality trait or a contextually dependent state. Hillen ( ) suggests that the uncertainty tolerance model can be used to guide research from both (or either) the state or the trait perspective. While at first glance this seems oxymoronic, this study may help elucidate this apparent anomaly by suggesting that the construct, as a whole, is state‐based, and thus contextually dependent, but that components may also be trait‐based (e.g., objective worldview or humility). The presence, and apparent impact, of moderators on educators' perceptions of learners' uncertainty tolerance does suggest a largely state‐based construct, similar to the limited (but growing) studies within other educational contexts (Han et al., ; Strout et al., ; Geller et al., ; Stephens et al., ). However, some of these moderators may, in fact, be traits (or trait‐like), such as whether the learner is “humble” or “extrinsically merit‐minded” or whether learners have “subject mastery”.
Results herein suggest that these “trait”‐like moderators likely do not singularly determine the learners' uncertainty tolerance, but are rather one ingredient in the moderator “soup” impacting learners' uncertainty tolerance. In this way, understanding how, and which, modifiable and unmodifiable moderators impact learner uncertainty tolerance would be an exciting and appropriate next step for anatomical education, and healthcare education more broadly.

Broad practice recommendations
These results contribute new knowledge and suggest practical applications for effectively fostering uncertainty tolerance within healthcare education broadly, and the anatomy education context specifically. Interestingly, many of the teaching practices described by HASS educators provide students the opportunity to safely practice and develop uncertainty tolerance in the classroom through experiential learning prior to entering the healthcare workforce. Experiential learning's central dogma (Kolb, ) relies on transformational learning through varied learner experiences and, when viewed through an ‘uncertainty tolerance’ lens, is considered along a tri‐partite spectrum that includes: (1) “critical incidents”, whereby students reflect and link classroom content to real‐life experiences; (2) “destabilization”, which encourages students to act out similar scenarios; and (3) “iso‐immersion”, whereby students are embedded in workplaces (placements) (García Ochoa & McDonald, ). All study participants described uncertainty tolerance curriculum aligning with one or more of these experiential learning phases. An example is the uncertainty tolerance stimulus of “grey cases”, which could represent either critical incidents (if students are reading about the case) or destabilization (if students are role‐playing the case). Moderators could be titrated during each phase (e.g., challenging student assumptions, intellectual candor (Molloy & Bearman, )). Thus, healthcare educators could begin integrating uncertainty pedagogy across a curriculum through purposefully planned experiential learning approaches (e.g., case questions and simulations). This suggestion is further supported by recent findings exploring which types of pre‐clinical learning were perceived as enhancing uncertainty tolerance (Papanagnou et al., ). Herein, small group learning and simulations were identified as teaching practices fostering uncertainty tolerance, with simulations reported as helping students to “realize real life is much more fluid and less concrete.” Indeed, anatomy educators often engage forms of simulation and cases (experiential learning approaches) through dissection, case studies, and problem‐based learning (Torres et al., ), both to enhance anatomy learning and to illustrate uncertainty intrinsic in the human body. As anatomy education curricular time continues to decrease, with a concomitant increase of curricular time devoted to healthcare competency education (Craig et al., ; Prober & Khan, ; Trautman et al., ), engaging in teaching practices which foster both anatomy discipline content and the uncertainty tolerance healthcare competency becomes both timely and imperative.

Limitations of the study and future work
This qualitative study achieved depth and rigor through purposeful sampling, team‐based reflexive coding, and theme development with reference to an existing construct model (Varpio et al., ; McGrath et al., ; Kiger & Varpio, ). This study, however, is not without limitations.
Importantly, while educators commented on learners' experiences, learners themselves were not directly studied, and conclusions drawn by educators may not accurately reflect learners' actual experiences. Despite this limitation, study results align with prior work exploring uncertainty tolerance in learners directly (Moffett et al., ; Stephens et al., ). In addition, this study did not undertake direct classroom observation, and instead relied on educators' subjective reflections. This may lead to biased recall of experiences and may (in part) be the cause of the high reporting of negative learner responses. Future work could explore corroboration of educators' stated experiences through direct observational classroom studies and concomitant collection of students' perspectives. This study was completed at a single university in Australia, and thus findings from this study may not be broadly applicable. This limitation is mitigated, in part, by the engagement of an existing model and abductive approach, which enhances the applicability outside the study context (Firestone, ). Finally, this study was conducted at a single timepoint, not longitudinally; thus, extrapolation regarding the impact of education on learner uncertainty tolerance development over time is not possible. Future research should focus on more deeply exploring moderator interactions and interdependency in the anatomy learning context, as well as investigating the disparate response valences seen across the emotional versus cognitive and behavioral domains across education more broadly. Data analysis identified key pedagogical stimuli which can be applied across healthcare classrooms, as many of these stimuli were generic (not explicitly tied to HASS content) in nature and have been implemented in healthcare education previously. For example, integrating grey cases (e.g., complex discipline‐focused case problem solving) has been shown to provide uncertainty tolerance practice opportunities in multiple disciplines, including clinical anatomy (Stephens et al., ), healthcare education (Khatri et al., ), business (Rippin et al., ), and mathematics (Voskoglou, ), suggesting transferability of this theme (grey cases) from HASS education to education more broadly. Efforts toward inclusion of identified uncertainty stimuli in disciplines such as anatomy may be particularly relevant to address the predominance of reported lower uncertainty tolerance among students starting medical school (Strout et al., ; Geller et al., ), and given that anatomy is often a cornerstone of healthcare curricula (Sugand et al., ). Anatomy educators may work toward stimulating uncertainty by engaging multifaceted points of view. When teaching shoulder anatomy, for instance, anatomists could present the anatomical structures associated with shoulder anatomy and movement and invite a multi‐discipline panel to discuss their diverse perspectives of shoulder anatomy. This panel could include a general practitioner to discuss shoulder examination considerations, a surgeon who focuses on shoulder repair surgical approaches (including relevant anatomical variations), a physical therapist outlining shoulder rehabilitation strategies, a radiologist discussing evaluation approaches when viewing medical imaging of the shoulder, and a patient who has lived experience of shoulder pain.
This approach has already been shown to be of value in improving anatomical learning (Lazarus et al., ; Stott et al., ), and this study suggests that this same approach could prove useful in fostering learner uncertainty tolerance when purposefully designed to do so. Educators can harness existing anatomy education teaching practices to foster uncertainty tolerance development by expanding the focus of such panels from exclusively emphasizing relevant knowns (i.e., shoulder anatomy) toward an approach which includes discussing “unknowns” or points of contention between these diverse panel members. Results herein suggest that engaging this multi‐disciplinary panel to not only cover the anatomy, but also review the points of ambiguity, will help students understand that while shoulder anatomy knowledge is relatively stable, the relevancy, focus and application of shoulder anatomy are highly variable (i.e., uncertain and complex) in clinical practice. This research broadens the field's understanding of the complexity of an individual learner's uncertainty tolerance, particularly around the concept of moderators. Prior to this study, educational moderators were predominantly listed and described as independent, singular factors modulating uncertainty tolerance (Hillen et al., ; Strout et al., ). However, in this study, moderators originated from two sources: the educator and the learner. Each moderator, and its source, interacted across the learning environment in numerous circumstances, suggesting a complex interplay of educator‐ and learner‐sourced moderators which may, in turn, impact students' responses to educational uncertainty stimuli. This interaction of moderators with each other could also explain why a systematic review found that moderators such as age and learning stage had diverse impacts on uncertainty tolerance, with these moderators reported as having negative, positive, or neutral impacts on learner uncertainty tolerance (Strout et al., ). If moderators do, indeed, interact and work together to modulate learner uncertainty tolerance, then isolating the impacts of a single moderator (as many of the included studies attempted) may be challenging and lead to the observed inconsistent results. Indeed, the current study's analysis revealed a pattern of moderator interdependency (Figure ). This pattern suggests that educators have agency and opportunities to manage learner uncertainty tolerance. The choices educators make will depend on the educational stimulus chosen, the learning outcomes planned, and the educators' desired learner response, as well as the consideration of learner‐sourced moderators. Therefore, this study found educators are able to purposefully select educator‐sourced modifiable moderators (e.g., diverse teamwork, intellectual candor) to counteract more static learner‐sourced ones (e.g., year level, discipline background) to develop the most effective curriculum to foster uncertainty tolerance in a given educational context. Educator awareness of the moderating factors ‘at play’ in their unique learning context allows for opportunities to be more responsive to learners' needs at a particular time in their learning journey (Figure ). In considering the anatomy learning environment, anatomists often teach first year medical students (Drake et al., ; Sugand et al., ), wherein students typically have a ‘low subject mastery’ of anatomy knowledge.
This student‐sourced moderator, based on study results, appears to influence learners' uncertainty tolerance towards 'less tolerant'. Armed with this knowledge, anatomists can build in educator‐sourced moderators which counteract this negative moderator by engaging uncertainty management tools such as formative self‐reflection activities (an educator‐driven moderator). Reflective practice, in particular, is a moderator that appears to improve both medical students' (Nevalainen et al., ) and educators' (Attard, ) uncertainty tolerance, further underscoring that findings in this study likely have broad application to multiple learner contexts. Key to an educator's capacity to moderate learners' uncertainty tolerance is an awareness of which uncertainty tolerance moderators are present in their classrooms. Anatomists, given the time they spend with students in the anatomy laboratories (Drake et al., ), are well placed to have a holistic knowledge of their learner population and classroom dynamics, reinforcing the anatomy learning environment as an ideal context for fostering learners' uncertainty tolerance early in the students' learning journey. As described in this study, anatomy educators may consider stimulating learner uncertainty via “grey cases” or case‐based studies, which are a frequently used tool in the anatomy classroom as well as in other disciplines (Hutchings, ; Kim et al., ). If an educator chooses to use this approach to stimulate uncertainty in learners who have more objective worldviews (i.e., clinical anatomy students, based on study results), this study suggests that inclusion of an educator‐sourced moderator (e.g., the uncertainty moderating tool of managing risk and accepting error) may counteract learners' negative response(s) perceived when this moderator is left in isolation. While many of these educator‐driven moderators have already been shown to be effective teaching strategies, either for managing uncertainty in the anatomy learning environment (Rippin et al., ; Stephens et al., ) or for learning more generally (Gijbels et al., ; Baeten et al., ; McLean, ), this study illustrates that anatomy educators may have the power and agency to purposefully select a number of moderators at different points in the learning journey in order to direct learner responses to uncertainty stimuli positively, potentially preparing them for implicit uncertainties in their future healthcare practice.
The strength of this research lies in the identification of the pivotal, and nuanced, role education can play in fostering learner uncertainty tolerance. Drawing from HASS teaching practices, this exploratory study sheds light on practical and broadly applicable teaching practices for implementation of 'uncertainty pedagogy'. This study also substantiates prior findings in the anatomy learning environment, further underscoring that anatomy education, in particular, may be a valuable context for supporting learners' uncertainty tolerance. Importantly, this study suggests that educators' knowledge of the context within which they teach can be harnessed to purposefully foster, rather than hinder, learners' uncertainty tolerance development. This study also illustrates that educators perceive learners' uncertainty tolerance not as pre‐determined, but rather as a malleable construct impacted through pedagogical approaches. To quote a participant, uncertainty tolerance should be “… explicitly taught, explicitly modelled, explicitly practiced [sic] …”. This transdisciplinary research represents the beginning of a paradigm shift in considering uncertainty tolerance within the higher educational context. The themes identified, including stimulus, moderator and response interactions, are an incremental step forward to inform a larger program of research relating to the impact of education (and educators) on learner uncertainty tolerance development. Results herein suggest that educators have the power and agency to purposefully integrate uncertainty tolerance teaching practices into their curriculum to better prepare healthcare students for the uncertainty inherent in their future healthcare careers.
Identification of histological threshold concepts in health sciences curricula: Students' perception
2168dc0f-3260-4df0-b6c2-91018c5f77e9
10078720
Anatomy[mh]
The identification of students' perceptions constitutes a crucial element for the appropriate design and implementation of pedagogical strategies. Given that the learning process depends on triggers such as motivation (Wouters et al., ), self‐recognition of progress (Berkovich‐Ohana et al., ), and awareness of professionalism (Neve et al., ), the assessment of students' perceptions is an essential aspect of this process. The role of the student in the learning process has been examined from many different points of view in recent years (Campos et al., ; Agarwal & Kaushik, ; Rafati et al., ; Sun et al., ). Metacognition, defined as self‐knowledge of how the mind works and the intentional control of cognitive processes, has been identified as a relevant component for students' learning (Bryce et al., ; Vettori et al., ). Motivation and conceptions of learning are considered self‐regulatory constructs that arise from metacognition and that may closely influence students' learning, and consequently their academic outcomes (Martin & Ramsden, ; Vettori et al., ). Students whose strategies are aimed at seeking meaning and relating ideas (reconstructive conception) tend to achieve higher‐quality learning outcomes than students who practice unrelated memorizing (reproductive conception) (Entwistle et al., ; Zeegers, ). Accordingly, students' metacognitive skills and perceptions are considered important variables for high‐quality learning (Efklides, ; Campos‐Sanchez et al., , , ; Al Khader et al., ). One of the learning theories that situates the student as the central element in learning is Threshold Concepts (TC) theory (Meyer & Land, ), which has been a topic of increasing interest for the scientific community in the last decade (Santisteban‐Espejo et al., ). Threshold Concept theory considers education as a space of uncertainty, and proposes the existence of certain concepts or learning experiences that resemble conceptual gateways or portals that lead to a previously inaccessible way of thinking about something (Meyer & Land, , ). A specific notion can be considered a TC if it is transformative: once understood, these concepts trigger a significant shift in the student's perception of a subject, as well as in emotional and performative elements of the learning process (Mezirow, ). In addition, TC are irreversible: once learned, they are unlikely to be forgotten by students. They are integrative, which means that their acquisition usually discloses previously hidden interrelations between apparently distant subjects. Lastly, TC are troublesome for learners because their learning is perceived as difficult or dissonant (Meyer & Land, ). The identification of specific TC within a discipline may thus be viewed as a relevant tool for focused curricular redesign, given that the teaching of these concepts can significantly improve students' learning (Entwistle, ). However, the identification of TC in health sciences currently constitutes a major challenge for higher education, because of the lack of a standardized, validated method for this purpose (Santisteban‐Espejo et al., ). Although TC have initially been identified by professors and staff scholars (Davies & Mangan, ; McKillop et al., ), students' perceptions may also play a key role in their identification, and are now increasingly used in the scientific community to identify TC (Clouder, ; Loertscher et al., ; Park, ).
In this connection, several authors have attempted to identify TC by investigating the perceptions of medical students after conducting clinical practice in palliative care (O’Callaghan et al., ) and pediatrics (Randall et al., ). Personal reflections using audio diaries and subsequent group discussion have also been used to identify TC in bioethics (Collett et al., ), and to understand students' experiences in medical professionalism (Neve et al., ). Nevertheless, the question of how to appropriately identify TC still remains unresolved (Barradell, ; Santisteban‐Espejo et al., ). Attempts have been made to identify TC in clinical subjects as a pedagogical tool that may improve teaching in health sciences curricula; however, there is a lack of evidence regarding the identification of TC in the field of teaching histology. In biomedicine, histology is a basic science that deals with concepts and facts regarding the microscopic structure of the human body. An understanding of histology is crucial to comprehend human biochemical and physiological processes, as well as to gain insights into how structural abnormalities lead to disorders resulting in disease (Shaw & Friedman, ; Lowe et al., ; Al Khader et al., ). As a basic science, histology constitutes a crossroads among other curricular contents in basic and clinical sciences. Accordingly, histology is a fundamental part of most syllabi in biomedical disciplines and can be considered a core cognitive element in health sciences curricula (Moxham et al., ; Cui & Moxham, ). In the health sciences curricula, histology offers the student the possibility of approaching and understanding the microscopic structure and ultrastructure of normal human cells and tissues as the basis of human pathology. In general, the study of the four basic tissues of the human body, i.e., the epithelial, connective, muscle and nerve tissues, is considered in all histology curricula. However, there is no international consensus on the exact contents that should be included in a histological curriculum. In this regard, a recent study by Cui and Moxham using Delphi panels was able to determine the most relevant matters of medical histology, and the authors were able to propose a core syllabus for this discipline (Cui & Moxham, ). As expected, the study of human cells and basic tissues was considered within this syllabus, but several specialized tissues corresponding to the skeletal muscle, cartilage, bone, blood, bone marrow and other structures of the human body were also considered, with 100% consensus of the Delphi panel. In addition, several strategies have been reported for the optimization of teaching in histology, such as the use of virtual microscopy (Mione et al., ), electronic learning resources (Ali & Syed, ) and the correlation with clinical cases (Eurell et al., ). The teaching of histology has been enriched by interesting approaches such as functional and dynamic histology (Vandevelde, ; Varga et al., ). These pedagogical strategies consider that the teaching and learning process in histology should not be oriented in a traditional way that requires students to reproduce the components of a tissue in an unchanging or fixed mode, but rather should facilitate comprehension of the physiological and clinical significance of different tissue structures (Kerr, ). Consequently, the design and implementation of histology teaching programs could be significantly optimized by taking into account this dual, static–dynamic dimension of the subject matter.
Because of the many interdisciplinary relationships involved in this subject, learning histology could be significantly improved by the identification of TC. Ideally, these concepts should focus directly on the most integrative and troublesome aspects for students in order to optimize transformative, irreversible learning about microscopic structures in the human body. Given that improvements in educational programs are grounded in an accurate analysis of students' understanding of the subject matter, the evaluation of students' perceptions might constitute an appropriate tool to identify histological TC. Once identified, these concepts can then be used to construct the foundations for the further comprehension of clinical and surgical subjects. The results of the present study may have the potential to impact both basic and clinical sciences in higher medical education, given that knowledge of histology is essential for further progress in learning basic curricular contents intended to provide an understanding of the architecture of damaged tissues in different human diseases. Histological concepts are also closely related to magnifying instruments (microscopes) and histological techniques (staining). Thus, this instrumental and methodological dimension is conjoined to histology, and should not be overlooked in the identification of histological TC. However, the interrelations between histology and other disciplines are necessarily different in health sciences curricula, in which the course contents and goals are specific for each degree. In fact, guidance on how histology is taught differs across curricula, and focuses on different aspects within each degree program. For this reason, it is important to identify TC not only for each subject, but also for each academic degree program that includes a given subject. In the present study, TC were investigated during the teaching and learning process, an approach that made it possible to evaluate students' effort and cognitive integration as they occurred. Students were asked to identify histological TC after the histology course had been taught in its entirety. The undergraduate histology course in the health sciences curricula at the University of Granada ranges from 60 hours in the medicine and dentistry degree programs to 30 hours for the pharmacy degree (combined with 30 hours of anatomy in a single 60‐hour course). Part of the histology course consists of 10 to 15 hours of work in practical sessions to learn how to identify basic human tissues and their main characteristics in microscope slides, with both light microscopes and virtual microscope tools. Content knowledge is evaluated with theoretical and practical tests at the end of the course. The present work aimed to evaluate students' perceptions regarding a group of concepts that are considered elemental in histology in different health sciences curricula, i.e., dentistry, medicine and pharmacy. Insights into students' perceptions should be useful in efforts to redesign and optimize the pedagogical methods and approaches used in the teaching and learning of histology. Design of the study The present research was carried out to identify perceptions of TC in histology by students enrolled in undergraduate degree programs in medicine, dentistry and pharmacy. The study took place during the period when the histology course was taught; that is, the same module of histological concepts was taught to all students by the same teaching faculty members in the same period of the academic year.
All students received the same information about the goals of the study and the procedure to be used. This study complied with the Helsinki Declaration and was approved by the Ethics Committee on Human Research of the University of Granada (ref. n. 622/CEIH/2018). Participants in the study The study was done at the University of Granada (Spain). The sample consisted of 410 undergraduate students enrolled in the first year of three health sciences degree programs. Because general histology at the University of Granada is only taught as a full course in the first year of the degree program, only first‐year students were enrolled in this study. A total of 244 medical students (97.21% of all first‐year students), 64 dentistry students (94.11%), and 102 pharmacy students (88.70%) participated. The medicine, dentistry and pharmacy degree programs were chosen for this study because they share a similar syllabus for health sciences curricula, both at the University of Granada and at other universities in Spain. Mean age of the participants ranged from 18.7 ± 1.7 years in pharmacy to 19.5 ± 4.3 years in dentistry, with no significant differences among degree programs. The male/female ratio was similar in all three groups of participants ( P = 0.456). Further information on the participants' demographic and university access characteristics is shown in Table . All participants received information about the definition of TC before they completed the questionnaire, which was distributed to students at the end of the academic year once all theoretical contents in the histology course had been taught. Participation was voluntary and consistent with the procedures of the university research review board. The students were given no extra credit or compensation for participating, and they were informed that their participation would help them explore their own learning processes. Questionnaire tool To identify histology TC, a tool named the Histological Threshold Concepts questionnaire (HTCq) was developed by faculty members of the Department of Histology at the University of Granada. (The questionnaire is available for use as long as this article is cited). This survey was registered in the Safe Creative Electronic Register of Intellectual Property (Saragossa, Spain) on November 5, 2021, code 2111059726532. This questionnaire consists of 37 concepts previously used in histology teaching activities. Students were asked to indicate the value of each concept as a TC on a five‐point Likert scale from 1 (total disagreement) to 5 (total agreement). Definitions were provided below each concept in the questionnaire. The HTCq (Figure ) was completed after the last class of the histology course. The students were first informed of the aim of the HTCq and given instructions about how to complete it. Although the HTCq presented students with 37 individual concepts, the constitution of groups of concepts (i.e., factors) was used to analyze the responses. These factors were obtained through statistical analysis, namely principal components factor analysis. Principal components analysis was used to identify subgroups of items (factors) that were highly correlated. The aim was to obtain the minimum number of factors able to explain the greatest proportion of variance in the original set of items.
This method consecutively identified factors which explained decreasing proportions of the overall variance (i.e., at the end of the process the number of factors is the same as the number of items), and assigned a quality score value to each factor (called the eigenvalue). Only factors with a high eigenvalue (i.e., greater than 1, in accordance with the Kaiser rule) were considered informative, and therefore retained. This procedure identified 10 factors that were retained. To identify which items pertained to each of the 10 factors, the correlation coefficients between each item and each factor were obtained. Each item produced a value for this coefficient (called the factor load) for each of the 10 factors retained. Ideally, each item should yield a high load on only one factor and very low loads on the remaining factors, although sometimes the same item showed high loads (i.e., greater than 0.40) on two or even more factors. To minimize this situation (which makes it difficult to distribute the items across the extracted factors, and consequently makes their interpretation challenging) and favor the optimal distribution of loads, the mathematical procedure called varimax rotation was first applied. The model obtained from the results was contrasted with different confirmatory factor models derived from the different variants to be tested. The goodness of fit index (GFI), adjusted goodness of fit index (AGFI), and root mean square error of approximation (RMSEA) coefficient were used to verify the fitness of each proposed model, and thus to validate the underlying factorial structure of the HTCq tool (Bollen, ). These indexes and coefficients measure the fit of the model yielded by the data to a theoretical model consisting of 10 factors (groups of concepts) (Table ). The statistical software used was SPSS, version 15 (SPSS Inc., Chicago, IL) for exploratory factor analysis and R statistical software, version 3.6.3 (R Foundation for Statistical Computing, Vienna, Austria) for confirmatory factor analysis. Lastly, the 37 concepts were divided into ten factors: morphostructural basic concepts linked to morphology, structure, function and the relationships among them (MBC); tissue organization concepts related to the elemental components of tissues (TO); hierarchical body organization concepts related to the levels of organization in the human body (HBO); organ histofunctional organization concepts related to the stromal and parenchymal nature of tissue components (OHO); concepts related to the histogenesis and development of tissues (HD); tissue functional state concepts related to the general activity of tissues (TFS); tissue engineering concepts related to the generation of artificial tissues (TE); microscopic magnification concepts related to magnification with different instruments (MM); microscopic examination analysis concepts related to histological techniques (MEA); and concepts related to histological information arising from the two‐dimensional observation of microscopic structures (HIO). The concepts pertaining to each factor are shown in Table . Statistical analysis Cronbach's alpha was used to assess test reliability. The results were analyzed to identify differences among the different study groups in the concepts identified as TC. Average values and standard deviations were calculated for each factor. Two‐way ANOVA was used to compare the results obtained for each factor among the three groups.
Thereafter, post‐hoc analyses with the Tukey test were used to detect specific pairwise differences between dentistry and medicine, dentistry and pharmacy, and medicine and pharmacy groups. Statistical tests were done with SPSS statistical package, version 15 (SPSS Inc., Chicago, IL) and the significance level was set at P < 0.05 for all tests. Effect sizes of the differences were calculated as Cohen's d (Δ), and were categorized as small (0 ≤ Δ < 0.333), medium (0.333 ≤ Δ < 0.666) or large (0.666 ≤ Δ < 1) based on benchmarks suggested by Cohen ( ).
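To make the factor‐extraction procedure concrete, the following sketch illustrates the steps described above on synthetic data: principal‐component extraction from the item correlation matrix, retention of factors with eigenvalues greater than 1 (Kaiser rule), varimax rotation, and assignment of items to factors by their largest loading, flagging cross‐loadings at |loading| ≥ 0.40. It is a minimal illustration only, not the authors' SPSS/R analysis; the simulated responses and all variable names are hypothetical, and a dedicated statistical package would normally be used for the full exploratory and confirmatory analyses.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a loading matrix (items x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_sum = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0)))
        )
        rotation = u @ vt
        new_var_sum = s.sum()
        if new_var_sum < var_sum * (1 + tol):
            break
        var_sum = new_var_sum
    return loadings @ rotation

# Synthetic stand-in for the real data set: 410 students x 37 Likert items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(410, 37)).astype(float)

# 1. Item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)

# 2. Principal-component extraction: eigenvalues/eigenvectors, largest first.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Kaiser rule: retain factors with eigenvalue > 1.
n_factors = int(np.sum(eigvals > 1.0))
print(f"Factors retained by the Kaiser rule: {n_factors}")

# 4. Unrotated loadings for the retained factors, then varimax rotation.
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
rotated = varimax(loadings)

# 5. Assign each item to the factor on which it loads most strongly,
#    flagging items with |loading| >= 0.40 on more than one factor.
for item in range(rotated.shape[0]):
    best = int(np.argmax(np.abs(rotated[item])))
    n_high = int(np.sum(np.abs(rotated[item]) >= 0.40))
    flag = " (cross-loading)" if n_high > 1 else ""
    print(f"item {item + 1:2d} -> factor {best + 1}{flag}")
```

With purely random responses the Kaiser rule retains many weak factors; with real, correlated Likert data the retained set would be far smaller, as in the ten factors obtained in the present study.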
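In the same spirit, a companion sketch (again on synthetic, hypothetical data rather than the study data set) shows how the remaining analyses could be reproduced: Cronbach's alpha for scale reliability, an omnibus ANOVA across the three degree programs for a single factor followed by Tukey's HSD post‐hoc comparisons, and Cohen's d categorized with the small/medium/large cut‐offs quoted above. For brevity it uses a one‐way comparison across degree programs, whereas the study reports a two‐way ANOVA.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / pooled_sd

def effect_size_label(d):
    """Categorize |d| with the cut-offs used in this study."""
    d = abs(d)
    if d < 0.333:
        return "small"
    if d < 0.666:
        return "medium"
    return "large"

# Synthetic per-student mean scores on one factor, by degree program (hypothetical).
rng = np.random.default_rng(1)
scores = {
    "dentistry": np.clip(rng.normal(4.5, 0.7, 64), 1, 5),
    "medicine": np.clip(rng.normal(4.3, 0.9, 244), 1, 5),
    "pharmacy": np.clip(rng.normal(4.3, 0.9, 102), 1, 5),
}

# Reliability would be computed on the full 37-item matrix; here a toy 5-item example.
toy_items = rng.integers(1, 6, size=(410, 5)).astype(float)
print(f"Cronbach's alpha (toy items): {cronbach_alpha(toy_items):.3f}")

# Omnibus comparison of the three degree programs for this factor.
f_stat, p_value = f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (Tukey's HSD).
all_scores = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(all_scores, groups, alpha=0.05))

# Pairwise effect sizes with the study's small/medium/large cut-offs.
pairs = [("dentistry", "medicine"), ("dentistry", "pharmacy"), ("medicine", "pharmacy")]
for g1, g2 in pairs:
    d = cohens_d(scores[g1], scores[g2])
    print(f"{g1} vs {g2}: d = {d:.3f} ({effect_size_label(d)})")
```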
All participants in the study completed the HTCq, and scores were recorded for each TC and then clustered in 10 categories according to exploratory and confirmatory factor analysis. The results for perceptions of each factor by dentistry, medicine and pharmacy students are shown in Table and Figure . Individual scores for each concept are shown in Figure to provide more detailed information. Cronbach's alpha was 0.901, indicating acceptable test reliability. The overall results for students in all three groups showed that the most relevant concepts for learning histology were those related to tissue organization (4.4 ± 0.9), tissue functional states (4.1 ± 0.9), and histological information arising from two‐dimensional observation (4.1 ± 1.0). In fact, tissue organization concepts were perceived most clearly as TC by students in all groups. However, analysis of the responses by students in different health science degree programs disclosed some differences among groups. Specifically, dentistry students highlighted the importance of morphostructural concepts (4.3 ± 0.8), whereas medical students considered that notions related to histological information arising from two‐dimensional observation (4.2 ± 0.9) and tissue functional states (4.3 ± 0.9) were more relevant. Pharmacy students emphasized the value of tissue functional states concepts (4.3 ± 0.9) for their learning of histology. Dentistry students assigned significantly higher scores to morphostructural and tissue organization concepts than medical or pharmacy students (Figure ).
Morphostructural concepts were highly valued (mean score of 4.3 ± 0.8) by dentistry students, whereas medical student perceptions yielded a slightly lower mean score of 4.0 ± 1.0 ( P = 0.001) and pharmacy students' mean score for these concepts was 4.1 ± 0.9 ( P = 0.026). Similarly, the results for tissue organization concepts yielded a mean score of 4.5 ± 0.7 among dentistry students, with significantly lower scores among medical (4.3 ± 0.9; P < 0.001) and pharmacy students (4.3 ± 0.9; P < 0.001). There were also statistically significant differences among groups for organ histofunctional organization concepts. Although the difference between medical (4.1 ± 1.0) and dentistry students (3.9 ± 1.0) was not significant ( P = 0.216), pharmacy students perceived these concepts to be of less relevance (3.7 ± 1.1), and their mean score was significantly lower than in the medicine ( P = 0.007) and dentistry ( P = 0.018) groups. Consequently, the effect size for the difference between medical and pharmacy students in their perception of organ histofunctional organization concepts was considered medium (Δ = 0.388). Concepts related to histogenesis and development, hierarchical body organization, and tissue functional states were scored similarly by students in all three groups, with no significant differences (Figure ). However, the effect size for differences in the perceived relevance of concepts related to tissue functional states was considered medium in dental students compared to medical (Δ = 0.383) and pharmacy students (Δ = 0.383). Perceptions of tissue engineering showed remarkable differences among groups (Figure ). The highest mean score was seen in dentistry students (4.1 ± 0.9), and this score was significantly higher than in medical (3.7 ± 1.0, P < 0.001) and pharmacy (3.6 ± 1.1, P = 0.007) students. In fact, the effect size for the difference in the perception of tissue engineering concepts was greatest between dentistry and pharmacy students (Δ = 0.436). Concepts related to histological information arising from two‐dimensional observation of microscopic structures received a higher score from medical students (4.2 ± 0.9) compared to pharmacy students (4.1 ± 1.1; P = 0.001). Among the three groups, pharmacy students gave the lowest scores to microscopic examination analysis (3.6 ± 1.1) and magnification concepts (3.5 ± 1.1), although the differences among groups were not statistically significant (Figure ). Recent decades have seen the appearance of several pedagogical theories characterized by the central positioning of the student in the learning process (Walder, ). One such theory, the Threshold Concepts learning framework, aims to identify the concepts most relevant for learning in a given discipline. Currently, newer studies have attempted to identify TC in a wide variety of disciplines such as nursing (McKillop et al., ), palliative medicine (O’Callaghan et al., ), economics (Randall et al., ), or physics (Serbanescu, ), among others. As noted by Meyer and Land ( ), TC theory is grounded in the notion of pedagogy as a space of uncertainty (Shulman, ), and a specific concept can be defined as a TC if it is transformative, irreversible, integrative, and troublesome (Meyer & Land, ). 
Threshold concepts in health sciences Different strategies have been proposed within health sciences curricula such as audio diaries (Neve et al., ), recording and viewing video discussions in the laboratory (Carstensen & Bernhard, ), or discussions about significant learning experiences during clinical sessions in general medicine (Vaughan, ) and pediatrics (Randall et al., ). Moreover, a TC‐based pedagogical framework has also been used to elucidate students' perceptions of medical professionalism (Neve et al., ), as well as central notions in psychology and bioethics (Collett et al., ). Of note, most of these approaches in medical education have focused on the clinical level, whereas basic and transversal subjects have received less attention. In this connection, Loertscher et al. described a process of TC identification in biochemistry that involved faculty members and graduate students, and resulted in the identification of four notions considered central for adequate progress in biomedical studies: steady state, biochemical pathway dynamics and regulation, the physical basis of interactions, and thermodynamics of macromolecular structure formation (Loertscher et al., ). However, the present study is the first of its kind to use the TC pedagogical framework to identify TC in histology. The histology TC identified in the present study are essential for medical education, as they constitute a pillar for further learning and comprehension of human pathology in medical and surgical curricula. This study evaluates students' perceptions for the identification of TC in histology in the undergraduate degree programs in dentistry, medicine and pharmacy at the University of Granada (Spain). The study of histology in these curricula is intended to enable students to identify the tissue disorders or drug effects underlying human diseases (Pawlina & Ross, ); consequently, the identification of TC in histology holds the potential to considerably improve curricular design in health sciences in higher medical education. Further, after appropriate characterization of forgetting curves (which tend to decay with time), it would be of interest to follow up on these students when they reach the final year in their degree programs. This could lead to a more accurate picture of the TC identified in the present study. Survey‐based studies can be a useful tool to characterize profiles associated with the perception of knowledge in conceptual, procedural and attitudinal dimensions (McKee et al., ; Tsu et al., ; Sola et al., ). In this connection, although the use of focus groups of practitioners (Tanner, ) or reflective writing pieces (Fouberg, ) can contribute to the identification of TC, a questionnaire‐based methodology was used in the present study for several reasons. First, a collaborative strategy combining academics' experience and students' perceptions of learning can offer a more inclusive model to identify the characteristics associated with TC, given that different actors are consulted at different times during the learning process. On one hand, descriptors such as “troublesome” and “transformative” may be better identified by students, i.e., the actors who are confronted with certain concepts for the first time during their university education; this experience may thus prompt them to analyze new concepts during their training. 
Narrative and semi‐structured interviews are considered an adequate method for assessing the troublesomeness component of TC, as they allow students to present their particular stories (Martindale, ). Nevertheless, these methods could introduce some bias, as troublesomeness is not only indicative of TC but may also be due to a lack of effort or motivation (Santisteban‐Espejo et al., ). On the other hand, the integrative and irreversible nature of a TC may be better assessed through collaboration with academics, given that the experience of confronting different disciplines over the course of longer periods equips them to develop a more realistic, comprehensive view of these attributes. Survey‐based strategies combine the perspectives of both actors involved in the learning experience: academics propose certain concepts, which are then evaluated by students in terms of liminality, irreversibility, transformativeness, and their integrative nature. In fact, the use of surveys designed by relevant experts has recently been reported as a method to identify TC in higher education (Kilgour et al., ). Threshold concepts in histology Undoubtedly, the task of finding TC for a specific discipline is challenging, and as yet there is no gold‐standard technique to achieve this goal. However, collaboration between students and teachers appears to be a suitable empirical approach that can facilitate and optimize TC identification. In this connection, Cousin proposed “transactional curriculum inquiry”, a process of coordinated crosstalk among students, academics and education developers, as an effective partnership for the identification of TC in higher education (Cousin, ). An important consideration in this regard is that the tool developed and described here should be constructed appropriately and validated with multivariate data analysis techniques such as exploratory factor analysis (Lucas da Rocha Cunha et al., ). Exploratory factor analysis clusters survey items with the strongest mutual correlations based on students' responses (Hair et al., ). The questionnaire used in the present study was validated with exploratory factor analysis, which grouped 37 threshold concepts into 10 different factors. In addition, the survey structure was further validated with confirmatory factor analysis, which indicated an acceptable goodness of fit. Overall, students in dentistry, medicine and pharmacy identified concepts related to tissue organization, tissue functional states, and microscopic identification as the most relevant TC. It is important to highlight that students from different degree programs perceived the learning of histology as a process that requires integrating two dimensions: static concepts related to the constituent elements of tissues, e.g., the concept of the cell or the extracellular matrix, and dynamic concepts such as stem cells as tissue renewal substrates, and the euplasic, proplasic and retroplasic states of tissues. The complexity of integrating static and dynamic concepts may pose a considerable barrier to the comprehension of histology. The findings of the present study disclosed some differences in students' perceptions among the three health sciences degree programs, a result that did not occur in isolation. For example, findings reported previously by Campos‐Sanchez et al. were consistent with the present results.
Significant differences were reported in the motivational profiles of students from different health sciences degree programs (Campos‐Sanchez et al., ), and related work found that the conceptions of learning also differed significantly between students in health sciences and non‐health sciences master's degree programs (Campos et al., ). In addition, research in veterinary sciences compared digital slides and traditional microscopy in the teaching of histological concepts (Mills et al., ; Brown et al., ), and several studies have implemented the TC framework to evaluate teaching experiences developed for veterinary students in the clinical setting (Lygo‐Baker et al., ; Alpi & Hoggan, ). In the present study, veterinary students' perceptions regarding histological TC could not be investigated because the university where this study was carried out does not offer a degree program in this discipline. Dentistry students in particular identified concepts related to tissue organization and morphostructural concepts as the most relevant to them. In dentistry, morphological and functional concepts are closely interrelated, and this could hinder students' efforts to distinguish between them (Nanci, ). Nevertheless, in medicine, morphology, i.e., the study of the size, shape and the constituent parts of tissues, is well delimited from the physiological study of different tissue functions (Anyanwu et al., ; Sherer et al., ). Once identified, this conceptual divergence could serve to design pedagogical programs built on collaboration between students from different health sciences degree programs, and to enhance the development of interprofessional competencies in histology, as proposed in a previous study (Haber et al., ). Medical students, for their part, perceived the learning of histological concepts associated with the two‐dimensional identification of microscopic structures as an area of concern. This attitudinal perception raises an important issue, given that microscopic identification skills are necessary not only for histology learning, but also for the learning of applied medical subjects such as human pathology (Braun & Kearns, ). In this connection, a recent systematic review contrasted the role of digital microscopy compared to conventional light microscopy in the learning of pathology (Rodrigues‐Fernandes et al., ). Learning to identify tissues correctly under a microscope supports solidifying principles and concepts, and adds a real‐life knowledge component that cannot be acquired through theoretical teaching only. The comprehension of two‐dimensional features of microscopic images will clearly be useful in students' subsequent learning in the health sciences curriculum, and may help them to attain better skills and competencies in clinical diagnosis in the future. In fact, modern histology, which is directly oriented to the resolution of clinical problems through the use of teaching methods such as the examination of histopathological slides (Chapman et al., ; Hoda & Hoda, ), is one of the pillars of the clinical and professional qualifications of future doctors. Turning to another TC, medicine and dentistry students' perceptions of tissue engineering concepts (native tissues, artificial tissues, and cell, tissue and organ culture) differed significantly in comparison to pharmacy students. 
In medicine and dentistry degree programs, histology is conceived not only as a diagnostic tool, but also as a discipline oriented to the treatment of human diseases with artificial tissues as substitutes. This educational goal is especially relevant at present, because tissue engineering currently constitutes a consolidated area with a well‐defined cognitive framework which can be implemented in educational programs in association with histology (Saavedra‐Casado et al., ; Santisteban‐Espejo et al., , ; Martin‐Piedra et al., ). Different bioartificial tissues have been developed to functionally replace damaged bone (McDermott et al., ), cornea (Gonzalez‐Andrades et al., ; Rico‐Sanchez et al., ), peripheral nerve (Carriel et al., ; Huang et al., ), skin (Egea‐Guerrero et al., ), blood vessels (Chandra & Atala, ), cartilage (Park et al., ), and oral mucosa (Blanco‐Elices et al., ), among other tissues. Now that new biomimetic artificial tissues are available to treat previously untreatable conditions, tissue engineering should be taught not only as a new horizon in histological science, but also as an important element of educational programs in health sciences curricula (Griffith et al., ; Wyles et al., ). Currently, therapeutic procedures inspired by tissue engineering methods such as guided tissue regeneration are widely used in dentistry for reconstruction after dental and periodontal lesions have been removed (Lang & Lindhe, ; Garzon et al., ; Bueno et al., ; Xu et al., ; Azim et al., ). It is likely that because of this clinical impact, medicine and dentistry students perceived tissue engineering concepts as more valuable and applicable to daily clinical practice, whereas pharmacy students did not share this perception. For pharmacy students who participated in the present study, histology is not as important, as a transversal subject in their higher education program, as other subjects such as biochemistry and drug development, which are given more attention. Not surprisingly, histological concepts associated with tissue organization and tissue functional states were the TC valued most by pharmacy students. Histology, as taught in the pharmacy curriculum, is oriented toward knowledge of the histological structures functionally linked to drug metabolism and the absorption, distribution, and elimination of drug components (Pang et al., ). Consequently the comprehension of histological structures in organs such as the liver and kidneys is an important issue, because this knowledge is needed to further understand drug metabolism along with the therapeutic and possible side effects of drug compounds (Kleiner, ; Al‐Naimi et al., ). In summary, this questionnaire‐based study provides evidence of an approach that can be used to identify different pedagogical profiles related to the teaching of histology in dentistry, medicine and pharmacy. The three groups of students who participated in the present study generally perceived the learning of microscopic structures of organs as a process that requires the harmonization of static and dynamic concepts. Limitations of the study One of the limitations of this study is the use of a questionnaire developed originally in the Spanish language. In order to preserve the significance of the concepts defined in the questionnaire, versions in other languages should be tested only after accurate, validated translation. 
Another limitation is the fact that histology faculty members, rather than the students themselves, suggested the TC that were scored by students in the questionnaire. This approach has been criticized by some authors because many academics have learned these concepts some time ago, and may consequently not be in an ideal position to identify troublesomeness during the learning process, which is one of the essential characteristics of TC (Randall et al., ). Moreover, almost by definition, the different actors involved in the learning process (educators and students) are located on different sides of the liminal space. This fact may introduce disparities in how students and academics understand genuine TC. Consequently, the authors of the present study cannot state categorically that the faculty members involved in this study identified all potential concepts that might later be confirmed as TC by the students. To overcome this potential shortcoming, the reliability of the data could be further verified through discussion groups with students, who could confirm some of their questionnaire responses soon after analysis of the results. Different strategies have been proposed within health sciences curricula such as audio diaries (Neve et al., ), recording and viewing video discussions in the laboratory (Carstensen & Bernhard, ), or discussions about significant learning experiences during clinical sessions in general medicine (Vaughan, ) and pediatrics (Randall et al., ). Moreover, a TC‐based pedagogical framework has also been used to elucidate students' perceptions of medical professionalism (Neve et al., ), as well as central notions in psychology and bioethics (Collett et al., ). Of note, most of these approaches in medical education have focused on the clinical level, whereas basic and transversal subjects have received less attention. In this connection, Loertscher et al. described a process of TC identification in biochemistry that involved faculty members and graduate students, and resulted in the identification of four notions considered central for adequate progress in biomedical studies: steady state, biochemical pathway dynamics and regulation, the physical basis of interactions, and thermodynamics of macromolecular structure formation (Loertscher et al., ). However, the present study is the first of its kind to use the TC pedagogical framework to identify TC in histology. The histology TC identified in the present study are essential for medical education, as they constitute a pillar for further learning and comprehension of human pathology in medical and surgical curricula. This study evaluates students' perceptions for the identification of TC in histology in the undergraduate degree programs in dentistry, medicine and pharmacy at the University of Granada (Spain). The study of histology in these curricula is intended to enable students to identify the tissue disorders or drug effects underlying human diseases (Pawlina & Ross, ); consequently, the identification of TC in histology holds the potential to considerably improve curricular design in health sciences in higher medical education. Further, after appropriate characterization of forgetting curves (which tend to decay with time), it would be of interest to follow up on these students when they reach the final year in their degree programs. This could lead to a more accurate picture of the TC identified in the present study. 
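As an aside for readers who wish to reproduce the factor-analytic validation of the questionnaire described earlier in this discussion, a minimal Python sketch is given below. It assumes the Likert-type responses are available as a table with one row per student and one column per candidate threshold concept; the file name, column layout and the use of the factor_analyzer package are illustrative assumptions rather than a description of the software actually used in the study, and the confirmatory factor analysis with its goodness-of-fit indices would typically be run separately in dedicated structural equation modeling software.

```python
# Minimal sketch of an exploratory factor analysis of questionnaire items
# (assumptions: the CSV layout and file name are hypothetical; 10 factors
# mirrors the number of factors retained in the study).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("tc_questionnaire_responses.csv")  # one column per item

# Adequacy checks commonly reported before factor extraction.
chi_square, p_value = calculate_bartlett_sphericity(responses)
_, kmo_total = calculate_kmo(responses)
print(f"Bartlett sphericity p = {p_value:.3g}, overall KMO = {kmo_total:.2f}")

# Extract 10 factors with varimax rotation and inspect item loadings.
efa = FactorAnalyzer(n_factors=10, rotation="varimax", method="minres")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)

# Group items by the factor on which each loads most strongly.
print(loadings.abs().idxmax(axis=1).sort_values())
```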
In conclusion, the results of this study are potentially useful to optimize the design of health sciences curricula. The identification of threshold concepts through students' perceptions appears to be a useful approach to improving the teaching and learning process in the dentistry, medicine and pharmacy undergraduate curricula. The authors have no conflict of interest to declare. Supplementary Figures S1-S3 and Table S1 are available as additional data files.
Feasibility of a Geriatric Oncology Longitudinal End to End (GOLDEN) Program in a Tertiary Cancer Center in Singapore
a8c9e45b-675d-4fd1-aa63-3f863110a6f1
10078895
Internal Medicine[mh]
Globally, the population is aging, with the number of people aged 60 years and older projected to double from 1 billion worldwide in 2020 to 2.1 billion by 2050. The aging population, coupled with the risk of cancer increasing with age, predicts an exponential rise in cases of older adults diagnosed with cancer. Given that the provision of care for older adults with cancer presents the unique challenge of requiring expertise in both oncologic and geriatric issues, most cancer programs are still lacking in terms of meeting the complex needs of these patients. , Consequently, in recent years, there has been an increasing urgency to address that through training and subsequently, the introduction of dedicated geriatric oncologic models of care delivery. Singapore is in a similar predicament with the number of working adults supporting the older population decreasing from 5 young adults to one older adult today, to 2 young adults for one older adult by 2030. Correspondingly, the incidence of cancer in older adults aged 65 years and over is expected to rise from 121,000 in 2020 to 349,000 in 2040. Despite this, a nation-wide survey of oncologists conducted by the National University Cancer Institute, Singapore (NCIS), revealed that most oncologists in Singapore (61%) have never engaged the help of a geriatrician in the decision-making process for cancer treatment and less than half of the participants (47%) were aware that there were geriatric oncology assessment scales available. However, in line with the recommendations of the American Society of Clinical Oncology (ASCO), International Society of Geriatric Oncology (SIOG) and National Comprehensive Cancer Network (NCCN) to perform geriatric assessments (GA) on patients before initiating therapy, , the vast majority of the oncologists surveyed (90%) welcomed the introduction of a geriatric oncology service. With the recognition of the importance of Geriatric Assessment (GA)-directed interventions in guiding and supporting cancer care in older adults, 2 concurrent pilot programs were initiated in the National University Hospital (NUH) and the National University Cancer Institute, Singapore (NCIS) to cater to the needs of patients planned for cancer surgery and those planned for chemotherapy and/or radiation. The Management & Innovation for Longevity in Elderly Surgical patients (MILES) program was started in NUH in 2017 to enhance perioperative care for older adults aged 65 years and above requiring elective major surgery. All patients enrolled into the program are managed by a multidisciplinary team with expertise to meet their complex care needs. All patients will undergo a GA administered by MILES nurses. During the assessment, attention is paid to their functional status, cognition, nutritional status and level of frailty so the patients requiring input by dietitians, physiotherapists and occupational therapists are promptly identified and referred. These patients will receive personalized nutritional intervention and are prescribed exercise regimens tailored to their capacity and needs. They are also provided with strategies and aids to cope with the limitations they are experiencing. Furthermore, patients are referred to a specialist in perioperative medicine should they require medical optimization. These management strategies are geared toward optimizing patients’ health status pre-surgery, thus reducing their operative risks and improving their outcomes from the surgery. 
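To make the referral logic described above concrete, the sketch below shows one way such domain-to-referral routing could be expressed in code. It is a minimal illustration only: the domain names, flag structure and referral labels are simplified assumptions and do not reproduce the actual MILES assessment forms or referral pathways.

```python
# Illustrative sketch (not the actual MILES tooling): map flagged geriatric
# assessment domains to the allied health referrals described in the text.
from typing import Dict, List

REFERRAL_MAP = {
    "nutrition": "dietitian",
    "mobility_or_function": "physiotherapist",
    "daily_activity_limitations": "occupational_therapist",
    "needs_medical_optimisation": "perioperative_medicine_specialist",
}

def plan_referrals(domain_flags: Dict[str, bool]) -> List[str]:
    """Return the referrals suggested by the domains flagged during assessment."""
    return sorted({referral for domain, referral in REFERRAL_MAP.items()
                   if domain_flags.get(domain, False)})

# Example: a patient flagged for nutritional risk and reduced mobility.
print(plan_referrals({"nutrition": True, "mobility_or_function": True}))
# ['dietitian', 'physiotherapist']
```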
To ensure a smooth hospital stay and transition to home or a step-down facility, the multidisciplinary team remains involved in the continuation of care for all patients within the program throughout their journey. The MILES nurses will conduct follow-up calls and visits to patients peri-surgery to ensure that the trajectory of their surgical journey and recovery is keeping to the expected course. The allied health teams involved during the patients' preoperative period will continue to partner closely with the surgical teams in the postoperative period to expedite the patients' recovery. This input continues after discharge till the patients are fit to be discharged from their specialized care. This is to ensure that the program restores as many patients as possible to their premorbid level of health and quality of life. A pilot Geriatric Medical Oncology (GO) program supported by the Singapore Cancer Society (SCS) grant was also developed at NCIS in 2017. Similarly, all patients aged 70 years and older seen in NCIS undergo a GA on the day of their first visit. All cases are discussed at a multidisciplinary meeting with a geriatric medical oncology (GO) team consisting of a medical oncologist, radiation oncologist, geriatrician, pharmacist and nurse coordinator. The GO team identifies older adults who are pre-frail and frail through the GA and multidisciplinary discussion and synthesizes a summary of treatment recommendations and interventions which can help support patients through their cancer treatment. This summary is then conveyed to the primary oncologist in a memo. The GO team also works with patients' primary oncologists to design a suitable treatment plan for optimal results, without compromising their independence and quality of life. Patients on the program are monitored closely for treatment-related toxicities during their cancer treatment. With recognition of the need for a program which can provide seamless continuity of care to older adults with cancer in our hospital, the 2 teams merged to form the Geriatric Oncology Longitudinal End to End (GOLDEN) program. This end-to-end program was the first of its kind in Singapore. Consequently, the combined team was awarded a grant from a philanthropic fund, the Jurong Health Fund, to support the program in NCIS (NUH) and Ng Teng Fong General Hospital (NTFGH), 2 hospitals within the National University Health System (NUHS) cluster. The GOLDEN program commenced in August 2019. The age cutoff was aligned at 65 years and older and the handover workflows were fine-tuned between the surgical MILES program and the geriatric medical oncology program to facilitate a seamless transfer of care for pre-frail and frail patients. All cancer patients aged 65 years and older seen in NCIS would be screened on their first visit with a Geriatric 8 (G8) screening questionnaire to identify patients who might benefit from a Comprehensive Geriatric Assessment (CGA). For patients who scored 14 or less, an electronic memo would be sent to their primary oncologist through the hospital's electronic medical records system to highlight their potential suitability for the GOLDEN program.
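The screening step just described reduces to a simple triage rule: patients aged 65 years and older with a total G8 score of 14 or less are flagged so that a memo can be raised for their primary oncologist. A minimal sketch is shown below; the function and field names are illustrative assumptions and do not correspond to the hospital's electronic medical records system.

```python
# Minimal sketch of the G8-based triage rule described above (illustrative only).
# A total G8 score of 14 or less in a patient aged >= 65 flags potential
# suitability for the GOLDEN program and a comprehensive geriatric assessment.
def flag_for_golden(age: int, g8_total: float) -> bool:
    """Return True when an electronic memo to the primary oncologist is warranted."""
    return age >= 65 and g8_total <= 14

for age, score in [(72, 12.5), (68, 16.0), (80, 14.0)]:
    decision = "flag for CGA referral" if flag_for_golden(age, score) else "no memo"
    print(age, score, decision)
# 72 12.5 flag for CGA referral; 68 16.0 no memo; 80 14.0 flag for CGA referral
```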
All patients referred to the GOLDEN program will undergo a CGA before their treatment, alongside a consultation with the geriatric medical oncology team in a one-stop geriatric oncology clinic, where they can be seen on the same day by members of the multidisciplinary team including a dietitian, physiotherapist, occupational therapist and a medical social worker if required. The cases are also discussed at a multidisciplinary meeting by the geriatric oncology (GO) team. Patients in the GOLDEN program would be under the supportive care of the geriatric medical oncology (GO) team after surgery until the end of their cancer treatment. For older adults with challenging cancer survivorship issues and geriatric syndromes, the geriatric medical oncology team would continue to follow them up after the completion of their oncological treatment until specialized geriatric oncology care is no longer required. Their care would then be transferred to their primary oncologist and their primary care provider. After establishing the GOLDEN program in NCIS, the program expanded to NTFGH, a hospital within the same healthcare system. A multidisciplinary team was assembled and a parallel GOLDEN program workflow was initiated in NTFGH in November 2019. Geriatric assessment for older adults with hematological malignancies is no less important with many recent papers highlighting the need for a CGA in prognosis and treatment decision. The decision to add the geriatric hematology patients into the GOLDEN program was made after the successful initiation of the program in oncology patients. To successfully initiate the geriatric hematology program, champions in the medical team were identified and feedback from hematologists in the department were sought and incorporated into the workflow to ensure referrals and subsequent consultations. As of December 2021, all patients aged 65 years and older with newly diagnosed hematological malignancy would be referred to the geriatric hematology service. Similar to the GOLDEN framework, a G8 screening would be performed followed by a CGA if required. They would also be discussed at the same GOLDEN multidisciplinary team meeting. With the initiation of the GOLDEN program in our cancer center, we sought to assess the feasibility of our Geriatric Oncology program and evaluate if we have benefited our patients since its introduction. A CGA was performed for all patients enrolled in the GOLDEN program to determine the older adults’ health state, classifying them as fit, pre-frail or frail. It covered domains including functional status, falls, cognition, sensory impairment, social support, nutrition, psycho-emotional status, and assessment of comorbidity and polypharmacy. Patients with one to 4 domains of concern were categorized as pre-frail, while those with 5 or more domains of concern were frail. Functional status was assessed using the Katz’s Activities of Daily Living (ADL) Index, , Lawton instrumental ADL (IADL), and Karnofsky Performance Status (KPS). Comprehensive fall history and Timed Up and Go (TUG) were taken to assess fall risk. Presence of visual and hearing impairment were noted. The Mini-Cog was used to assess for cognitive impairment. Social support and activity was assessed using Medical Outcomes Study—Social Support Survey (MOS-SSS-4) and Medical Outcome Study—Social Activities Survey (MOS-SAS-4) respectively. 
Nutritional status was assessed via changes in weight over a 6-month period, psycho-emotional status was assessed using the Distress Scale and the Geriatric Depression Scale (GDS-4), and polypharmacy was assessed through a thorough medication review by a trained pharmacist. Comorbidities were assessed using a patient-reported version of the Older Americans Resources and Services Questionnaire (OARS) Physical Health comorbidity subscale. We measured the incidence of geriatric syndromes picked up on performing a CGA in these patients, and the number of patients who had issues of concern requiring targeted allied health interventions after a CGA. In order to assess the impact of the GOLDEN program on the referring physicians' practice, we also measured the percentage of patients who had a change in treatment plans after the GA. Lastly, we measured patient-reported outcome measures (PROMs), namely their satisfaction rate, using a patient satisfaction survey, and their quality of life, using the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30). As there was limited time for the older adults planned for prehabilitation prior to surgery, only the patients enrolled in the geriatric medical oncology component of the GOLDEN program were further evaluated for these outcomes. This study was approved by the National University Hospital (NUH) Institutional Ethics Review Board. A total of 1347 older adults with cancer were screened with the G8 in NCIS from August 2019 to August 2021, of whom 1,139 were suitable to be enrolled in the GOLDEN program and 777 were referred by their primary oncologists and enrolled. Five hundred and sixty-nine (73%) patients were enrolled in the surgical prehabilitation program, 308 (40%) were enrolled in the geriatric medical oncology program, and 100 (12.8%) were enrolled in both. The characteristics of all the patients referred to the GOLDEN program were as follows. There were 442 (56%) females, and the median age was 73 years (range 65 to 95). The ethnic make-up of the cohort was 85.8% Chinese, 8.2% Malay, 3.1% Indian, and 2.8% other ethnicities. The most common cancer types in our program were lower gastrointestinal cancers ( n = 398; 51.2%), hepatobiliary cancers ( n = 187; 24.1%), upper gastrointestinal cancers ( n = 48; 6.2%), genitourinary cancers ( n = 43; 5.5%) and thoracic cancers ( n = 40; 5.2%). Early stage cancers were present in 86.6% of the patients, while 13.4% had advanced stage cancers. Based on the CGA, 44.9% of the whole population of 777 patients were fit, 43.4% were pre-frail and 11.7% were frail.
Further Analysis of Patients Enrolled in the Geriatric Medical Oncology Arm
Of the 308 patients enrolled in the geriatric medical oncology arm, 265 (86.0%) were identified as having geriatric syndromes on CGA. One hundred and forty-five (47.1%) patients were at risk of frequent falls, with a Timed Up and Go (TUG) test of more than 12 seconds, and 65 (21.1%) of them had more than one fall in the past 6 months. Ninety-five (30.8%) of the patients scored less than 3 on the Mini-Cog screening test, suggesting a likelihood of cognitive impairment, while 51 (16.6%) patients scored 2 or more on the GDS-4 questionnaire, which was concerning for low mood. Thirty-one (10%) patients had self-reported urinary incontinence issues.
One hundred and ninety-four patients (63.0%) had polypharmacy, defined as the use of at least five chronic medications, of whom 111 (36.0%) required deprescribing or dose adjustments. One hundred and eighty-seven (60.8%) of the patients had a change in their treatment plans after being seen by the geriatric medical oncology team, and 205 (66.6%) were treated with curative intent for their cancer diagnosis. Of the 231 patients who completed the EORTC QLQ-C30 questionnaire, 97 (31.5%) reported an overall improvement in their global health status, while 118 (38.3%) maintained their global health status after being enrolled in the GOLDEN program. Of the 233 patients who completed the patient satisfaction survey, 226 (73%) responded that they had benefited from the program. While most cancer physicians recognize the importance of a geriatric assessment for the provision of holistic care to older adults with cancer, there were barriers to the uptake of the GOLDEN program by oncologists and patients during its initiation. Screening with the G8 (using a cutoff of 14) identified 84% ( n = 1139) of the population as potentially suitable for the GOLDEN program, but only 58% ( n = 777) of the cohort were referred on by their primary physicians. We sought to understand the barriers to referral among oncologists by interviewing some of them. These included a preference among treating physicians to prioritize patients' cancer treatment over "peripheral geriatric issues", concerns about the burden of additional clinical consults for patients, fear of delaying urgent cancer treatment, and the view of some that they had sufficient experience to address issues in older adults themselves. In our patient cohort who received a CGA, the proportion of pre-frail and frail patients was 55.1%, which is lower than the 84% of the population identified by the G8.
This is consistent with the G8 being a more sensitive than specific screening tool for frailty in older adults with cancer. In our center, we are currently evaluating whether the cutoff of 14 is suitable for our population, or whether a lower cutoff should be used for better specificity. The relative urgency of cancer surgery, and the reluctance of both surgeons and patients to adjust the surgical schedule to allow prehabilitation, have also posed a challenge to its uptake and to the execution of the team's recommendations. We intend to evaluate and fine-tune our work processes to improve the uptake of surgical prehabilitation in older adult cancer patients. As such, we had to limit some of the interventions and the assessment of patient-reported outcome measures (PROMs) to the patients accrued into the geriatric medical oncology (GO) arm of the program, as they had more time to undergo the interventions and be interviewed for the PROMs. Close to 70% of the patients enrolled in the geriatric medical oncology arm had geriatric syndromes that would have been missed if a GA had not been performed. These may be overlooked when the focus is solely on treating the cancer, and this represents a lost opportunity for management and treatment, especially as geriatric syndromes are well known to be associated with adverse outcomes including poorer quality of life, hospitalization, functional decline, institutionalization, and increased healthcare costs. While we understand the concerns of the treating oncologists, the importance of an in-depth understanding of an older adult's overall state of health cannot be overstated when treating individuals with competing medical and physiological challenges that will impact their cancer care. With the appropriate interventions, we believe this would optimize care for our older patients with cancer. Furthermore, GA-directed interventions have been shown to reduce treatment-related toxicities and improve quality of life in studies done in tertiary cancer centers with geriatric oncology services, as well as in community oncology practices with tailored geriatric assessment and management recommendations. More than half of the patients (60.7%) had a change in their treatment plans after going through the program, with the majority receiving an attenuated treatment regimen in view of their risk of treatment-related toxicities. As this may potentially result in undertreatment of cancer in these patients, a long-term evaluation of their cancer-related outcomes would be equally crucial. A recent randomized controlled trial by Li et al showed that the integration of multidisciplinary geriatric assessment-driven interventions (GAIN) significantly reduced the incidence of grade 3 or higher chemotherapy-related toxicities with no negative impact on overall survival over a period of 12 months. While some physicians were reluctant to refer their patients to the program, those who did have been accepting of the team's recommendations. With a team specializing in the care of older adults with cancer, the GOLDEN program can provide "geri-confidence" to less experienced doctors in this area, such as surgeons and oncology trainees, and provide them with guidance in their care of older adults. This is especially so when caring for frail older adults, who are at higher risk of developing treatment-related toxicities.
Most of the patients (69.8%) had at least maintained or shown an improvement in their global health status while undergoing treatment with chemotherapy. This is especially important for older adults to maintain their quality of life during their cancer treatment to strike a balance between the challenges of physiological aging and appropriate treatment of their cancer. A majority of the patients (73.4%) who had completed the patient satisfaction survey felt that they had benefited from the program. On further understanding, what most patients and their loved ones appreciated was the time spent to hear about their concerns, assistance in navigating the system and provision of a one-stop clinic for interventions provided after an explanation of the rationale of prehabilitation. In the small developed nation of Singapore, most older adult cancer patients are treated in public tertiary cancer centers where geriatric oncology services are available. However, there are older adults who are managed in community oncology practices who may benefit from GA directed interventions. We hope to be able to extend a referral service to community oncologists who may wish to refer their patients to the GOLDEN program, as it has been shown by Mohile et al to be beneficial for older adult patients to receive GA directed interventions in community practices. We envisioned the GOLDEN program to be positioned as the sherpa or guide in the older patient’s cancer journey by helping to guide appropriate treatment for older adults and to be a dependable companion to patients and their caregivers to provide necessary information and care navigation during this process. This is especially crucial given the increasing complexity of cancer treatment, which can often be overwhelming especially for older adults. The GOLDEN program provides valuable geriatric assessment of older adults to the referring primary oncologist. We hope to be able to intervene as early as possible, as a pretreatment assessment. The further upstream the patient is in their cancer journey, the more useful this information would be to the treating physicians, as it allows them to take into consideration the additional information a GA provides prior to formulating a suitable cancer treatment for their patients. The program also assesses, monitors and subsequently gives recommendations and supports an older adult’s entire cancer journey. This not only provides a continuous care plan, and is also holistic because the GOLDEN program is not just focused on the cancer treatment, but also care of the older adult and environment as a whole. Our study was not designed to assess the statistical feasibility of the program, but rather to gauge how the recipients of our service viewed the program. With future studies, it would be ideal to plan to assess the tangible benefits of the program such as a reduction in unplanned hospitalizations, shortened hospitalization stays and reduction in treatment related toxicities. There is still a significant proportion of oncologists who do not refer their patients to our geriatric oncology program. It would be beneficial to have a better understanding of the factors contributing to this relatively low referral rate and how we could add value to their management of their older adult patients. 
With compelling evidence supporting the importance of GA directed interventions in older adults and improved outcomes seen in our patients, we hope that this will encourage more cancer physicians to have a mindset shift and consider referring their vulnerable older adult patients to our program. In this paper, we shared the evolution of the GOLDEN geriatric oncology program in our tertiary cancer center. With early beginnings consisting of 2 small pilot programs, the GOLDEN program now covers the entire older adult cancer journey and has broadened to include older adults with hematological cancers. With promising benefits felt by patients and increasing appreciation of the program by referring physicians, we hope to further evaluate our program for cancer related outcomes and sustainability of a geriatric hematology-oncology program like this in major tertiary cancer centers. By doing so, we hope to be able to provide some insights to others who are seeking to do the same for their older adults with cancer.
Variable Landscape of PD-L1 Expression in Breast Carcinoma as Detected by the DAKO 22C3 Immunohistochemistry Assay
fd181812-260f-49f9-a120-6defd8680a1b
10078903
Anatomy[mh]
Despite advances in early detection and treatment, breast cancer remains the second leading cause of cancer-related death for females in the USA. , In routine clinical practice, breast cancers are classified based on the expression of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor (HER2). In 15% of breast cancers, there is an absence of expression of these 3 receptors, so-called triple-negative breast cancer (TNBC). , TNBC is characterized by aggressive pathologic features and high rates of distant recurrence. TNBC have the lowest 5-year survival rate among the receptor expression-defined subtypes. , , In addition, TNBC is more prevalent in African American women and contributes to the disparate outcomes for this population. , Chemotherapy remains the standard of care first-line treatment for metastatic TNBC. Various anthracyclines, taxanes, and alkylating agents are commonly used in this setting. One of the challenges of treating BRCA wild-type TNBC is that it lacks targeted therapy options, so there is a strong dependence on cytotoxic agents. A new therapeutic option became available for patients in 2019 with the approval of the immune checkpoint inhibitor (ICPI) atezolizumab in combination with nab -paclitaxel for locally advanced or metastatic TNBC that is PD-L1 positive based on improved progression-free survival (PFS) observed in the IMpassion130 trial using the VENTANA SP142 PD-L1 assay. However, the indication was subsequently withdrawn in the USA in 2021 after additional studies did not show improvement in survival. , During this time, another immunotherapy option became available for PD-L1 positive TNBC patients; pembrolizumab in combination with chemotherapy was approved in November, 2020, which was supported by the results of the KEYNOTE-355 study. Pembrolizumab and atezolizumab are ICPI targeting the programmed death receptor-1 (PD-1) and programmed death ligand-1 (PD-L1), respectively. PD-L1 is expressed by tumor-infiltrating leukocytes (TILs) in the tumor microenvironment and by tumor cells themselves as an adaptation to evade anti-tumor immune responses. , Multiple studies showed that approximately half of TNBC express PD-L1, predominantly in TILs rather than tumor cells. , In the KEYNOTE-355 trial (NCT02819518), 847 patients with untreated locally recurrent, inoperable, or metastatic TNBC received either pembrolizumab plus chemotherapy, or placebo plus chemotherapy with the physician’s choice of nab -paclitaxel, paclitaxel, or gemcitabine plus carboplatin for the chemotherapy regimen. Patients were assessed for PD-L1 expression by combined positive score (CPS) using the DAKO PD-L1 IHC 22C3 pharmDx assay. Among patients with a CPS of ≥10, median PFS was 9.7 months for patients receiving pembrolizumab versus 5.6 months for patients receiving placebo (hazard ratio [HR] 0.65, 95% CI, 0.49-0.86, P = .0012). In addition, PFS was evaluated at a CPS cutoff of ≥1 (7.6 vs. 5.6 months, HR 0.82, 95% CI, 0.74-0.90, P = .0014) and in the intent-to-treat population (7.5 vs. 5.6 months, HR 0.82, 95% CI, 0.69-0.97), but the threshold for significance based on the pre-specified statistical criteria was not met in either of these populations. These study results led to the FDA accelerated approval in November, 2020 of pembrolizumab plus chemotherapy for TNBC patients with a PD-L1 CPS ≥10. 
The benefit was confirmed in the survival analysis, which reported overall survival (OS) of 23 months for pembrolizumab plus chemotherapy versus 16.3 months for placebo plus chemotherapy (HR 0.73, 95% CI, 0.55-0.95, P = .0093). Biomarkers other than PD-L1 expression that predict benefit from ICPI have also emerged in recent years. Both microsatellite instability-high (MSI-H) and tumor mutational burden ≥10 mutations per megabase (TMB-High) have been approved as pan-solid tumor companion diagnostics for pembrolizumab. In addition, alterations of the CD274 gene (which encodes PD-L1), including copy number alterations, mutations, and rearrangements, are emerging as candidate biomarkers for benefit from ICPI. This study aimed to determine the landscape of PD-L1 protein expression in breast cancer using the DAKO 22C3 assay and to compare PD-L1 positive and PD-L1 negative cancers by comprehensive genomic profiling (CGP) to determine if these populations differ in clinical or genomic characteristics.
Patient Selection
All patients meeting the following criteria were included in this study: (1) CGP performed using the FoundationOne® CDx assay between November, 2018 and October, 2021; (2) a diagnosis of breast carcinoma as reported by the submitting physician and accompanying pathology report; and (3) PD-L1 IHC evaluated by the DAKO 22C3 assay and scored with CPS. Thus, this US national cohort consisted of patients from multiple institutions who submitted breast carcinoma specimens for CGP to Foundation Medicine during the specified time frame. ER, PR, and HER2 expression status were extracted from the documentation submitted as a part of the routine course of clinical testing. In cases where HER2 expression was not reported, detection of ERBB2 amplification by CGP was used to classify the patient's receptor status. Patients that were positive for ER and/or PR expression and negative for HER2 expression were classified as HR+/HER2−, patients that were positive for HER2 expression regardless of ER/PR status were classified as HER2+, and patients that were negative for all three biomarkers were classified as triple-negative breast carcinoma (TNBC).
PD-L1 IHC
PD-L1 IHC was performed in a Clinical Laboratory Improvement Amendments (CLIA)-certified, College of American Pathologists (CAP)-accredited laboratory (Foundation Medicine, Morrisville, NC, USA). PD-L1 IHC was tested using the DAKO PD-L1 IHC 22C3 pharmDx assay, which uses the mouse monoclonal 22C3 anti-PD-L1 clone. The assay was performed according to the package insert with appropriate controls.
Scoring was performed by American Board of Pathology board-certified pathologists specifically trained in PD-L1 22C3 CDx scoring for TNBC. Borderline cases underwent review by a second pathologist to reach a consensus score. Scores are reported as a CPS, which is the number of PD-L1 staining cells (including tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells and then multiplied by 100. A sample was considered PD-L1 positive when the CPS was ≥10. CGP was performed in a CLIA-certified, CAP-accredited laboratory (Foundation Medicine, Cambridge, MA, USA and Morrisville, NC, USA). Sequencing was performed using adaptor-ligation and hybrid capture from ≥50 ng DNA extracted from formalin-fixed paraffin-embedded samples as previously described. Exons from 324 genes and select introns from 36 genes were interrogated by the assay for all classes of genomic alterations (GA): short variants, copy number changes, and rearrangements. CD274 amplification was defined as a copy number (CN) of ploidy +4 for the purposes of this study. Tumor mutational burden was assessed on 0.80 megabases (Mb) of sequenced DNA and calculated based on the number of somatic base substitution or insertion/deletion alterations per Mb after excluding known somatic and deleterious mutations, as previously described. MSI was determined on 95 loci as previously described. For research use only (RUO) purposes, genomic loss of heterozygosity (gLOH) and genetic ancestry were also calculated. gLOH was calculated by quantifying the loss of heterozygosity at over 3,500 SNPs while excluding whole chromosome arm losses (>90% arm loss), a method that was described and validated in the ARIEL2 trial for ovarian cancer. Scores are reported as a percentage, and specimens were required to have ≥30% tumor nuclei to meet quality control for inclusion in the analysis. Predominant genetic ancestry was assessed using a SNP-based approach. Using data from the 1000 Genomes set, a trained and validated classifier was developed based on over 40,000 germline SNPs that could be identified in both the 1000 Genomes data and the profiling assay used here. Individuals could be classified into one of five possible predominant ancestry groups: African (AFR), European (EUR), Central and South American (AMR), South Asian (SAS), and East Asian (EAS).
Data Analysis
Statistics were evaluated using R version 3.6.1. Fisher's exact test was used for categorical variables, and P-values were corrected using Benjamini–Hochberg corrections for multiple comparisons when appropriate. The Wilcoxon and Kruskal–Wallis tests were used to assess continuous, non-parametric variables including TMB and age.
Patient Characteristics
A total of 396 patients were identified for inclusion in this study. The HR+/HER2− subtype was the most prevalent ( n = 168), followed by TNBC ( n = 142) and HER2+ ( n = 18); in addition, 68 cases had unknown receptor status. Nearly all patients were female (99.5%, 394/396), and they had a median age of 62 years (range 29-89), which was similar across all subtypes. The majority of patients were of predominantly European ancestry (66.4%, 263/396), while the remainder were mostly of African and American ancestry (17.7% and 11.6%, respectively). Notably, there was a numerical enrichment of patients with African ancestry in the TNBC subtype compared to the HR+/HER2− subtype (23.2% vs. 14.3%, P = .055).
DAKO 22C3 PD-L1 Expression Patterns
The overall PD-L1 positivity (CPS ≥10) rate was 32%. Median CPS for PD-L1 expression varied significantly by subtype.
CGP was performed in a CLIA-certified, CAP-accredited laboratory (Foundation Medicine, Cambridge, MA, USA and Morrisville, NC, USA). Sequencing was performed using adaptor ligation and hybrid capture from ≥50 ng DNA extracted from formalin-fixed paraffin-embedded samples, as previously described.
Patient Characteristics
A total of 396 patients were identified for inclusion in this study. The HR+/HER2− subtype was the most prevalent (n = 168), followed by TNBC (n = 142) and HER2+ (n = 18). In addition, 68 cases had unknown receptor status. Nearly all patients were female (99.5%, 394/396), and they had a median age of 62 years (range 29-89), which was similar across all subtypes. The majority of patients were of predominantly European ancestry (66.4%, 263/396), while the remainder were mostly of African and American ancestry (17.7% and 11.6%, respectively). Notably, there was a numerical enrichment of patients with African ancestry in the TNBC subtype compared to the HR+/HER2− subtype (23.2% vs. 14.3%, P = .055).
DAKO 22C3 PD-L1 Expression Patterns
The overall PD-L1 positivity (CPS ≥10) rate was 32%. Median CPS for PD-L1 expression varied significantly by subtype. TNBC had the highest median CPS at 7.5 (IQR: 1.0-20), while HR+/HER2− had the lowest at 1.0 (IQR: 0-2.0) (P < .0001). TNBC also had greater median PD-L1 expression than the group of unknown subtype (CPS 1.0, IQR: 0-10.0) (P = .015), and the unknown subtype had greater median PD-L1 expression than the HR+/HER2− subtype (CPS 1.0, IQR: 0-2.0) (P = .0009). A similar trend was also observed when PD-L1 expression was classified as positive, defined as a CPS ≥10. TNBC had the highest rate of positivity at 50.0% (71/142), followed by the HER2+ group at 44.4% (8/18), and the HR+/HER2− subset had the lowest rate of positivity at 15.5% (26/168).
PD-L1 Positive versus PD-L1 Negative TNBC Cohort Based on DAKO 22C3 CDx Assay
We performed a comparison of PD-L1(+) (n = 71) and PD-L1(−) (n = 71) TNBC cases to determine if the difference in PD-L1 positivity correlated with clinicopathological or genomic characteristics. There was no difference in age, sex, or genetic ancestry between the 2 groups. TNBC tissue samples from the breast did have an observed enrichment for PD-L1 positivity compared to TNBC tissue samples from a metastatic site (57% vs. 44%), but this was not statistically significant (P = .1766). The most common specimen sites were breast, lymph node, liver, skin, and lung, representing 44%, 20%, 8%, 6%, and 5% of the entire TNBC cohort, respectively. PD-L1 positivity rates varied by metastatic site, highest in the lung and lymph nodes and lowest in the liver. Median TMB and the rate of TMB ≥10 mut/Mb were similar, and no patients had MSI-H status. The landscape of GA was also similar between the PD-L1(+) and the PD-L1(−) cohorts. There were no significant differences in the frequencies of the top 30 most frequently altered genes in either group. GA in TP53, MYC, RAD21, PIK3CA, and PTEN were the most prevalent in the overall cohort. The frequencies of the targetable genes BRCA1 and BRCA2 were also similar (7% vs. 10% and 8% vs. 3%, respectively). Amplification of CD274 was seen in 6 PD-L1(+) cases (8%) and only 1 PD-L1(−) case (1%) (P = .12).
PD-L1 Positive versus PD-L1 Negative HR+/HER2− Cohort Based on DAKO 22C3 CDx Assay
HR+/HER2− cases were also compared based on their PD-L1 expression. Patients were of similar median age and genetic ancestry. Most tested samples were from a metastatic site in both the PD-L1(−) and PD-L1(+) groups (69% and 70%, respectively). The most prevalently tested metastatic sites were the liver (n = 46), lymph node (n = 21), spine (n = 8), bone (n = 7), and lung (n = 6), with varying PD-L1 expression by site. Both the rate of TMB ≥10 mut/Mb and the median TMB were similar in the 2 cohorts. Median gLOH was significantly higher in the PD-L1(+) patients (17.3% vs. 9.3%, P < .001). In addition, differences in the frequency of several gene alterations among the top 30 altered genes most frequently found in the whole cohort were identified. TP53, CREBBP, and CCNE1 alterations were more prevalent in the PD-L1(+) group compared to the PD-L1(−) group (P = .0027, P = .010, and P = .005, respectively).
In this study, we evaluated the landscape of PD-L1 IHC expression using the DAKO 22C3 assay in a large cohort of breast carcinoma patients.
This landscape demonstrated variable expression based on receptor subtype. TNBC had the highest median expression of PD-L1 and the greatest (approximately 50%) PD-L1 positivity rate based on the CPS ≥10 threshold per the companion diagnostic label. We also observed an approximately 15% PD-L1 positivity rate in HR+/HER2− cases. Prospective clinical trials will be needed to determine if these HR+/HER2− patients will benefit from ICPI therapy and what the optimal cut-off is to predict benefit. We also saw that the PD-L1(+) and PD-L1(−) TNBC groups were similar in both clinicopathologic and genomic characteristics. Patients were of similar age distributions, and there was a similar distribution of genetic ancestry. We also examined the prevalence of other ICPI biomarkers. Median TMB was similar in both groups, and there was no significant difference between the PD-L1(+) and PD-L1(−) groups in the percentage of patients with TMB ≥10 mut/Mb. The landscape of concurrent alterations also showed no differences. These results demonstrate that these other characteristics cannot be used as surrogates to predict whether PD-L1 expression is likely for TNBC patients and that evaluation of PD-L1 by immunohistochemistry is necessary to properly evaluate this biomarker for patients in the clinic and clinical trials. Of note, these results differed from studies in non-small cell lung cancer, urothelial carcinoma, and cervical carcinoma, where the authors found significant clinical and molecular differences between the PD-L1(+) and PD-L1(−) groups. We noticed that TNBC tissue samples from the breast had a higher rate of PD-L1 positivity compared to TNBC tissue samples from a metastatic site (57% vs. 44%), though it did not reach statistical significance. This is consistent with a prior study that also showed lower PD-L1 expression in metastatic lesions and substantial variability in PD-L1 positivity rates by metastatic site. Taken together, these observations suggest that the immune microenvironment of metastatic lesions may differ from that of primary tissues in TNBC, as hypothesized by other studies of the immune landscape specific to metastases. These data have important implications for the specimen sites to submit for testing and for clinical trial design. In the HR+/HER2− cohort, PD-L1 positivity did correlate with other genomic differences. Median gLOH was higher in the PD-L1(+) subgroup, and alterations in TP53, CREBBP, and CCNE1 were also enriched in this population. The elevated gLOH in the PD-L1(+) group could provide a rationale for possible therapy combinations of poly (ADP-ribose) polymerase (PARP) inhibitors and immunotherapy for HR+ disease. TP53 has been shown to be enriched in metastatic HR+ breast cancer, but the interaction between TP53 alterations, PD-L1 expression, and prognosis merits further study based on these results. Notably, nearly a quarter of TNBC patients in this study were of predominantly African ancestry regardless of PD-L1 expression. This finding is consistent with reports that the TNBC phenotype is enriched in African American patients. Similar PD-L1 expression and a highly comparable immune microenvironment between TNBC in African American and Caucasian patients have also been shown earlier. It is regrettable that patients identifying as being of Black race made up only 7% of enrollees in the IMpassion130 trial and only 4% of those enrolled in Keynote-355.
Given this particular unmet need of African American patients and the evidence that PD-L1 expression is prevalent in this population, future clinical trials assessing PD-L1 as a biomarker of response to ICPI should aim to increase enrollment of these patients. One limitation of this study is the lack of treatment history for the included patients. It is unknown whether patients received prior therapy, including hormone therapy and chemotherapy, which could possibly have altered the clonal evolution and genomics of the tumor. The study is retrospective in nature, and prospective clinical trials will be needed to test further hypotheses in the active treatment of patients. The subtypes of breast cancer have distinct patterns of PD-L1 expression, and thus investigations into the efficacy of ICPI in non-TNBC patients may consider including analysis of optimum cut-offs for these patients. In TNBC, PD-L1 positivity is not associated with other clinicopathologic or genomic features and thus should continue to be integrated into future studies of the immune microenvironment and immunotherapy efficacy. TNBC tissue samples from the breast did have an observed but not statistically significant enrichment for PD-L1 positivity compared to TNBC tissue samples from a metastatic site, a finding that merits future research in a larger cohort.
Agroecosystem edge effects on vegetation, soil properties, and the soil microbial community in the Canadian prairie
24e9d593-8ee2-4ae3-b096-5bd78a814bf8
10079068
Microbiology[mh]
Habitat fragmentation is a leading cause of biodiversity loss, and agriculture has caused extensive habitat fragmentation. Highly fragmented landscapes have a high proportion of edges, which affect various ecological aspects. Edges can be high contrast, such as a forest abutting a pasture, or more gradual and low contrast, like a shrub patch adjacent to a meadow. Edges produce edge effects, which are abiotic and biotic changes occurring at the bounds of an ecosystem or habitat patch that influence properties including microclimate, moisture, soils, and plant or animal community composition and distribution. Some factors that influence edges are orientation, time, patch size, edge contrast, and matrix composition. Ecological dynamics and patterns around edges can be understood through four essential mechanisms: ecological flows across edges, resource distribution, resource mapping, and unique species interactions. Expansion and intensification of agriculture has induced changes in nearby habitats, and these changes have been observed in both plant communities and soil properties. Agricultural intensification is thought to magnify edge effects, further altering vegetation and soil biodiversity in these systems. Commonly, edges in the agroecosystem are inhabited by non-native undesirable plants, here called weeds, or other invasive species. Plant communities at the edge may be of concern to farmers, where weeds can compete with crops. While aboveground vegetation changes at the edge are evident, belowground changes are also occurring. Underlying gradients of soil properties have been found at edges, including soil pH, nitrogen (N), and carbon (C), though these studies are limited to forest edges. Aboveground and belowground interactions are important to consider because those interactions determine ecosystem function, particularly in agroecosystems, where land management has effects beyond the field boundary. However, the extent and characteristics of edges and their effects in agroecosystems remain poorly understood belowground. Two major land uses in the agroecosystem are cultivated croplands and grasslands; each has characteristics that affect the soil microbial community. Nutrient dynamics differ markedly between the two; for instance, croplands often have lower soil C than grasslands, and the higher soil C of grasslands is frequently correlated with higher microbial biomass. Various environmental factors affect soil microbial community composition and function, but agricultural practices directly alter the environmental conditions affecting soil microbes. These agricultural practices include, but are not limited to, soil amendments, tillage, herbicides, and crop type. However, the magnitude to which these factors influence the soil microbial community is complex; considering edge effects and their interactions with agricultural practices is essential to understand soil microbial community dynamics in these landscapes. Aboveground edge effects provide insight into belowground conditions and ultimately the soil microbial community. Plant species can have specific microbial associations affecting microbial community composition, such as mycorrhizal associations with plant roots. Additionally, invasive plant species can alter the soil microbial community by changing the quality and quantity of litter inputs.
Knowing how and what alters the soil microbial community is important, as soil microorganisms are critical in maintaining ecosystem function, especially through nutrient cycling, disease suppression, and plant growth promotion. Understanding how the soil microbial community responds to edge effects is therefore crucial, particularly as agricultural lands intensify. To investigate edge effects in agroecosystems above and belowground, we measured vegetation composition and biomass, and soil physicochemical and microbial properties, across perennial grassland and annual cropland edges in central Saskatchewan, Canada. Our goal was to determine if changes in land use altered the plant community and soil properties at agricultural edges, and if so, how these changes influenced the microbial community across the edge. Considering the interrelated effects of management on soil properties and plant communities, and their impacts on soil microbial communities, will better our understanding of agroecosystem edges and their ecosystem function.
2.1. Study sites
We examined perennial grassland-annual cropland edges at two locations, St. Denis National Wildlife Area (SDNWA) and the Conservation Learning Centre (CLC), in southern-central Saskatchewan, Canada. SDNWA is located in the Moist Mixed Grassland ecoregion and CLC is in the Boreal Transition ecoregion. Soils at SDNWA are mostly Dark Brown Chernozems and soils at CLC are predominantly Black Chernozems, as confirmed by another study that sampled cores at these sites. Authorization to sample at these sites was granted by the St. Denis National Wildlife Area and the Conservation Learning Centre. Both locations are composed of cropland interspersed with perennial grasslands. Both croplands are no-till, while the perennial grasslands are not intensively managed: they are only cut for hay, and no grazing, fertilizing, or spraying occurs. At SDNWA, in 1977, 97 hectares of cropland were converted to a perennial forage predominantly composed of smooth brome (Bromus inermis L.), alfalfa (Medicago sativa L.), and yellow sweet clover (Melilotus officinale L.). Perennial grasslands at both sites were cut once for hay in 2017, and the cropland at SDNWA was planted with flax (Linum usitatissimum var. CDC Sorrel) in May 2017. Glyphosate was applied prior to seeding, granular fertilizer (90 N-36 P-17 S kg/hectare) was used during seeding, and herbicides (Buctril M and Centurion mix) were also applied in July 2017 at SDNWA. Canola (Brassica napus L., Nexera RR112) was planted in May 2017 at CLC. At the time of seeding, anhydrous fertilizer was applied (112 N-28 P-28 S kg/hectare) as well as glyphosate. Fungicides were applied in June (Topnotch/Eclipse) and July (Lance) 2017.
2.2. Field sampling
Two edge sites at each location were sampled; we sampled at SDNWA from June 25-28, 2017, and sampling at CLC took place June 29-July 6, 2017. At each edge site (n = 2 per location), three transects were laid perpendicular to the grassland-cropland edge and spaced 3 meters apart. Along each transect, samples were taken at the edge (0 m) and at 25 cm, 50 cm, 1 m, 2 m, 6 m, 8 m, 16 m, and 33 m into each of the two land use types (n = 15 per transect, 90 per location). Each sampling point was randomly assigned a position directly on the transect or 1 m to either side of the transect. The edge point was visually determined, aided by inspecting the seeding row extent.
At each sampling point, percent cover was assessed for all plant species within a 1 m² quadrat, and quadrats were not allowed to overlap between sampling points. We also recorded plant species present within a 1 m radius of the center point; a 1 m radius was chosen to capture plants whose roots may be in the locale of the soil sample. Aboveground biomass was collected in a 20 cm x 50 cm quadrat and separated into three categories: grass, forbs, and plant litter. Biomass samples were dried at 40°C for four days and weighed to determine dry biomass. During analyses, we combined forbs and grass to encompass all living biomass. To characterize soil properties, we collected a soil core (5 cm diameter x 10 cm depth) from the A horizon at the center of the cover quadrat using a sledge corer (AMS Soil Core Sampler, American Falls, ID). A composite sample of three smaller cores (2 cm diameter x 15 cm depth each) was collected for molecular analysis of the soil microbial community. All soil samples were stored at -20°C and were freshly thawed prior to analysis.
2.3. Soil property analyses
Soil was air-dried and passed through a 5 mm sieve to remove large debris and rocks. Soil nitrate (NO3) and ammonium (NH4) extractions were performed using 2.0 M KCl and analyzed on an AutoAnalyzer 3 (SEAL, UK). Soil pH was measured with a pH probe (Mettler Toledo, USA) using a 1:2 soil to 0.1 M CaCl2 solution. Air-dried, sieved soil was ball-ground (Retsch MM-400, Germany) and 0.25 g of soil was used to determine total N and C. Total C was combusted at 1100°C with a LECO C632 analyzer (LECO, USA) and total N was combusted at 1250°C with the TruMac CNS analyzer (LECO, USA).
2.4. Soil microbial sequencing and bioinformatics
Composite samples were sub-sampled (5 g) and ball-ground (Retsch MM-400, Germany). DNA was extracted from 1 g of soil using the PowerPlant Pro Kit (Qiagen, Germany) and eluted in 100 μL of EB solution. DNA was quantified using the Qubit 2.0 Fluorometer (Invitrogen, Massachusetts, USA) and all samples were standardized to 1 ng/μL of DNA for downstream amplification. To target the bacterial community, the 16S rRNA V4 region was amplified using the primers 515F/806R. Reactions were performed at a final volume of 25 μL: 2 μL of template DNA, 12.5 μL of Platinum Green (2X) Master Mix (Thermo Fisher, Massachusetts, USA), and 1.5 μL of each primer (10 μM). PCR conditions followed Caporaso et al. (2011), using 30 cycles. To target the fungal community, the Internal Transcribed Spacer (ITS) region was amplified using the primer pair ITS1-F and ITS2-R. Reactions were performed at a final volume of 25 μL: 2 μL of template DNA, 12.5 μL of Platinum Green (2X) Master Mix (Thermo Fisher, Massachusetts, USA), and 1 μL of each primer (10 μM). PCR conditions were 3 minutes at 94°C; 35 cycles of 94°C for 30 s, 52°C for 30 s, and 72°C for 45 s; and a final extension at 72°C for 7 minutes. All PCR products were purified using the NucleoMag NGS Clean-up and Size Select magnetic beads (Macherey-Nagel, Germany) following the protocol for single size selection, with the exception of reduced drying time after the second ethanol wash (2 minutes). Double size selection purification was performed for the ITS amplicon to ensure that fragments larger than the target region were removed. Library preparation for Illumina MiSeq followed the Illumina Library Preparation Guide (#15044223 Rev. A) and sequencing was performed at the Toxicology Centre at the University of Saskatchewan (300 cycle v2 kit for 16S, 500 cycle v2 kit for ITS).
Soil microbial sequences were processed through QIIME2 2018.11 using the DADA2 pipeline. DADA2 was used for quality filtering, removal of chimeric variants, and merging forward and reverse ITS reads (only forward reads were used for 16S sequences due to poor overlap). Taxonomy was assigned to Amplicon Sequence Variants (ASVs) using the Greengenes and UNITE databases for 16S and ITS, respectively.
2.5. Statistical analyses
All statistical analyses were conducted in R 3.5.2. We performed a non-metric multidimensional scaling (NMDS) analysis on plant species cover at each site using the vegan package v 2.5-2. Plant cover data were Hellinger transformed prior to the NMDS. Soil property vectors overlaid on the NMDS were created using the 'envfit' function in vegan. From the NMDS, three groups based on sampling point location were apparent. Thus, we split sampling points into three edge locations: perennial grassland, edge, and cropland (n = 5 per transect, n = 30 for each edge location per site). Perennial grassland and cropland included sampling points from 1 m to 33 m on either side of the edge. Edge included sampling points at 0 m, 0.25 m, and 0.5 m into both perennial grassland and cropland. The groupings were examined by permutational multivariate analysis of variance (PERMANOVA) using the adonis function in the vegan package. Indicator plant species for each edge location were determined with the indicspecies package v 1.7.6.
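As an illustration of this ordination and group-testing workflow (Hellinger transformation, NMDS with soil vectors, PERMANOVA on the three edge locations, and indicator species analysis), the sketch below uses simulated data; it is not the study's script, the object names are hypothetical, and adonis2 is shown in place of the adonis function named above because adonis has since been deprecated in vegan.

```r
library(vegan)         # decostand(), metaMDS(), envfit(), adonis2()
library(indicspecies)  # multipatt()

# Simulated stand-in for the field data (samples in rows, species in columns).
set.seed(7)
n <- 30
plant_cover <- matrix(rpois(n * 12, lambda = 3), nrow = n,
                      dimnames = list(NULL, paste0("sp", 1:12)))
soil_props  <- data.frame(totalC = rnorm(n, 30, 5), NO3 = rexp(n, 0.2), pH = rnorm(n, 6.5, 0.5))
meta        <- data.frame(distance = rep(c(0, 0.25, 0.5, 1, 2, 6, 8, 16, 33), length.out = n),
                          side     = rep(c("grassland", "cropland"), length.out = n))

# Assign the three edge locations from the sampling positions, as defined above.
meta$edge_location <- ifelse(meta$distance <= 0.5, "edge",
                      ifelse(meta$side == "grassland", "perennial grassland", "cropland"))

cover_hel <- decostand(plant_cover, method = "hellinger")  # Hellinger transformation
nmds      <- metaMDS(cover_hel, trace = FALSE)             # non-metric multidimensional scaling
soil_fit  <- envfit(nmds, soil_props)                      # overlay soil property vectors

# PERMANOVA on the three edge-location groups.
perm <- adonis2(cover_hel ~ edge_location, data = meta)

# Indicator species for each edge location.
ind <- multipatt(as.data.frame(plant_cover), meta$edge_location)
summary(ind)
```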
To examine vegetation biomass and soil properties across the edge, we used linear mixed models (LMM). Fixed effects for all models included edge location, site, and their interaction. Random effects included transect (n = 3) nested within site (n = 2). Total living, grass, forb, and litter biomass, as well as NO3 and NH4, were log transformed to meet assumptions of normality. Models were fit with the lme4 package v 1.1-19 using restricted maximum likelihood (REML) estimation. Model fit was assessed by inspecting residuals to ensure homoscedasticity. We used the lmerTest package v 3.0-1 to obtain degrees of freedom and p-values. Tukey's HSD post-hoc testing was used to determine significant differences among edge locations using the emmeans package v 1.3.1.
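A minimal, hedged sketch of this mixed-model setup on simulated data follows; the response and grouping variables are illustrative stand-ins, not the study's dataset.

```r
library(lme4)      # lmer()
library(lmerTest)  # Satterthwaite degrees of freedom and p-values for lmer models
library(emmeans)   # estimated marginal means and Tukey-adjusted pairwise comparisons

# Simulated stand-in for the biomass data.
set.seed(11)
dat <- expand.grid(site          = c("CLC", "SDNWA"),
                   transect      = factor(1:3),
                   edge_location = c("perennial grassland", "edge", "cropland"),
                   rep           = 1:10)
dat$transect_id  <- interaction(dat$site, dat$transect)           # transect nested within site
dat$live_biomass <- rlnorm(nrow(dat), meanlog = 4, sdlog = 0.6)   # log-normal-ish biomass

# Log-transformed response; edge location, site, and their interaction as fixed effects;
# a random intercept for transect nested within site.
m <- lmer(log(live_biomass) ~ edge_location * site + (1 | transect_id), data = dat, REML = TRUE)
anova(m)  # F-tests with Satterthwaite degrees of freedom (via lmerTest)

# Tukey-style post-hoc comparisons among edge locations.
emmeans(m, pairwise ~ edge_location, adjust = "tukey")
```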
We conducted an NMDS to examine the bacterial and fungal community of each site. Again, we used a Hellinger transformation on the ASVs, as it places less weight on rare species. The previously established three groups were also examined for the bacterial and fungal communities by PERMANOVA using the adonis function in the vegan package. We used Structural Equation Models (SEMs) to investigate relationships between land management, plants, soil properties, and the soil microbial communities. An advantage of using SEMs is the ability to include multiple complex relationships in an a priori theoretical model. Our a priori SEM hypothesized that land management had a direct relationship with plants (live plant biomass). Land management affects plant biomass through direct manipulation of the plant community via seeding, harvesting, and mowing. Plant biomass was log-transformed to improve linearity. As land management was included as a categorical variable with three factors (cropland, edge, and perennial grassland), we ran the SEM twice, changing the reference land management category to display all possible comparisons (cropland vs edge, perennial vs edge, cropland vs perennial). We also hypothesized a direct relationship from plant biomass to total C and total N, as studies show biomass is an important factor. Lastly, we included an effect of soil properties on the fungal and bacterial communities, as soil nutrients may influence soil microbial communities. Goodness of fit for SEMs was assessed by the chi-square (p-value > 0.05), Root Mean Square Error of Approximation (RMSEA < 0.08), and Comparative Fit Index (CFI > 0.90). As our initial a priori model was not a good fit (χ² p-value < 0.001, RMSEA = 0.299, CFI = 0.692), we evaluated alternative models. As such, our modelling approach became exploratory, and based on modification indices we added pathways with ecological relevance. All models were fit and calculated using the lavaan package v 0.6-3 with maximum likelihood estimation.
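The lavaan sketch below illustrates the kind of path model described above, on simulated data, assuming the fungal and bacterial communities are each summarized by a single ordination score and land management is coded as two dummy variables against a chosen reference level (re-running with a different reference gives the remaining contrast); none of this is the authors' code.

```r
library(lavaan)

# Simulated stand-in: one row per sampling point; 'cropland' and 'perennial' contrast
# those land uses against the edge (the reference level in this run).
set.seed(3)
n <- 120
d <- data.frame(cropland  = rep(c(1, 0, 0), length.out = n),
                perennial = rep(c(0, 1, 0), length.out = n))
d$log_biomass     <- 4 + 0.8 * d$perennial - 0.5 * d$cropland + rnorm(n, 0, 0.4)
d$total_C         <- 25 + 6 * d$perennial - 3 * d$cropland + rnorm(n, 0, 2)
d$total_N         <- 2 + 0.5 * d$perennial + rnorm(n, 0, 0.3)
d$fungal_nmds1    <- 0.5 * d$cropland - 0.4 * d$perennial + rnorm(n, 0, 0.3)
d$bacterial_nmds1 <- -0.3 * d$cropland - 0.2 * d$perennial + rnorm(n, 0, 0.3)

# Path model following the structure described above: land management -> plant biomass,
# soil properties, and microbial communities; biomass -> total C and N;
# soil properties -> fungal and bacterial communities.
model <- '
  log_biomass     ~ cropland + perennial
  total_C         ~ log_biomass + cropland + perennial
  total_N         ~ log_biomass + cropland + perennial
  fungal_nmds1    ~ total_C + total_N + cropland + perennial
  bacterial_nmds1 ~ total_C + total_N + cropland + perennial
'
fit <- sem(model, data = d, estimator = "ML")
fitMeasures(fit, c("chisq", "pvalue", "rmsea", "cfi"))  # the fit indices named in the text
summary(fit, standardized = TRUE)
```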
To further investigate the fungal community, we identified significant fungal genera across the edge at both sites. First, the ASV table was filtered at 20% prevalence across samples to remove rare species and, to prepare the data for transformation, zero and NA values in the ASV tables were replaced with an estimate (Count Zero Multiplicative) using the zCompositions package. The centered log-ratio transformation was calculated with the CoDaSeq package v 0.99.4, and these ratios were used for abundance. Genera were aggregated using the phyloseq package v 1.24.1, and Welch's t-tests were used to determine significant differences in genus abundance between each pair of edge locations (cropland vs edge, edge vs perennial, perennial vs cropland). P-values were adjusted using the p.adjust function in R with the Bonferroni correction method.
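For illustration, the following sketch reproduces the logic of this genus-level comparison on simulated counts; a simple pseudocount and a manual centred log-ratio stand in for the zCompositions and CoDaSeq steps named above, and all object names are hypothetical.

```r
# Genus-level counts (rows = samples, columns = genera); simulated stand-in data.
set.seed(5)
genera        <- matrix(rnbinom(60 * 8, mu = 50, size = 1), nrow = 60,
                        dimnames = list(NULL, paste0("genus", 1:8)))
edge_location <- rep(c("cropland", "edge", "perennial grassland"), each = 20)

# Prevalence filter: keep genera present in at least 20% of samples.
keep   <- colMeans(genera > 0) >= 0.20
genera <- genera[, keep, drop = FALSE]

# Zero handling and centred log-ratio (CLR) transform. A small pseudocount and a manual CLR
# are used here in place of zCompositions::cmultRepl() and the CoDaSeq CLR step.
g_pos <- genera + 0.5
clr   <- log(g_pos) - rowMeans(log(g_pos))

# Welch's t-test per genus for one pair of edge locations, with Bonferroni correction.
pair  <- edge_location %in% c("cropland", "perennial grassland")
p_raw <- apply(clr[pair, , drop = FALSE], 2, function(x)
  t.test(x ~ edge_location[pair])$p.value)   # t.test defaults to Welch's unequal-variance test
p_adj <- p.adjust(p_raw, method = "bonferroni")
```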
3.1. Vegetation community and biomass
Differences in plant community composition were strongly related to edge location. Three distinct clusters were identified: the edge (0.5 m-0.5 m), the cropland (33 m-1 m), and the grassland (1 m-33 m), at both CLC and SDNWA. These plant communities across the edge appear to correlate with soil properties. The distinct vegetation groupings for edge, perennial grassland, and cropland were driven by abundant non-native annual plant species at the edge, seeded species in the perennial grassland, and the crop in the croplands. Indicator species at the edge included hemp nettle (Galeopsis tetrahit L.) and cleavers (Galium aparine L.) at both sites. Non-native annual and some perennial plant species, here called weedy species, were dominant at the edge and comprised 77% ± 8.9% (mean ± SD) of edge plants recorded at CLC and 85% ± 7.4% at SDNWA. In perennial grasslands, B. inermis had the highest indicator value of any species at SDNWA, while at CLC both B. inermis and B. biebersteinii were strong indicator species. Other indicator species for perennial grassland common to both sites included M. sativa and dandelion (Taraxacum officinale L.). Indicator species for cropland were the crops planted in 2017: B. napus and L. usitatissimum for CLC and SDNWA, respectively.
Patterns of aboveground vegetation biomass across the edge differed at each site; as determined by linear mixed modelling, the interaction between site and edge location was significant for each biomass category. At SDNWA, living biomass was greatest in the grassland and significantly decreased across the edge and cropland; however, at CLC, living biomass was only significantly higher in the perennial grassland compared with the edge. The greatest forb biomass at CLC was in cropland, due to the planted canola, while at SDNWA the greatest forb biomass was at the edge and in the cropland. At the edge, forbs constituted 74% ± 31% and 88% ± 23% (mean ± standard deviation) of living biomass at CLC and SDNWA, respectively. Not surprisingly, the majority of grass biomass was in the perennial grasslands. Litter biomass was not significantly different across the edge at either site.
3.2. Soil properties
Overall, soil properties changed across the edge; however, the pattern for total C, NH4, and pH significantly differed between sites. Total C and N were significantly higher in the perennial grasslands than croplands at both sites. At SDNWA, the edge had intermediate levels of total C and N when compared to grassland and cropland; at CLC, total C and N at the edge were more similar to croplands. NO3 showed the opposite trend to total C and N, with significantly higher values in the cropland and edge than in the perennial grassland at both sites. SDNWA had significantly higher NH4 in perennial grassland compared to edge and cropland, while at CLC, NH4 was similar across all locations. Soil pH was significantly higher in the perennial grassland at CLC compared to edge and cropland, with pH values ranging across the edge from 4.8-6.9. At SDNWA, pH was not significantly different across the edge, with values that ranged from 6.5-7.5. Overall, soil properties at edge locations were more variable at CLC than at SDNWA.
3.3. Soil microbial community
Both bacterial and fungal communities were different across the edge at CLC and SDNWA (PERMANOVA). Changes in the bacterial community were less clear, however; at SDNWA, bacterial community composition appeared to diverge more with respect to edge location than at CLC. Fungal communities at both sites appeared to have a distinct perennial grassland community compared with the edge and cropland.
3.4. Structural equation modelling
Our final SEMs, after including direct pathways from land management to both soil properties and microbial communities, were a good fit, with edge as reference (χ² p-value = 0.144, RMSEA = 0.074, CFI = 0.996) and perennial grassland as reference (χ² p-value = 0.144, RMSEA = 0.074, CFI = 0.996). We were able to explain 36% of the variation in the fungal community, which was driven primarily by land management. Cropland had a 'positive' relationship and perennial grasslands a 'negative' relationship with the fungal community when compared to the edge, indicating community composition differences; both cropland and edge had 'positive' relationships with the fungal community when compared to perennial grasslands. Therefore, the fungal community was most strongly positively influenced by the cropland, followed by the edge, and negatively influenced by perennial grasslands. Bacteria were similarly affected by the cropland and edge, and had a 'negative' relationship with cropland and edge compared with perennial grasslands.
Plant biomass had no significant relationships with soil properties, but soil properties were significantly influenced by land management. Perennial grasslands had 'positive' relationships with both total C and N compared to the edge, while cropland had a 'negative' relationship with total C. These findings are supported by the significantly higher total C and N detected in the perennial grassland and by the edge having intermediate total C at SDNWA. Similarly, land management relationships with plant biomass follow the same pattern we observed in the linear mixed effect models: the greatest plant biomass was in perennial grasslands, followed by the edge, and then cropland. While land management had direct impacts on the soil microbial community, soil properties, and plant biomass, we did not find any significant pathways from soil properties to the microbial communities. In addition, the interaction between the fungal and bacterial communities was not significant in either model.
3.5. Fungal abundance across the edge
Since changes in the fungal community were clearly related to land use, and these differences were more distinct than those in the bacterial community at both of our sites, we further examined shifts in fungal community composition across land uses. After filtering the data set to obtain the most abundant genera, 50 genera remained (from 392), and six genera were found to be significantly different in at least one location comparison (i.e., cropland vs perennial, edge vs grassland, edge vs cropland). The abundances of five of the six genera were significantly greater in the cropland than the grassland. Two of these genera, Clonostachys and Gibberella, were also found in greater abundance at the edge compared to the grassland. Paraphoma was the only genus that was significantly more abundant at the edge than in the cropland.
We investigated soil properties, the vegetation community, and the soil microbial community across edges of perennial grasslands and annual croplands. Land management had direct and indirect influences on the soil microbial community through changes in vegetation and soil properties. Edges acted as an intermediate and unique environment between the two land uses, composed predominantly of non-native weedy plants, and the edge was more similar to cropland than grassland in both plant community and soil properties.
4.1. Aboveground changes across the edge
Differences in plant community composition and biomass across the edge were largely determined by land use type. Three different vegetation communities were observed: the perennial grassland, the edge (~1 m in width), and the cropland. Unsurprisingly, cropland vegetation was strongly influenced by the crop seeded: B. napus at CLC and L. usitatissimum at SDNWA. Living biomass was greatest in grasslands, which were dominated by brome species (B. inermis and B. biebersteinii) that were seeded in previous years. Both brome species were primary contributors to biomass, as grass constituted 88% of total living biomass. Plant community composition at the edge was a mixture of grassland plants, crops, and weedy species. Weed population densities are highest near, or at, an edge because these plants are disturbance tolerant. Non-native plant presence in agriculture frequently increases plant species richness in these settings and is driven by agronomic activities. Agronomic activities, including general mechanical disturbance such as mowing, crop sowing, and harvesting, disturb the edge. While our study sites were no-till systems, croplands still experienced a higher level of disturbance than grasslands throughout the growing season. In-field herbicide and fertilizer application can have unintended effects on adjacent areas.
Herbicide and fertilizer drift can reach beyond cropland edges and affect the plant community; for example, fertilizer drift can promote faster-growing competitive plant species that outcompete others. In addition to higher nutrient availability, cropland edges have open space that allows undesirable weedy species to establish. These edge effects lend advantages to plant species that may compete with crops, reducing yields, and facilitate invasion of undesirable plants into adjacent, more natural, land use types. Management practices, such as using herbicides or doubling sown crop density, are effective in reducing weed populations at edges. However, conventional eradication attempts may bring more detriments to the larger agroecosystem: herbicide can drift into non-target areas, and weedy species can become herbicide resistant. Field edges can act as a reservoir for invasive weeds and other undesirable microbial pathogens. However, the reverse is also true: a diverse weed community can provide ecosystem services and habitat for beneficial species. Multiple management strategies are needed to successfully manage edge habitats, which are valuable to many aspects of the agroecosystem.
4.2. Belowground changes across the edge
Land management practices indirectly influenced soil physicochemical properties across perennial grassland-cropland edges through modification of the aboveground plant community, and directly through fertilizer application. We found total C and N were highest in the perennial grasslands and lowest in the cropland; this is common in agroecosystems, as soil quality is often poorer in cultivated land compared to non-cultivated land. At our sites, perennial grasslands had plant species with relatively high-quality litter that likely influenced soil properties through the deposition of rich C sources. For example, in the perennial grasslands at our sites, B. inermis and M. sativa produce large amounts of litter that quickly degrades and is high in N content with a low C:N, which can increase soil organic C and rates of soil N cycling. In addition, while the cropland is relatively productive, the majority of aboveground biomass is removed, not allowing the plant-based C to return to the soil, which is a major source of soil C. Edges are subjected to fertilizer applied to the cropland, evidenced by high spikes of NO3 in both cropland and edges. Inorganic N amendments, applied over both long and short time periods, can increase soil total N and NO3. Nitrate concentrations in edge soils were more similar to croplands, likely due to the close proximity of the edge to the cropland and inputs from surface runoff. However, our observation was only at one time point and may not provide a complete picture of N dynamics and seasonal fluctuations of NO3 in this system. Regardless, edges in agroecosystems appear to act as a buffer for nutrient movement from managed croplands into adjacent land use types.
4.3. Soil microbial community across the edge
In our study, land management appeared to have a strong influence on soil microbial community composition, as the direct pathways from land management to microbial communities were mostly significant in the SEMs. We chose to focus on community composition rather than a metric like richness because, in cases where richness is not affected, composition can detect more subtle changes.
Management practices can directly and indirectly affect soil microbial communities, and long-term practices exert selective forces on the soil microbial community, thus changing its composition as it adapts to these disturbances. Fungal community composition was different in the grassland than in the cropland, as denoted by a 'negative' impact of the perennial grassland and a 'positive' impact of the cropland and edge. Fungal community composition was also different between the edge and cropland, though not as pronounced. Bacterial community composition was also different in the perennial grasslands compared to the edge or the cropland; however, patterns of response across the land uses were not as clear for bacteria as for fungi. Bacterial communities may respond less than fungal communities to changes in land use and vegetation; similar patterns were found in no-till cropland and native prairie in Kansas and in comparisons of native and exotic grasslands. Direct relationships between land management and the microbial community are likely driven by underlying changes in soil and plants associated with land use types. Plants are an important factor affecting microbial communities, especially at our study sites, where land management created three distinct plant communities across the edge. Plant species can influence soil microbes through symbiotic relationships, root exudates, and plant litter inputs. A key difference in plant community across the edge was the dominance of annual plants in the cropland and edge, while the grassland was composed of nearly all perennial plants. Brassica species, like the B. napus planted at CLC, are non-mycorrhizal plants, which would greatly affect both the quantity and quality of AMF hyphae and spores observed, and thus could be an aspect shaping fungal community composition. The distinction between annual and perennial plants is important, as McKenna et al. (2020) found that soil fungal community composition was similar under two different perennial vegetation types, a seeded monoculture of intermediate wheatgrass (Thinopyrum intermedium (Host) Barkworth & D.R. Dewey) grassland and a native prairie. However, both perennial fungal communities were different from the fungal community under annual crop rotation. Root architecture and activity may be largely responsible for differences between annual and perennial plants, as perennial grasslands have greater root biomass and more evenly distributed and deeper roots than annual croplands. Annual plants dominated the cropland and edges, which had similar direct effects on the fungal community, suggesting that the life history strategies of dominant plants influence the fungal community. Although we did not observe significant pathways from soil nutrients to fungi or bacteria, we did observe a strong influence of land use on soil nutrients. The perennial grassland had more total N, likely due to more biomass, but high NO3 and NH4 were observed in the cropland. Fertilizers containing N can reduce fungal diversity and richness, possibly related to NO3. However, others have found no effect of N fertilizers on fungal diversity or richness, but differences in fungal community composition. Increased N availability, specifically NO3, may be disrupting natural plant-soil feedback relationships.
By increasing the N available to soil fungi or interrupting available C exudates via the N available to plants, NO3 can alter community composition by promoting or suppressing fungi with different life history strategies under the altered soil conditions . Higher NO3 levels in the cropland and edge may therefore have been an important driver of microbial community composition, specifically of fungi, at our study sites. One aspect not considered directly in this analysis was the soil C to N ratio. The C:N is crucial for microbial functioning and is linked to soil microbial community composition [ – ]. Considering the soil C:N explicitly in the future would aid in understanding soil microbial community composition across the edge. Examining abundant fungal genera revealed further insight into the effect of land management on the fungal community. Plants and soil fungi often develop a stable environment together, as their interactions can provide mutual benefits, such as aiding nutrient acquisition for plants and supplying carbon sources for fungi through plant exudates . Different plant species can affect soil fungi differently, likely due to the unique soil microbiomes associated with each plant species . For example, plant species with litter high in C:N can promote Basidiomycota fungi that aid in decomposition, changing fungal community composition . The fungal genera Gibberella and Paraphoma were significantly more abundant at the edge and likely reflect the presence of both crop species and grasses. Many Gibberella species are plant pathogens that can cause significant crop diseases, such as head blight in grain crops and ear rot in corn ( Zea mays L.) . Paraphoma are common soil fungi and frequently associate with monocots . Furthermore, at the edge we found P. chrysanthemicola , a plant pathogen known to affect plants in the Asteraceae and Rosaceae families, which were found at the edge. Significant fungal genera abundant in the cropland were mostly pathogenic, including Sarocladium and Parastagonospora ; P. nodorum , a major wheat pathogen, was identified to the species level . Others have hypothesized that edges can act as a reservoir for undesirable microbial pathogens . In our study, the difference between fungal communities in croplands and edges, compared to perennial grasslands, was driven by the abundance of pathogens in these more heavily managed land uses, supporting this hypothesis.
In our study, we saw differences across the edge both aboveground and belowground; these included changes in plant community composition, soil total N and C, and soil microbial community composition. Aboveground, weedy species were most abundant at the edge and appeared to respond positively to it, where conditions created by the adjacent cropland and grassland were ideal for those species . Belowground, soil C and N were lowest in the cropland, but NO3 was highest in the cropland and edges. Soil microbial community composition differed across the edge, and fungi showed more apparent differences in community composition than bacteria. A more in-depth analysis of fungi showed that some genera were more abundant in the cropland, edge, or grassland. For a holistic understanding of agroecosystem impacts, future studies need to consider the interrelated effects of management on soil properties and plant communities, as these factors often drive changes in soil microbial communities .
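The univariate differences summarized in this conclusion (biomass, total C and N, NO3 across edge locations) were, according to the supplementary material, assessed with linear mixed models followed by Tukey-HSD post-hoc tests. A minimal, hypothetical sketch of such a comparison (object and column names are placeholders and the random-effect structure is only illustrative, not the authors' exact model) could look like this:

```r
library(lme4)
library(emmeans)

# soil_data: hypothetical data frame with columns
#   total_C  - soil total C (log-transform first if residuals require it, as noted in S2 Table)
#   location - perennial grassland / edge / cropland
#   site     - CLC or SDNWA
#   transect - sampling transect identifier

m_totC <- lmer(total_C ~ location * site + (1 | transect), data = soil_data)

# Tukey-adjusted pairwise comparisons among edge locations; comparisons of this
# kind underlie the significance letters shown in the supplementary figures
emmeans(m_totC, pairwise ~ location, adjust = "tukey")
```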
Further knowledge of the interactions between the soil microbial community, soil properties, plants, and edges in the agroecosystem will help to develop more sustainable agricultural practices and build healthier, more resilient agroecosystems.

Raw sequence fasta files and the associated metadata can be found at the National Center for Biotechnology Information (NCBI) under Bioproject PRJNA588061.

S1 Fig. A priori model used for structural equation modelling. Direct relationships are represented by straight arrows and curved arrows represent unexplained covariate relationships. The first and second axes from non-metric multidimensional scaling analyses were used to represent the fungal and bacterial communities. (TIF)

S2 Fig. Biomass across croplands and perennial grasslands. Aboveground vegetation biomass (dry weight, g/m2) across edge locations (perennial grassland (dark grey), edge (light grey), and cropland (white)) at the Conservation Learning Centre (CLC) and St. Denis National Wildlife Area (SDNWA). Boxes encompass the 25–75% quantiles of the data, while whiskers encompass 5–95%. The median is indicated by the black horizontal line, and outliers are shown as dots. Different letters indicate a significant difference (p-value < 0.05) between edge locations, determined by Tukey-HSD post-hoc tests on linear mixed models. (TIF)

S3 Fig. Soil properties across croplands and perennial grasslands. Soil properties across edge locations (perennial grassland (dark grey), edge (light grey), and cropland (white)) at the Conservation Learning Centre (CLC) and St. Denis National Wildlife Area (SDNWA). Different letters indicate a significant difference (p-value < 0.05) between edge locations, determined by Tukey-HSD post-hoc tests on linear mixed models. (TIF)

S1 Table. Indicator plant species for each edge location (perennial grassland, edge, and cropland) at the Conservation Learning Centre (CLC) and the St. Denis National Wildlife Area (SDNWA). Indicator species are also listed for Edge + Grassland and Edge + Cropland, which combine the edge points with the grassland or cropland points on the transect, respectively. (DOCX)

S2 Table. F-values (p-values) from linear mixed models for biomass (g/m2) and soil properties (total C and total N (%), NH4 and NO3 (µg/g soil), and pH) across edge location (perennial grassland, edge, and cropland), site (Conservation Learning Centre and St. Denis National Wildlife Area), and their interaction. Significant p-values are bolded and log-transformed data are denoted by †. (DOCX)

S3 Table. Results from the PERMANOVA for bacteria and fungi at the Conservation Learning Centre and St. Denis National Wildlife Area. (DOCX)

S4 Table. Estimated parameters from both final structural equation models. (DOCX)
Quantification of diversity sampling bias resulting from rice root bacterial isolation on popular and nitrogen-free culture media using 16S amplicon barcoding
d6d4cb18-6cea-4080-af1c-761085046715
10079111
Microbiology[mh]
Plants interact continuously with a microbiota that plays an important role in their health, fitness and productivity. In the last 10 years, the low-cost accessibility of next generation sequencing (amplicon-based sequencing and metagenomics) to scientists has enabled extensive description of the diversity of this microbiota on many model and non-model plants (e.g. in Arabidopsis and wheat ). For rice, the microbiome has been widely described in different countries and rice culture practices [ – ]. This wealth of data now provides a good overview of the main bacterial and fungal taxa inhabiting underground plant tissues (roots and rhizosphere), as well as those in above-ground parts (phyllosphere and endosphere). The diversity determined using amplicon-barcode approaches is mainly based on fragments of ribosomal taxonomic markers such as the 16S and 18S rRNA genes, with taxonomic resolution often restricted to the genus level. To capture more of the microbial diversity and its structural representativeness, several studies have been carried out using a combination of markers at different resolution levels, ranging from general (16S V3-V4 or V4 for prokaryotes, 18S V4 for microeukaryotes) to more resolutive markers ( gyrB or rpoB fragments for bacteria, ITS1/ITS2 for fungi) [ – ]. Bioinformatic analysis of amplicon barcode data has also involved several novel strategies, ranging from operational taxonomic unit (OTU) clustering at different identity percentages to more advanced clustering methods using swarming algorithms , in addition to methods inferring true amplicon sequence variants (ASV) . Harnessing plant microbiota diversity with regard to plant nutrition or tolerance to pathogens, for instance, relies on the isolation and culturing of the taxonomic and/or functional diversity of the microbiota . The capacity to culture and store such diversity allows us to design synthetic communities and test their various compositions on plant growth and health . In parallel, different culturomics approaches have been developed to capture the bacterial diversity of plant microbiota, including culture media supplementation with various compounds, simulated natural environments, diffusion chambers, soil substrate membrane systems, isolation chips, single cell microfluidics , or limiting dilutions on plates combined with dual barcode processing . Substantial improvements in diversity sampling have also been achieved by supplementing popular media with plant compounds or using plant-based media, while microbiologists continue to develop alternative culture methods to highlight rare and unculturable plant-associated microorganisms . Several functional prediction tools, such as PICRUSt2 , have recently been developed to predict functional enrichment in metagenomes and even in 16S amplicon barcoding data. In theory, such tools could allow the identification of metabolic and ecological functions that are enriched in culture-independent compared to culture-dependent approaches, in order to guide culture media design or highlight culturing conditions that could help capture them . It is well recognized in the microbiology community that commonly used non-selective bacterial media, such as Luria-Bertani broth (LB), R2A, nutrient agar (NA) and tryptic soy agar (TSA), are conducive to strong bias in the sampled diversity recovered from plant tissues .
This bias has never, to the best of our knowledge, been quantified or documented in terms of proportions using next generation sequencing (NGS) amplicon-based technologies. Other media, such as Norris glucose nitrogen-free medium (NGN) and nitrogen-free medium (NFb) , have been successfully designed to isolate dinitrogen-fixing bacteria, but the proportion of the dinitrogen-fixing community's diversity that they recover remains unclear. In this study, we employed both culture-independent (CIA) and culture-dependent (CDA) approaches to analyse bacterial diversity in rice roots and rhizosphere soils. Specifically, we used 16S amplicon barcode sequencing to analyse DNA directly extracted from the plant samples (CIA), as well as from mass bacterial cultures of varying dilutions plated on different media (CDA), including a popular medium for isolating plant-associated bacteria (TSA at 10 and 50%), a plant-based medium (rice flour), and two nitrogen-free media (NFb, NGN). The objectives of this study were: i) to quantify the bias in bacterial diversity introduced by the CDA compared to the CIA; ii) to determine the proportions of enriched bacterial genera per medium; and iii) to use functional prediction tools on amplicon data to identify specific metabolic functions or bacterial capacities present in the rice root microbiota that are missing from the CDA. Our hypothesis was that the culture-dependent approach (CDA) used in this study, which involved high-throughput sequencing of DNA pooled from the culture media, would help overcome the issue of losing slow-growing bacteria and provide a more accurate assessment of the culturable bacterial diversity. This approach may increase the percentage recovery of bacteria and yield a more comprehensive picture of the bacterial diversity that better reflects the real diversity present in the plant samples.

Rice root sampling and processing

Oryza sativa ssp. indica cv. FKR64 plant roots were collected in a rice field near Bama village (western Burkina Faso, Kou Valley, 10.64384 N, -4.8302 E). This field had already been assessed in a previous study and described by Barro et al. . Rice sampling was authorized by a national agreement between the Burkina Faso government and farmers within the framework of a rice productivity improvement program involving INERA. Rice plants were sampled at the panicle initiation growth stage, with three sampling points chosen 10 m apart, where roots were collected from three plants (20 cm apart). Roots were hand-shaken to remove non-adherent soil. Ten roots per plant from the same sampling point were pooled to obtain three final samples in 50 mL Falcon tubes containing 30 mL of sterile PBS buffer, which were vortexed for 5 min to separate the rhizospheric soil from the roots. Roots were removed with sterile forceps and placed in new 50 mL Falcon tubes. From this treatment step onwards, the rhizosphere (Rh) and root (Ro) samples were handled separately ( ). The rhizosphere soil in PBS was vortexed for 10 sec, and two 1 mL samples of the rhizosphere suspension were then taken after 15 sec and placed in two separate 2 mL Eppendorf tubes, to be used in the bacterial culture-dependent (CDA) or culture-independent (CIA) approach for diversity estimation by 16S amplicon barcoding. Similarly, washed roots were cut into 2 cm fragments, and then divided and placed in two 2 mL Eppendorf tubes for CDA and CIA assessment.

Bacterial culture isolation media

Four culture media with different carbon and nitrogen sources were used to maximize the isolated bacterial diversity.
First, non-selective tryptic soy agar (TSA, Sigma) medium was used at 10% (TSA10) and 50% (TSA50) concentration. It contained digests of casein and soybean meal, NaCl and agar. In addition, two nitrogen-free media were used for the isolation of potential nitrogen fixers: semi-solid NFb and Norris glucose nitrogen-free medium (NGN, M712, ). NFb was used as a semi-solid medium, which allows free-living nitrogen-fixing bacteria to develop and grow at an optimal depth where micro-aerobic conditions are favourable for nitrogen fixation . Finally, we included a plant-based medium, rice flour (RF), which is commonly used for the isolation of fungal rice pathogens . The compositions of the above culture media were as follows. TSA 10% (g/L): 0.5 NaCl, 1.7 pancreatic digest of casein, 0.3 papaic digest of soybean meal, 0.25 dextrose, 0.25 K2HPO4. NGN (g/L): 1.0 K2HPO4, 1.0 CaCO3, 0.2 NaCl, 0.20 MgSO4·7H2O, 0.01 FeSO4·7H2O, 0.005 Na2MoO4·2H2O, with a glucose carbon source (10 g/L), at pH 7. NFb (g/L): 0.5 K2HPO4, 0.2 MgSO4·7H2O, 0.1 NaCl, 0.02 CaCl2·2H2O, 4.5 KOH, 5 malic acid, 2 mL of micronutrient solution ((g/L) 0.04 CuSO4·5H2O, 0.12 ZnSO4·7H2O, 1.40 H3BO3, 1.0 Na2MoO4·2H2O, 1.175 MnSO4·H2O), 2 mL of bromothymol blue (5 g/L in 0.2 N KOH), 4 mL of Fe-EDTA (16.4 g/L solution), 1 mL of vitamin solution ((mg/0.1 L) 10 biotin, 20 pyridoxal-HCl), with the pH adjusted to 6.5. RF (g/L): 20 rice flour (prepared from seeds of the FKR64 rice variety), 2.5 yeast extract. Solid and semi-solid media were obtained by adding 2% and 0.16% agar, respectively.

Culture-dependent (CDA) and independent (CIA) approaches

For the CDA, roots (200 mg) and rhizosphere soil (200 mg) were transferred into PowerBead Tubes from the DNeasy PowerSoil kit (QIAGEN), 1 mL of PBS buffer was added, and the samples were homogenized in a TissueLyser II (QIAGEN) for 2 min ( ). Dilutions (10^-2 to 10^-5) were prepared and 50 μL of each dilution was spread on the solid culture media (TSA 10%, TSA 50%, NGN, RF). For the NFb medium, 50 μL of the 10^-1 root and rhizosphere soil suspensions were inoculated into 20 mL tubes containing 10 mL of NFb semi-solid medium. Each dilution was inoculated (on plates or in tubes) in 4 replicates. After 2 to 5 days of incubation at 28°C (depending on the culture medium), plates were examined and dilutions were selected for further processing (details in ). For the selected dilutions, cultivable bacteria were recovered from the petri plates by adding 1 mL of sterile distilled water and scraping and mixing the bacterial colonies. Bacterial suspensions obtained from plates of the same dilution were collected with a pipette and transferred to sterile 15 mL Falcon tubes. For the NFb medium, bacteria that had grown in a ring 0.2–0.3 cm below the surface of the medium were collected. Bacterial suspensions were stored at -20°C until DNA extraction. The number of cultivable bacteria in the obtained suspensions was roughly estimated by measuring the optical density (OD) at 600 nm of all suspensions and adjusting them to about 10^6 cells (assuming that an OD600 of 1 corresponds to 1 × 10^8 bacteria/mL). The volumes collected from the samples were centrifuged for 10 min at 14,000 rpm, and the pellets obtained were used for DNA extraction. For the culture-independent approach (CIA), pooled roots were homogenized in liquid nitrogen using a mortar and pestle, while the pooled rhizosphere samples were used directly for DNA extraction ( ). A mass of 250 mg was used for DNA extraction from both sample types.
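As a worked example of the OD600-based cell-number adjustment described above for the CDA suspensions, and using only the stated assumption that an OD600 of 1 corresponds to about 1 × 10^8 bacteria/mL, the short R sketch below computes the volume of a plate-wash suspension that contains roughly 10^6 cells. The OD value shown is purely illustrative.

```r
# Assumption from the text: OD600 of 1 ~ 1e8 bacteria/mL
od600        <- 0.45                  # example measured OD600 of a plate-wash suspension
cells_per_ml <- od600 * 1e8           # estimated cell density (cells/mL)
target_cells <- 1e6                   # cells wanted for DNA extraction

volume_ul <- 1000 * target_cells / cells_per_ml
round(volume_ul, 1)                   # ~22.2 uL of this suspension holds ~1e6 cells
```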
DNA extraction

Cultivable bacteria suspensions (≈ 10^6 cells) and ground roots and rhizosphere soil (250 mg) were transferred to PowerBead tubes (DNeasy PowerSoil, Qiagen) containing C1 buffer and homogenised in a TissueLyser II (Qiagen) at 240 rpm for 2 x 1 min. Extraction was then performed according to the protocol provided by the supplier.

16S amplicon-barcoding data production

Quality control of DNA, PCR amplification, library construction and MiSeq Illumina sequencing were performed by Macrogen (Seoul, South Korea) using the 337F (16S_337F, 5'-GACTCCTACGGGAGGCWGCAG-3') and 805R (16S_805R, 5'-GACTACCAGGGTATCTAATC-3') primers to amplify the V3-V4 region of the 16S rRNA gene . The sequencing data (fastq) for this study are accessible in the ENA (European Nucleotide Archive, https://www.ebi.ac.uk/ena ) database under the PRJEB55863 (ERP140807) bioproject.

Bioinformatics analysis of 16S amplicons

For this study, we performed all diversity analyses using an amplicon sequence variant (ASV) detection approach (DADA2 pipeline), but we also compared the diversity with an OTU clustering method (based on FROGs, ). For the ASV analysis, raw amplicon barcoding data were demultiplexed and processed using the Bioconductor Workflow for Microbiome Data Analysis . This workflow is based on DADA2, which infers amplicon sequence variants (ASV) from raw sequence reads. Forward and reverse reads were trimmed by 20 bp to remove primers and adapters, and then quality-truncated at 280 and 205 bp, respectively. The dada2 denoise-paired function with default parameters was used to correct sequencing errors and infer exact amplicon sequence variants (ASVs). The forward and reverse corrected reads were then merged with a minimum 20 bp overlap, and the removeBimeraDenovo function from DADA2 was used to remove chimeric sequences. Eighty-two percent of reads passed chimera filtering. The numbers of reads filtered, merged and non-chimeric are indicated in . A mean of 58.6% of reads passed all filters (denoising, merging, non-chimeric), with a minimum of 15,347 and a maximum of 31,134 reads in filtered libraries, yielding a total of 2,712 ASV. ASV were then assigned taxonomically using the DADA2 AssignTaxonomy function with the Silva 16S reference database (silva_nr_v132_train_set) . We subsequently filtered out organelle-derived reads (especially mitochondria from root samples) to keep only ASVs assigned to the Bacteria or Archaea kingdoms. A final filtering step removed ASVs with fewer than 10 reads across all libraries. A dataset of 1,647 ASV was used for subsequent diversity analyses. A neighbour-joining phylogenetic tree of the 1,647 ASV was constructed using MEGA11 by first aligning the ASV sequences with MUSCLE and then building the tree from a distance matrix corrected with the Kimura two-parameter method. Metadata, ASV tables and the phylogenetic tree were uploaded to the NAMCO server for downstream microbiota diversity analyses ( https://exbio.wzw.tum.de/namco/ ). NAMCO is a microbiome explorer server based on a set of R packages, including Phyloseq for diversity analyses and PICRUSt2 for functional predictions . Alpha-diversity analyses (observed richness, Shannon and Simpson diversity, statistical testing with pairwise post-hoc Dunn tests) were performed with Phyloseq and the tidyverse, ggpubr, rstatix and multcompView R packages, and plotted with ggplot2. Beta-diversity analyses (NMDS, PERMANOVA) were performed with Phyloseq and Vegan.
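The authors' full R scripts are available on the GitHub repository cited in the next paragraph; the sketch below is only a minimal, generic reconstruction of the DADA2 steps described here (20 bp trimming, 280/205 bp truncation, denoising, merging with a 20 bp minimum overlap, chimera removal and Silva v132 taxonomy assignment), not the published pipeline. File paths are placeholders.

```r
library(dada2)

# Placeholder paths to demultiplexed paired-end fastq files
fnFs   <- sort(list.files("raw_reads", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files("raw_reads", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Trim primers/adapters (20 bp) and quality-truncate at 280 / 205 bp
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     trimLeft = 20, truncLen = c(280, 205),
                     compress = TRUE, multithread = TRUE)

# Learn error rates and infer exact amplicon sequence variants (ASVs)
errF   <- learnErrors(filtFs, multithread = TRUE)
errR   <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge pairs (>= 20 bp overlap), build the ASV table, remove chimeras
merged        <- mergePairs(dadaFs, filtFs, dadaRs, filtRs, minOverlap = 20)
seqtab        <- makeSequenceTable(merged)
seqtab_nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)

# Taxonomic assignment against the Silva v132 training set (placeholder file name)
taxa <- assignTaxonomy(seqtab_nochim, "silva_nr_v132_train_set.fa.gz", multithread = TRUE)
```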
PICRUSt2 functional predictions were performed to infer metabolic capacities from our 16S amplicon ASV. Functions were predicted in three classes: enzyme classification (EC), KEGG orthology (KO) and molecular pathways (PW). Data were normalised to relative abundance, and a Kruskal-Wallis test was performed across conditions (medium used for CDA, and CIA) with the ALDEx2 package . Circular phylogenetic tree annotations and mapping were obtained with iTOL . Additional R scripts for the DADA2 pipeline, Phyloseq, and the production of figures are freely available on GitHub ( https://github.com/lmoulin34/Article_Moussa_culturingbias ). For the OTU clustering approach, the FROGs pipeline (; http://frogs.toulouse.inra.fr/ ) was used in the Galaxy environment. After demultiplexing and pre-processing, reads were clustered into OTU using the swarming method with default parameters (aggregation distance of 3), chimeric sequences were then removed, and OTU were assigned to taxonomic levels using the same Assign taxonomy tool as described above.
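PICRUSt2 itself runs outside R; the sketch below only illustrates, with hypothetical file and object names, how a table of predicted functions could then be compared across the CIA and CDA conditions with ALDEx2's Kruskal-Wallis option, in the spirit of the analysis described above. PICRUSt2 predictions are fractional, so they are rounded to counts here; this is one possible convention, not necessarily the authors' exact procedure.

```r
library(ALDEx2)

# Placeholder path to a PICRUSt2 prediction table (functions in rows, samples in columns)
pred        <- read.delim("picrust2_out/pred_metagenome_unstrat.tsv",
                          row.names = 1, check.names = FALSE)
pred_counts <- round(as.matrix(pred))          # ALDEx2 expects integer counts

# One condition label per sample (CIA, TSA10, TSA50, NGN, NFb, RF); hypothetical metadata object
conds <- as.character(sample_metadata$condition)

# Kruskal-Wallis / glm-type tests across the six conditions on CLR-transformed data
kw_res <- aldex(pred_counts, conds, mc.samples = 128, test = "kw")
head(kw_res[order(kw_res$kw.eBH), ])           # functions with smallest BH-adjusted p-values
```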
Quality filtering and diversity indices of 16S amplicon libraries (CIA versus CDA)

We first assessed the quantity and quality of reads produced for each amplicon library originating from direct rice root or rhizosphere genomic DNA extraction (CIA) or from DNA extracted from cultures (CDA) of the same samples grown on bacterial culture media. A range of 24,000 to 44,000 reads (mean 36,120) was obtained for all 16S amplicon libraries ( ). Rarefaction curves ( ) showed saturation of the sampled diversity for each library, with a clear difference between the CIA reads (much higher alpha diversity) and the CDA. After DADA2 pipeline processing, we obtained 2,712 amplicon sequence variants (ASV) that were assigned taxonomically using the Silva database. One library (S36, from the CIA) was removed from the analysis as it contained only 3 ASV. For the remaining libraries, ASV were filtered with regard to their abundance (cumulative reads ≥ 10 across all libraries), and mitochondria, chloroplast and eukaryote reads were removed (remaining ASV = 1,647). We first compared the diversity obtained from root (Ro) and rhizosphere (Rh) samples. There was no statistical difference in ASV alpha diversity (Shannon index) or beta diversity (PERMANOVA) between Ro and Rh samples ( ). These results could be explained by the fact that we did not surface-disinfect the roots or remove the rhizoplane, so the rhizosphere (soil adhering to roots) and the root (rhizoplane + endosphere) from the same samples did not show significant differences. As the focus of this study was to compare the diversity obtained from a culture-independent versus a culture-dependent approach on different media, we pooled the Rh and Ro data from the same plant samples for all subsequent analyses. The bacterial sequences obtained by the CIA exhibited significantly higher alpha diversity than those obtained from the five CDA media (TSA10, TSA50, NGN, NFb, RF) (Shannon or Simpson index, Kruskal-Wallis test, p = 0.002; ). The alpha diversities of the TSA, RF and nitrogen-free media were not statistically different ( ). The ASV richness sampled from each medium represented about 15% of the diversity of all ASV detected in both CIA and CDA (TSA10: 16%, TSA50: 14.9%, NFb: 17%, NGN: 17%), except for RF (11%), which captured less diversity, while the CIA represented 67%. NMDS on the beta diversity showed no overlap between ASV obtained from the different media (CDA) and the CIA ( ; PERMANOVA, R2 (linear fit) = 0.88, p = 0.001). A substantial overlap was observed between TSA10 and TSA50, which was expected since they are the same medium used at two different concentrations.

Culturable sampled diversity: Comparison between ASV and OTU

We also analysed our amplicon barcoding reads using an OTU-clustering approach (FROGs pipeline, using the swarming method to merge reads into OTU). This approach produced 1,023 OTU after quality filtering (the same filtering as for the ASV analysis). We then assessed whether the diversity obtained by OTU gave the same percentage diversity recovery as the ASV approach.
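The recovery percentages compared in the next paragraph can be computed directly from any feature table (ASV or OTU); the sketch below shows the basic bookkeeping, with hypothetical object names (asv_counts, approach), and is only an illustration of the calculation, not the authors' code.

```r
# asv_counts: hypothetical feature table (ASVs or OTUs in rows, samples in columns)
# approach:   character vector labelling each sample/column as "CIA" or "CDA"

in_cia <- rowSums(asv_counts[, approach == "CIA", drop = FALSE]) > 0
in_cda <- rowSums(asv_counts[, approach == "CDA", drop = FALSE]) > 0

n_total  <- nrow(asv_counts)
n_shared <- sum(in_cia & in_cda)

c(total      = n_total,
  CIA_only   = sum(in_cia & !in_cda),
  CDA_only   = sum(!in_cia & in_cda),
  shared     = n_shared,
  pct_shared = round(100 * n_shared / n_total, 1))
```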
In , we present the number of ASV and OTU obtained from the culture-dependent approach (CDA) and from the culture-independent approach (CIA), as well as the number of classes, orders and families represented in each. The ASV analysis produced more richness (38% more) than the OTU analysis. This higher diversity was observed at different taxonomic levels: class (ASV: 50; OTU: 38), order (ASV: 124; OTU: 67), and family (ASV: 219; OTU: 119). Given this result, we conducted all subsequent analyses with the ASV data, as they better captured the diversity of our 16S amplicon libraries. In both analyses, the diversity shared between CDA and CIA was relatively low (7% for ASV, 22% for OTU). From the culturable approach, we thus recovered many bacterial taxa that were undetected in the amplicon sequencing performed on gDNA extracted from roots or the rhizosphere, yet only a small proportion of the root bacteria were able to grow on our culture media.

Comparison of bacterial taxonomic diversity between culture-independent (CIA) and culture-dependent (CDA) approaches

Taxonomic binning was performed at different taxonomic levels for the top 30 phyla and the top 25 classes, orders and genera ( ). The phylum distribution showed a dominance of Proteobacteria, Bacteroidetes and Firmicutes in all libraries, with a clearly higher diversity of phyla in the CIA samples. We identified 22 bacterial phyla in the rice root sample microbiota, with only 5 present in the CDA (Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Verrucomicrobia). The Proteobacteria phylum was the most abundant in all samples, with a greater proportion noted on the rice flour culture medium. At the class level, the difference in diversity was even more visible, with Gammaproteobacteria, Alphaproteobacteria and Bacteroidia dominating in the CDA, while high class diversity was present in the CIA ( ). At the order level, the CIA showed (as expected) high diversity, while the CDA data were dominated by Enterobacteriales, Betaproteobacteriales, Rhizobiales and Flavobacteriales. Finally, among the top 25 genera, differences among CDA libraries clearly appeared, with the exception of the Enterobacter genus, which was enriched in all of them (although to a lesser extent for NFb) ( ). In the CIA, Devosia was the most represented genus. To better visualize the sampled diversity distribution, we built a phylogenetic tree of ASV (diversity labelled at the class level) and mapped their distribution and abundance in the different conditions (coloured outer circles) ( ). This representation clearly highlights which taxa are sampled and over-represented with the media used in the CDA (e.g. Gammaproteobacteria in blue or Firmicutes in pink), and which whole sections of the bacterial diversity were missed compared to the CIA (e.g. Patescibacteria, Armatimonadetes, Deltaproteobacteria, Planctomycetes, Chloroflexi).

Statistical differential analyses between CIA and CDA at class and genus levels

We performed a Kruskal-Wallis test (α = 0.05, with the Bonferroni multiple test correction method) to identify classes of bacteria with significant differences among the CDA and CIA conditions. The test identified 45 classes of bacteria that passed the significance cut-off (p < 0.05), 37 of which were present only in the CIA ( ), including, among the top 10 most frequent class taxa, Ignavibacteria, Saccharimonadia, Fibrobacteria and Acidobacteria.
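A class-level differential test of the kind just described can be run on relative abundances agglomerated with phyloseq; the sketch below is a generic illustration, not the authors' code, and assumes a hypothetical phyloseq object `ps` with a sample variable 'condition'.

```r
library(phyloseq)

# Agglomerate ASVs at the class level and convert counts to relative abundances
ps_class <- tax_glom(ps, taxrank = "Class")
ps_rel   <- transform_sample_counts(ps_class, function(x) x / sum(x))

rel_mat <- as(otu_table(ps_rel), "matrix")
if (!taxa_are_rows(ps_rel)) rel_mat <- t(rel_mat)           # ensure classes are rows
classes   <- as.character(tax_table(ps_rel)[, "Class"])
condition <- factor(sample_data(ps_rel)$condition)          # CIA, TSA10, TSA50, NGN, NFb, RF

# Kruskal-Wallis test per class with Bonferroni correction, as in the text
pvals <- apply(rel_mat, 1, function(x) kruskal.test(x, condition)$p.value)
res   <- data.frame(Class = classes,
                    p     = pvals,
                    p_adj = p.adjust(pvals, method = "bonferroni"))
head(res[order(res$p_adj), ])
```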
Four classes were present in both the CIA and CDA: Alphaproteobacteria, Gammaproteobacteria, Bacteroidia and Actinobacteria, with Alphaproteobacteria and Gammaproteobacteria being the most represented in the CIA and CDA, respectively (also visible in ). We then performed differential analyses on the mean relative abundance of bacterial genera in each condition, using a Kruskal-Wallis test (α = 0.05). The 50 most abundant genera in the CIA and their mean relative abundance in each media dataset are shown in (the whole dataset is available in ). Among the 20 most frequent bacterial genera in the CIA, eleven were detected in the CDA. These were Devosia (8.25% of all genera), obtained on TSA10, TSA50 or NFb media; followed by Pseudoxanthomonas (3.62%), which was found in all media conditions except RF; then Stenotrophomonas (3.36%), Bacillus (2.29%), Pseudomonas (1.42%) and Allo / Neo / Para / Rhizobium (1.3%), found in all media; and finally Sphingopyxis (2.1%), detected in TSA50; Streptomyces (1.48%), in NGN; and Pseudolabrys (1.47%), in NFb. We built Venn diagrams of shared and specific diversity at the ASV ( ) and genus levels ( ). Among the 244 genera from the CIA, 173 (71%) were absent from the culturable approach, while 71 were shared (29%) and 70 others were CDA-specific ( ). We also compared the genus diversity sampled in each CDA medium, and the genera specific to each medium are listed in the Venn diagram in . To document the genera that were most frequent in the culturable approach for each medium, a table of the 20 most statistically frequent genera (Kruskal-Wallis test, α = 0.05) obtained for each medium of the CDA is given in . Among these top 20 most frequent genera, several appeared in all media: Enterobacter , Stenotrophomonas , Bacillus , Sphingobacterium , Klebsiella , Brevundimonas and Rhizobium , all of which are known to be fast growers on rich media and are reported to contain plant-inhabiting species. On the nitrogen-free media, species known as nitrogen-fixing plant growth-promoting rhizobacteria (PGPR) were sampled: Azospirillum , Para / Burkholderia , Bradyrhizobium , Sphingomonas , etc.

Prediction of enriched functions in CIA compared to the culture-based approach

We performed a functional prediction analysis using PICRUSt2 to infer metabolic capacities from our 16S amplicon ASV. In order to assess the predictive ability of the PICRUSt2 algorithm on our dataset, we focused on the prediction of the specific enzyme nitrogenase (EC 1.18.6.1) in CDA libraries that included media with (TSA, RF) or without nitrogen (NGN, NFb) ( ). As expected, we observed nitrogenase enrichment (p = 0.00492) in the nitrogen-free NFb and NGN media, with the NGN medium exhibiting much higher enrichment than NFb. The non-selective medium (TSA) and the plant-based medium (RF) did not enrich bacterial taxa with the nitrogenase function ( ). We also aimed to predict which functional pathways were specific to the CIA compared to the CDA, in order to help design conditions to capture the as-yet uncultured diversity. We thus analysed the metabolic pathways (based on PW/Metacyc categories) predicted as being enriched in the CIA compared to the CDA conditions, and represented the results in a dot-plot ( ). Among the detected Metacyc pathways enriched in the CIA, several functions linked to specific ecological niche abilities were detected: anaerobic/fermentation metabolism, carbon dioxide fixation, bacterial photosynthesis, methanotrophy and methylotrophy.
As our CDA culture conditions were aerobic and in the dark, this enrichment was logical and gave clues about the culture conditions that could ultimately capture more bacterial diversity. Enriched pathways in the TSA and RF media libraries (compared to the others) could be linked to heterotrophy on rich media under aerobic conditions (sugar degradation; amino acid, lipid, nucleotide and vitamin biosynthesis). For the nitrogen-free media (compared to the others), several pathways were detected, such as phenolic compound, polyamine and amino acid degradation, and sugar degradation. Nitrogen fixation does not appear as such among the Metacyc pathways; it is embedded in "nitrogen metabolism" together with "nitrification" and "denitrification" capacities, among others, so no pattern of nitrogen-fixation ability could be derived apart from the analysis of the EC for the nitrogenase enzyme ( ).
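The pathway-enrichment dot-plot referred to above is straightforward to reproduce once the predicted pathway abundances have been summarized per condition; the ggplot2 sketch below is a generic example with a hypothetical summary data frame (pathway_summary), not the published figure code.

```r
library(ggplot2)

# pathway_summary: hypothetical data frame with one row per pathway x condition,
# columns: pathway, condition (CIA, TSA10, TSA50, NGN, NFb, RF),
#          mean_rel_abund (mean relative abundance) and p_adj (adjusted p-value)
ggplot(pathway_summary,
       aes(x = condition, y = pathway, size = mean_rel_abund, colour = p_adj)) +
  geom_point() +
  scale_size_continuous(name = "Mean relative abundance") +
  scale_colour_gradient(name = "Adjusted p-value", low = "darkred", high = "grey70") +
  theme_bw() +
  labs(x = NULL, y = NULL)
```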
We also analysed our amplicon barcoding reads using an OTU-clustering approach (FROGs pipeline, using the swarming method to merge reads into OTU). This approach produced 1,023 OTU after quality filtering (same as for the ASV analysis). We then assessed if the diversity obtained by OTU gave the same percentage diversity recovery compared to ASV. In , we present the number of ASV and OTU obtained from the culture-dependent approach (CDA) and from the culture-independent approach (CIA), as well as the number of classes, orders and families represented in each. The ASV analysis produced more richness (38% more) than the OTU analysis. This higher diversity was observed at different taxonomic levels: class (ASV:50; OTU:38), order (ASV: 124; OTU:67), and families (ASV:219; OTU:119). Given this result, we conducted all subsequent analyses with ASV-analysed data as it was better at capturing the diversity of our 16S amplicon libraries. In both analyses, the diversity shared between CDA and CIA was relatively low (7% for ASV, 22% for OTU). From the culturable approach, we thus recovered many bacterial taxa that were undetected in the amplicon sequencing performed on gDNA extracted from roots or the rhizosphere, yet only a small proportion of the root bacteria were able to grow on our culture media. Taxonomic binning was performed at different taxonomic levels for the top 30 phyla and the top 25 classes, orders and genera ( ). The phylum distribution showed a dominance of Proteobacteria, Bacteroidetes and Firmicutes in all libraries, with a clearly higher diversity of phyla in the CIA samples. We identified 22 bacterial phyla in the rice root sample microbiota, with only 5 present in the CDA (Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Verrucomicrobia). The proteobacteria phylum was the most abundant in all samples, with a greater proportion noted on the rice flour culture medium. At the class level, the difference in diversity was even more visible with Gammaproteobacteria, Alphaproteobacteria and Bacteroidia dominating in the CDA, while high class diversity was present in the CIA ( ). At the order level, the CIA showed (as expected) high diversity, while the CDA data were dominated by Enterobacteriales, Betaproteobacteriales, Rhizobiales and Flavobacteriales. Finally, in the top 25 genera, differences among CDA libraries clearly appeared, with the exception of the Enterobacter genus which was enriched in all (although to a lesser extent for NFb) ( ). In the CIA, Devosia was the most represented genus. To better visualize the sampled diversity distribution, we built a phylogenetic tree of ASV (diversity labelled at the class level) and mapped their distribution and abundance in the different conditions (coloured outer circles) ( ). This representation clearly highlights which taxa diversity is sampled and over-represented with the media used in the CDA (e.g. Gammaproteobacteria in blue or Firmicutes in pink), and which whole parts of bacterial diversity were missed compared to the CIA (e.g. Patescibacteria, Armatimonadetes, Deltaproteobacteria, Planctomycetes, Chloroflexi). We performed a Kruskal-Wallis test (ɑ = 0.05, with the Bonferroni multiple test correction method) to identify classes of bacteria with significant differences among CDA and CIA conditions. 
The statistical test identified 45 classes of bacteria that met the significance cut-off (p < 0.05), 37 of which were present only in the CIA ( ), including in the top 10 most frequent class taxa: Ignavibacteria, Saccharimonadia, Fibrobacteria and Acidobacteria. Four classes were present in both the CIA and CDA: Alphaproteobacteria, Gammaproteobacteria, Bacteroidia and Actinobacteria, with Alphaproteobacteria and Gammaproteobacteria being the most represented in the CIA and CDA, respectively (also visible in ). We then performed differential analyses on the mean relative abundance of bacterial genera in each condition, using a Kruskal-Wallis test (ɑ = 0.05). shows the 50 most abundant genera in the CIA and their mean relative abundance in each media dataset (the whole dataset is available in ). Among the 20 most frequent bacterial genera in the CIA, eleven were detected in the CDA. These were Devosia (8.25% of all genera), obtained on TSA10, TSA50 or NFb media; followed by Pseudoxanthomonas (3.62%), which was found in all media conditions except RF; then Stenotrophomonas (3.36%), Bacillus (2.29%), Pseudomonas (1.42%) and Allo / Neo / Para / Rhizobium (1.3%), found in all media; and finally Sphingopyxis (2.1%), detected in TSA50; Streptomyces (1.48%) in NGN and Pseudolabrys (1.47%) in NFb. We built Venn diagrams on shared and specific diversity at the ASV ( ) and genus levels ( ). Among the 244 genera from the CIA, 173 (71%) were absent from the culturable approach, while 71 were shared (29%) and 70 others were CDA-specific ( ). We also compared the genus diversity sampled in each CDA medium, and we listed the specific genera obtained for each medium on the Venn diagram in . To document the genera that were most frequent in the culturable approach for each medium, a table of the 20 most statistically frequent genera (Kruskal-Wallis test, ɑ = 0.05) obtained for each medium of the CDA is given in . Among these top 20 most frequent genera, several appeared in all media: Enterobacter , Stenotrophomonas , Bacillus , Sphingobacterium , Klebsiella , Brevundimonas and Rhizobium , all of which are known to be fast growers on rich media and reported to contain plant-inhabiting species. On nitrogen-free media, genera known as nitrogen-fixing Plant Growth Promoting Rhizobacteria (PGPR) were sampled: Azospirillum , Para / Burkholderia , Bradyrhizobium , Sphingomonas , etc. We performed a functional prediction analysis using PICRUSt2 to infer metabolic capacities from our 16S amplicon ASV. In order to assess the predictive ability of the PICRUSt2 algorithm on our dataset, we focused on the prediction of the specific enzyme nitrogenase (EC 1.18.6.1) in CDA libraries that included media with (TSA, RF) or without nitrogen (NGN, NFb) ( ). As expected, we observed nitrogenase enrichment (p = 0.00492) in the nitrogen-free NFb and NGN media, with the NGN medium exhibiting much higher enrichment than NFb. The non-selective medium (TSA) and the plant-based medium (RF) did not enrich bacterial taxa with the nitrogenase function ( ). We also aimed to predict which functional pathways were specific to the CIA compared to the CDA in order to help design conditions to capture the yet unculturable diversity. We thus analysed the metabolic pathways (based on PW/Metacyc categories) predicted as being enriched in the CIA compared to CDA conditions, and represented the results in a dot-plot ( ).
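Before turning to the pathway-level comparison, the nitrogenase check described above can be sketched as follows; the file names, table layout and the choice of a one-sided Mann-Whitney U test are illustrative assumptions and may differ from the authors' exact procedure.

# Sketch: is the predicted nitrogenase abundance (EC 1.18.6.1) higher in libraries grown
# on nitrogen-free media (NFb, NGN) than on nitrogen-containing media (TSA10, TSA50, RF)?
# File names, table layout and the choice of test are illustrative assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical table of predicted EC abundances: rows = EC numbers, columns = libraries
ec_pred = pd.read_csv("predicted_ec_abundances.tsv", sep="\t", index_col=0)
# Hypothetical mapping: library -> culture medium
medium = pd.read_csv("library_media.tsv", sep="\t", index_col=0)["medium"]

nitrogenase = ec_pred.loc["EC:1.18.6.1"]
n_free = nitrogenase[medium.isin(["NFb", "NGN"])]
n_rich = nitrogenase[medium.isin(["TSA10", "TSA50", "RF"])]

# One-sided test for enrichment in the nitrogen-free media
U, p = mannwhitneyu(n_free, n_rich, alternative="greater")
print(f"Mann-Whitney U = {U:.1f}, one-sided p = {p:.4f}")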
Several of the Metacyc pathways enriched in the CIA were linked to specific ecological niche abilities: anaerobic/fermentation metabolism, carbon dioxide fixation, bacterial photosynthesis, methanotrophy and methylotrophy. As our CDA culture conditions were aerobic and in the dark, this enrichment was logical and gave clues on the culture conditions that could ultimately capture more bacterial diversity. Enriched pathways in the TSA and RF media libraries (compared to others) could be linked to heterotrophy on rich media in aerobic conditions (sugar degradation, amino acid/lipid/nucleotide biosynthesis, vitamin biosynthesis). For the nitrogen-free media (compared to the others), the enriched pathways included phenolic compound/polyamine/amino acid degradation and sugar degradation. Nitrogen fixation does not appear as such among the Metacyc pathways; it is embedded in “nitrogen metabolism” together with “nitrification” and “denitrification” capacities, among others, so no pattern specific to nitrogen-fixation ability could be drawn, apart from the analysis of the E.C. number for the nitrogenase enzyme ( ). In this study we used Illumina sequencing on 16S-amplicon barcodes (variable region V3-V4) to quantify the bacterial diversity biases that occur when culturing rice root-associated bacteria on a range of culture media, compared to the real diversity. Our goal was to precisely document the bacterial taxon diversity that could be recovered from a set of culture media compared to the real diversity, together with the enriched or depleted functions that could be predicted from this diversity, while seeking clues on how to design new culture conditions to capture the missing fraction. We have used the term “real diversity” in reference to that captured by Illumina amplicon sequencing, although this approach can also be biased, since it is based on the amplification of a marker gene from a DNA matrix that could originate from dead bacteria. As rice is a non-perennial plant and we expected root and rhizosphere soil to be under high metabolic turnover, we hypothesized that DNA from dead bacteria in the culture-independent approach would not represent high diversity in our analysis. Yet this remains a possibility and could bias the culture-independent approach when comparing culture-based and culture-independent diversity analyses. Several studies have already compared culturable and real rice microbiota diversity [ – ], but they often relied on comparing regular 16S Sanger sequencing of isolated bacteria with NGS sequences. Here we used the same sequencing methodology at high depth and were able to compare diversity levels without sequencing/analytical bias. We also used two different analytical methods to infer operational taxonomic units, i.e. ASV (based on exact sequence variant detection) or OTU (based on clustering by swarming, ). As ASV analysis detected more diversity than OTU at different levels (even class and order levels, ), we preferred to use ASV for all subsequent diversity and functional predictions. As exact sequence variants rely heavily on algorithms for sequencing error detection and correction, we could not exclude that some of the obtained diversity was due to algorithm imperfections. However, given that the obtained higher diversity also concerned higher taxonomic levels, this artificial diversity issue is unlikely as it would involve a high number of mutations. In this study, the diversity obtained from the CDA culture media (TSA10, TSA50, RF, NGN, NFb) was lower than that obtained with the CIA.
If we combine all the diversity of the CDA, it represents 11.7% (ASV level), 29% (genus level), 22.4% (class level), 25.6% (order level) and 23.1% (family level) of the diversity of the CIA. As there are few comparable studies in the literature, it is hard to determine whether our recovery rate was low or high, since this is the first study to our knowledge to have assessed culturable recovery by amplicon barcoding and NGS sequencing. The review of Sarhan et al. detailed recent advances in culturomics methodologies, and established a recovery rate of about 10% for conventional chemically-synthetic culture media, which is in the range that we obtained at the ASV level (although we obtained 23 to 29% at higher taxonomic levels). Samson et al. claimed to have recovered up to 70% of bacterial genera (on 16S V5-V7 amplicon, at >97% similarity) from Oryza sativa indica and japonica rice microbiota, but they applied a 0.1% frequency cut-off. Applying the same cut-off on our dataset would indicate the detection of 121 genera in the CIA, with 36 (29%) of them present in the CDA. From all media used in the CDA, we could recover a total of 142 bacterial genera, with each medium capturing 15 to 23 specific genera ( ). The only exception was the plant-based rice flour medium, which in our study captured low bacterial diversity compared to the other media, probably due to its low composition complexity. Plant-based media have been suggested to be a good alternative to popular bacterial chemical media for increasing the cultivability of plant-associated microbes , but the use of homogenised roots, leaves or exudates has been recommended to complement minimal or more complex media. The recovery of specific ASV from the CDA that were not detected in the CIA was an unexpected finding in our study. This diversity represented 532 ASV, 1 class, 3 orders, 16 families and 70 genera ( , ). This number of ASV in the CDA may seem high (the total number in both CIA and CDA was 1,647), but it only represented a quarter of the total ASV diversity at the class, order and family levels ( ). We cannot exclude that a technical PCR bias could have increased the diversity from the CDA, since DNA polymerase errors may arise. Moreover, if there is low diversity in the DNA matrix, a diverse range of sequence variants could be produced, but these errors would only affect diversity at the species or genus levels in the amplicon sequencing, not at higher taxonomic levels as in our results. One explanation for not detecting, in the CIA, the ASV diversity found in the CDA could concern the sequencing depth, yet the rarefaction curves did reach a plateau, albeit at much higher alpha diversity for the CIA than for the CDA ( ). The mean sequencing depth obtained was 36,120. If differences between bacterial ASV frequencies exceed 10^4, then several genera may be undetected in the CIA approach, whereas they may be selected by specific culturable media. We set the read number filter at 10 (cumulated in all libraries), but we also looked at lower filtering (>2) and unfiltered ASV data ( ). In the unfiltered data, we counted 102 specific bacterial genera for the CDA, 243 for the CIA, with 90 in common, while these numbers were 70, 173 and 71 in the filtered results (10-read filter, and ), i.e. similar proportions. Processing unfiltered data thus produced similar proportions of specific ASV for the CDA compared to the CIA. Regardless of the filtering method, we detected one specific class in the CDA (undetected in the CIA), i.e. Erysipelotrichia, represented by one genus, i.e.
Erysipelothrix, and 4 ASV recovered from TSA medium (at 10 and 50% concentration). A BLAST study of these ASV sequences revealed 100% sequence identity with the 16S rDNA of Erysipelothrix inopinata, a species whose type strain was isolated from sterile-filtered vegetable broth. As our medium was sterilized by autoclaving, it is unlikely that these 4 ASV were contaminants. It should also be noted that we are not the first to have found that isolates from culturable approaches were undetected in culture-independent approaches. It would be better to assess rice microbiota diversity by substantially increasing the sequencing depth in order to get a better image of the overall diversity. The frequency differences between ASV exceeding 10^3 (at the genus level, ) that we found in our study mean that some taxa would not be detected in the CIA, while they would be in the culture-based approach. This was a crucial finding since several studies have underlined the role of rare species (also called satellite taxa) in plant-microbe interactions and more broadly in key ecosystem functions. Increasing the representativeness of taxonomic diversity in databases should also be the focus of further scientific research since many ASV cannot be affiliated to taxonomic ranks due to missing descriptions of these taxa in taxonomic databases. We also tried to predict functions and metabolic pathways that would be enriched when using different types of media, and we conducted statistical tests to highlight functions that were missing from our culturable approach. Metagenome-guided isolation and cultivation of microbes has been developed in recent years, but these approaches are based on metagenomic sequences and the reconstruction of genomes and metabolic pathways. The massive sequencing effort focused on a highly diverse range of bacteria from different environments has led to the development of genome databases and prediction tools that may be used with simple amplicon taxonomic markers. We applied a prediction tool to our dataset to investigate the ecology and functional capacities of our detected bacteria. We found that many taxa with anaerobic metabolisms such as methanogenesis (methane production), methanotrophy (methane degradation) and methylotrophy (one-carbon reduction), or with photosynthetic capacities, were missing from the CDA ( ) compared to the CIA. It is well known that rice microbiota differ from microbiota of other crops since rice is often grown in flooded conditions, thereby creating an oxic-anoxic interface between the rhizosphere/root system and the bulk soil. Our functional prediction approach thus underlined the presence of these probably strictly anaerobic bacteria adapted to anoxic conditions in the CIA, and their absence from the CDA. These predictions provided clues on the specific conditions and compositions of media required to capture these yet unculturable functional groups of bacteria. They could also serve to develop culturomics, a growing scientific field for microbiologists interested in synthetic microbiota and for biotechnological applications of plant-associated microorganisms. S1 Fig Boxplot of the Shannon alpha diversity index (1A) and NMDS of beta-diversity (1B) of root and rhizosphere 16S amplicon libraries. (DOCX) S1 Table Dilutions and incubation times used for DNA extraction in the culturable approach. (DOCX) S2 Table 16S amplicon statistics in the DADA2 pipeline.
(DOCX) S3 Table Abundance of bacterial classes detected in rice microbiota (Wilcoxon test), and their occurrence in the culturable approach. Numbers are colored according to class frequencies in each medium and CIA amplicon data. (XLSX) S4 Table Abundance of the 264 bacterial genera detected in rice microbiota (Kruskal-Wallis test), and their occurrence in the culturable approach. (XLSX) S5 Table ASV count table with taxonomic ranks. The Excel file contains: sheet 1, unfiltered data; sheets 2 & 3, filtered at 2 and 10 cumulated reads in all libraries, respectively. (XLSX)
Taking a JAB at How Gastroenterologists Can Increase Vaccination Rates in Patients with Inflammatory Bowel Disease
518a4448-a6fc-4f55-b1cf-0c532e7b9573
10079154
Internal Medicine[mh]
The past and future of industrial hygiene in Japan
969f4408-95ca-4bcf-95f1-b6cd317bbb8d
10079497
Preventive Medicine[mh]
Industrial hygiene in Japan is generally considered to have emerged in the mid-to-late 1950s. Of course, even before the 1950s, the importance of ensuring workers’ health had been recognized, mainly in the medical field; however, it was not until the “Hepburn Sandal Incident” that industrial hygiene research, which incorporated technology and information from the science and engineering fields, was launched in earnest under the leadership of the Japanese government. The Hepburn Sandal Incident was a major outbreak of occupational disease in Japan during the mid-to-late 1950s, triggered by the success of an American romantic movie. The movie “Roman Holiday”, released in Japan in 1954, was a huge hit, and the sandals worn by the lead actress (Audrey Hepburn) in the movie immediately became widely popular among young Japanese women. At this time, most footwear used in Japan, including sandals, was produced by small-scale manufacturers with only a few employees each. Unfortunately, at a time when laws and regulations to protect workers’ health were absent, most workers at sandal manufacturers were exposed to, and unprotected against, toxic solvents such as benzene used in the production processes. Benzene, which today requires extremely strict control due to its high carcinogenic potential, was not regulated in Japan at that time. Therefore, workers in sandal manufacturing workshops, many of whom were young women, were exposed to high concentrations of benzene vapor on a daily basis, which produced a large number of victims in a short period of time. The Japanese government responded promptly and promulgated the Ordinance on Prevention of Organic Solvent Poisoning in 1960 to prevent incidents of benzene poisoning, which had frequently occurred among small-scale footwear manufacturers. The ordinance was subsequently incorporated into the Industrial Safety and Health Law (1972) and has continued to shape Japanese industrial hygiene from 1960 to the present, since it specifies the methods for measuring organic solvent concentrations and the ventilation requirements for workplaces that handle organic solvents. On the other hand, the major early administrative measure in Japan for occupational dust exposure was the enactment of the Pneumoconiosis Law in 1960. Unlike the Ordinance on Prevention of Organic Solvent Poisoning mentioned earlier, the Pneumoconiosis Law regulates workers’ health care and does not provide for working environment control. Thus, it had no significant and direct impact on industrial hygiene research in Japan. The Pneumoconiosis Law was amended several times thereafter; however, even 20 years after its enactment, it had made no significant contribution to the reduction of pneumoconiosis. In 1978, the Japanese government enacted the Ordinance on Prevention of Hazards Due to Dust , which mandated the wetting and sealing of dust sources, installation of various types of ventilators, wearing of personal protective equipment, and working environment measurements. This ordinance contributed to the promotion of research on methods of measuring dust concentration, particle size, and chemical composition, as well as research on techniques to protect workers from dust, such as designing effective ventilation systems and the development of high-performance dust masks. The ordinance can be considered successful given that it promoted a decrease in the number of newly diagnosed pneumoconiosis cases from 6,842 in 1980 to 124 in 2020.
Indeed, industrial hygiene in Japan, in conjunction with various government regulations, has reduced the number of occupational diseases; however, it must be noted that the needs of industrial hygiene have gradually changed as society has evolved. Since the mid-20th century, the share of the tertiary sector in Japanese industry has steadily expanded. According to the Japanese Census, the share of tertiary-sector workers in 1950 was 29.6%, whereas that in 2019 was 71.2%. In line with this, ensuring the health of office workers, caregivers, delivery service providers, and hospitality workers, for example by preventing low back pain, muscle fatigue, eye strain, and passive smoking, has emerged as an important issue for industrial hygiene, increasing the presence of ergonomics, aerosol science, and chemical engineering. In this context, the relative presence of conventional industrial hygiene declined over time, and the “Osaka Occupational Cholangiocarcinoma Disaster (2012)” occurred. The “Osaka Occupational Cholangiocarcinoma Disaster” was an outbreak of occupational disease at a small printing factory in Osaka City, in which 17 employees developed cholangiocarcinoma, 9 of whom died. Subsequent investigations found that the primary cause of their cholangiocarcinoma was exposure to dichloropropane (DCP), which was used to clean the printing presses. However, no legal restrictions on the use of DCP were in place at this time. This industrial disease prompted Japanese labor administrators and industrial hygienists to recognize that conventional control of chemical substances through legal restrictions alone was insufficient to protect workers’ health. The Japanese government immediately designated DCP as a regulated substance while developing a new law on risk assessment for chemicals. Currently, DCP is classified as a “special organic solvent” under the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , which requires particularly strict control measures for its use. In 2016, the Industrial Safety and Health Law was amended to require discretionary risk assessment, in which chemical users have discretion over the frequency and method of their assessment, for 640 chemicals, including approximately 520 chemicals that had yet to be legally regulated. Since 2016, chemicals subject to risk assessment have been added continuously, with risk assessment being mandatory for 674 chemicals as of January 2023. In the future, the Japanese government intends to increase the number of substances subject to risk assessment, which is expected to reach approximately 3,000 within a few years. In addition, the government intends to require personal exposure measurement using a personal sampler in addition to conventional working environment measurements based on area sampling. Along with these changes, the government also plans to essentially abolish the Ordinance on Prevention of Organic Solvent Poisoning , the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , and other ordinances that have been the cornerstones of Japanese industrial hygiene, although no definite date had been finalized as of January 2023. As mentioned earlier, these ordinances specify not only the measurement procedures for the substances concerned but also countermeasures against exposure to them. For example, when local exhaust ventilation (LEV) is applied to prevent exposure to regulated organic solvents, the current ordinance specifies the type of exhaust hood to be applied and the exhaust flow velocity.
Therefore, after these ordinances are abolished, users of organic solvents will be responsible for selecting exposure control methods, including LEV, at their own discretion. However, it may be difficult for most users to select appropriate control methods independently. Currently, the Ministry of Health, Labour and Welfare of Japan (MHLW) is preparing “Recommended Case Studies for Reducing Chemical Exposure” through the National Institute of Occupational Safety and Health, Japan (JNIOSH), which, once completed and released, will be of great benefit to the many industrial hygienists who are struggling with countermeasures against hazardous substances. One of the serious problems facing the Japanese industrial hygiene system in the near future will be the shortage of young, professionally trained industrial hygienists. In fact, three Japanese universities, namely Kitasato University, the University of Occupational and Environmental Health, Japan (UOEH), and Waseda University, offered specialized industrial hygiene courses until just a few years ago, but only the course at the School of Health Sciences, UOEH, remains today. Furthermore, even JNIOSH, which is supposed to be the national center of occupational safety and health research in Japan, appears likely to abolish its research branch on industrial ventilation within a few years. As such, the future of industrial hygiene in Japan will perhaps be directed not by experts from universities or public research institutes but primarily by engineers from ventilation or protective equipment manufacturers, or by publicly licensed professionals, such as certified consultants, occupational hygienists, industrial physicians, official health supervisors, and environmental measurement specialists, who are in charge of health and safety practices in the workplace.
Persistent post-COVID-19 dysosmia: Practices survey of members of the French National Union of Otorhinolaryngology-Head and Neck Surgery Specialists. CROSS analysis
f0f0a918-50bb-4c80-baee-3377b96cdc10
10080269
Otolaryngology[mh]
Introduction In France, in 2021, the National Health Authority defined “long COVID” as one or more symptoms persisting or appearing during the 3 months following the acute phase ( https://www.has-sante.fr/upload/docs/application/pdf/2021-11/symptomes_prolonges_a_la_suite_d_une_COVID_19_de_l_adulte_diagnostic_et_prise_en_charge.pdf ). In the international literature, rates of persistent subjective ortho- or retro-nasal olfactory impairment beyond 1 year after COVID-19 infection range between 15.1% and 25%. Between 26.5% and 46% of patients with long-COVID olfactory disease (LCOD) presented persistent quantitative and qualitative dysosmia, underestimated on interview alone, with respectively 12.5% to 24.7% parosmia and 0 to 22.9% phantosmia. For management of LCOD, the Health Authority guideline of February 10th, 2021 ( https://www.has-sante.fr/upload/docs/application/pdf/2021-11/fiche_troubles_du_gout_et_de_l_odorat.pdf ) and those of the French Society of Otorhinolaryngology (SFORL) of March 20th and 27th, 2020 ( https://www.sforl.org/wp-content/uploads/2020/03/Alerte-anosmie-COVID-19.pdf & https://www.sforl.org/wp-content/uploads/2020/03/AFR-SFORL-COVID-19-V2.pdf ) recommend MRI centered on the olfactory pathways, contraindicate oral or local corticosteroids for olfactory purposes, and recommend olfactory self-training. However, how these guidelines are implemented in the daily clinical practice of ear, nose and throat (ENT) doctors is not clear. The main aim of the present study was to assess the clinical and paraclinical diagnostic management and the drug and non-drug therapeutic management of LCOD in the daily clinical practice of members of the National Union of Otorhinolaryngology-Head and Neck Surgery Specialists ( Syndicat national des médecins spécialisés en ORL et chirurgie cervico-faciale ) (SNORL). Secondary aims comprised identification of relevant factors in terms of physicians’ sector of practice, age and experience, and of other etiologies of dysosmia. Material and methods The study implemented the CROSS methodology (Consensus-Based Checklist for Reporting of Survey Studies). A pre-questionnaire on GoogleForm® was sent to 10 ENT specialists in the Provence-Alpes-Côte-d’Azur region of France: 3 with expert level in olfaction, 7 with good level, all experienced in olfactometry. Their initial subjective feedback helped reformulate some questions and delete others, due to possible confusion or irrelevance. An anonymous questionnaire (see ) was composed on GoogleForm® and e-mailed to the 715 Union members in January 2022, Union membership constituting the only inclusion criterion; there were no non-inclusion or exclusion criteria. The response window was closed after 3 weeks and the web link was deactivated. The risk of anyone filling out the questionnaire more than once was limited by identifying any cases of identical responses. The questionnaire comprised 39 multiple-choice questions, open and closed questions and 1–5 Likert scales (“never” to “always”) on practices, and boxes for free comment.
In 5 pages, the main endpoints comprised one section on demographics (age, gender, type of practice, seniority, dysosmia etiologies treated), one on diagnostic practices in olfactory disorder (flexible nasal endoscopy, imaging, olfactometry and the characteristics of its implementation and interpretation, impact of national health insurance cover, and frequency and type of monitoring of olfactory recovery), and one on general olfaction and specific LCOD management (drug and non-drug, and olfactory self-training or training with a speech pathologist). The questionnaire is shown as Supplemental material. 2.1 Statistics To assess binary factors affecting responses, such as private versus public-sector/mixed practice, or with/without speech pathologist therapy, Chi² tests were used; for quantitative variables, such as number of patients or proportion of prescriptions, Mann-Whitney U-tests were used, as distributions were non-normal. The significance threshold was set at P ≤ 0.005, 2-tailed, with suggestiveness for 0.05 ≥ P > 0.005. Results 3.1 Population The response rate was 7.4% ( n = 53). All questionnaires were fully completed. There were no duplicates. Respondents were predominantly male (75.5%; n = 40), with a mean age of 56 ± 11 years (median = 58 years). Sixty-six percent ( n = 35) were in private practice, and the other 34% ( n = 18) either exercised in public-sector hospitals (9.5%; n = 5) or had mixed practice (24.5%; n = 13). All managed non-COVID-19 olfactory disorders (100%; n = 53) ( ) and post-COVID-19 olfactory disorders (94.3%; n = 50). Mean seniority was 26 ± 12 years (range, 3–61 years). 3.2 Diagnostic management of LCOD Nasal endoscopy was performed “frequently” (Likert 4; n = 4) or “systematically” (Likert 5; n = 42) in 86.8% of cases ( n = 46), whereas psychophysical olfactory tests were used “never” (Likert 1; n = 29) or “rarely” (Likert 2; n = 3) in 60.4% ( n = 32). In LCOD, olfactory pathway MRI was performed by 60.4% of respondents, compared to 83% ( n = 44) for non-COVID dysosmia. Only 1 of the 24 respondents who sometimes used psychophysical tests implemented a complete test: the full Sniffin’ Sticks Test® (SST). Just a subsection of the SST (short screening version limited to the identification section) was used in 29.2% of cases ( n = 7). Respondents using a psychophysical test were either “moderately satisfied” (Likert 3; n = 11) or “satisfied” (Likert 4; n = 2) with it, for various reasons ( ). Test time averaged 14 ± 9 minutes. According to most respondents (60.4%; n = 32), including those who did not actually use olfactometry, a screening test should take less than 5 minutes and a complete test less than 20 minutes (72.2%; n = 39).
Only 26.4% of respondents ( n = 14) were aware of the code for “olfactometry” in the French Common Classification of Medical Acts (GJQP001, meaning that it is not reimbursed under the national health insurance system): question 21a: “Do you know its reimbursement rate?” This lack of insurance cover was a reason for non-use for 67.9% of respondents ( n = 36): question 21b: “Does the insurance status limit its use for you?” Olfactory recovery was assessed every 2–4 months and/or every 6 months and/or annually by respectively 52.8% ( n = 28), 47.2% ( n = 25) and 17% ( n = 9) of respondents. Assessment used interview without visual analog scale (VAS), VAS, a psychophysical screening test or a full psychophysical test for respectively 83% ( n = 44), 13.2% ( n = 7), 22.6% ( n = 12) and 1.9% ( n = 1) of respondents. The concepts of “threshold”, “discrimination” and “olfactory identification” were poorly known (Likert 2; n = 14) or not at all (Likert 1; n = 18) by 60.4% ( n = 32). Concerning etiology, 73.6% ( n = 39) selected peripheral olfactory threshold impairment, 49.1% ( n = 26) central olfactory identification impairment, and 50.1% ( n = 27) mixed olfactory discrimination impairment. 3.3 Treatment Drug and non-drug treatments are detailed in . Nasal corticosteroids were administered by spray or oral route, but not by nebulization, aerosol or nasal flush. Concerning speech pathologists’ competencies, 35.8% of respondents ( n = 19) were aware that a speech pathologist can provide olfactory assessment and specific training, but they never (Likert 1; n = 11) or rarely (Likert 2; n = 2) prescribed this. When they did make such a prescription, it was for LCOD (100%; n = 8/8), or non-COVID post-viral (62.5%; n = 5/8), post-traumatic (25%; n = 2/8), neurodegenerative (37.5%; n = 3/8) or tumoral dysosmia (12.5%; n = 1/8). 63.2% of respondents ( n = 12/19) hesitated to prescribe olfactory training by a speech pathologist, because of lack of experience or knowledge of efficacy (47.4%; n = 9/19), or lack of an available speech pathologist or long waiting lists (15.8%; n = 3/19). 3.4 Correlation assessments Age and seniority were not associated with specific management or with the number of LCOD (mean, 20 ± 23 patients/year) or non-LCOD consultations (mean, 35 ± 33 patients/year) since the onset of the pandemic. Respondents working in public-sector hospitals or with mixed practice tended to have more LCOD consultations (mean, 30 ± 35 patients/year vs. 15 ± 12 patients/year in private practice; P = 0.07). Systematic psychophysical testing, knowledge of the concepts of threshold, discrimination and identification, and rate of olfactory monitoring tests were associated with no predictive factors. In non-COVID dysosmia, nasal corticosteroid spray for associated rhinitis or chronic rhinosinusitis was more frequently prescribed in private practice than in public hospitals (57% vs. 22.2%; P = 0.016), while oral corticosteroids were more frequently prescribed in hospitals than in private practice (83% vs. 48.6%; P = 0.014), and perhaps more often in northern France (72.4% vs. 45.8%; P = 0.049). In LCOD, results suggested more frequent prescription of olfactory training in private practice than in public hospitals (88.8% vs. 66.7%; P = 0.05).
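To make the statistical approach of Section 2.1 concrete, the sketch below runs the two kinds of tests described there on invented numbers; the contingency counts and patient figures are purely illustrative placeholders and are not the survey's raw data.

# Illustrative sketch of the tests described in the Statistics section.
# All numbers below are hypothetical placeholders, not the survey's raw data.
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi2 test on a hypothetical 2x2 table:
# rows = sector (private, public/mixed), columns = prescribes olfactory training (yes, no)
table = [[31, 4],
         [12, 6]]
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"Chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.3f}")

# Mann-Whitney U test on hypothetical numbers of LCOD consultations per year, by sector
private = [10, 15, 12, 20, 18, 9, 25, 14]
public_or_mixed = [28, 35, 60, 22, 41, 19]
U, p_u = mannwhitneyu(private, public_or_mixed, alternative="two-sided")
print(f"Mann-Whitney U = {U:.1f}, p = {p_u:.3f}")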
Discussion Since the outbreak of the COVID-19 pandemic, all Union members have been heavily involved with LCOD. These patients came on top of pre-existing dysosmia cases, with a 174% rise in dysosmia in the Provence-Alpes-Côte-d’Azur region according to a 3-year estimate combining literature data at 2 years, the cumulative number of positive cases at the time of writing ( https://COVIDtracker.fr/france/ ) and the natural prevalence of dysosmia, or around 880,000 extra patients, underlining the effort required of ENT specialists to face up to this increase in treatment demand. 4.1 Clinical and diagnostic management of LCOD In case of olfactory impairment, especially when COVID-19 is implicated, nasal endoscopy is an integral part of daily practice in ENT, to rule out differential or associated diagnoses. Brain MRI centered on the anterior skull base is recommended to rule out any differential diagnosis in non-COVID-19 dysosmia, but also in LCOD when olfaction loss is rated > 5 on VAS (French Health Authority guideline: see Introduction), especially as > 50% of patients show abnormalities in the olfactory bulbs or nerves on imaging. Psychophysical olfactory tests were not systematic, being performed by only 56% of respondents. This is in agreement with European surveys prior to the pandemic, which found a mean 50% rate of use and 10% routine use. These tests are, however, indispensable for characterizing olfactory disorder and preventing the harmful clinical impact of persistent dysosmia. Between 10% and 60% of the general population, in proportion to age, have dysosmia without being aware of it: anosmia in 0.3–22.2% of cases and hyposmia in 9.6–61.1%. Moreover, 50% of patients who are aware of their dysosmia tend to confuse it with taste disorder. Dysosmia assessment can no longer be restricted to subjective VAS rating or patient-reported questionnaires, which incur a risk of underestimation; systematic olfactometry is therefore now recommended by European and international scientific societies. Qualitative dysosmia comprises parosmia, phantosmia and olfactory perseverance, but only parosmia has a recent dedicated psychophysical test, the SSParoT, and this is very rarely used in daily clinical practice. Qualitative dysosmia requires targeted interview, as it is associated with the severity of olfactory complaints in LCOD and impairs quality of life.
There are a large number of psychophysical ortho-olfactory tests , but in France the SST is recommended, as normal values are known, it was validated in more than 9,000 European subjects and shows a test-retest reproducibility coefficient of r = 0.72 ). It includes complete evaluation of odor detection threshold, discrimination and identification: TDI score, for Threshold-Discrimination-Identification. These 3 concepts were unknown to more than a third of our respondents (∼35%), but shed light on lesion diagnosis, location and etiology and help determine the prognosis that can be explained to the patient . Better knowledge of how to interpret them is no doubt a key factor for the inclusion of olfactometry in consultation. Lack of robustness and precision was one of the respondents’ complaints, as shown in , indicating their unawareness or under-use of the recommended tests. The full SST takes 45 minutes, and much shorter tests based on the identification subtest can be used: Sniffin’ Sticks Test 15 items , 12 items or 3 items (quick or “Q-sticks”) , taking respectively 6, 4 and 1 min. Test time needs to be less than 15 minutes in daily clinical practice, posing a recurrent problem according to respondents (see ), and these short SST versions offer a good compromise between quantitative assessment and rapid testing, which is one of the main obstacles to everyday implementation. However, these short tests have limitations: lack of specificity in distinguishing hyposmia from anosmia, and difficult implementation in case of underlying pathology affecting the superior cognitive functions that are essential for olfactory identification. They are presently mainly used in follow-up or screening, as in LCOD, for which the 12-item SST was recently validated . At all events, abnormal values on an initial short test should be followed by a complete SST. There are other factors hindering use of olfactory tests: odor quality and realism diminish quickly over time, requiring regular sample replacement at prices that are still too high. The last obstacle to everyday implementation is economic. Lack of national health insurance cover in France was a hindrance for almost 70% of respondents and is presently undergoing a complex procedure to obtain revalorization. This seems indispensable, given the medical, personal and professional stakes involved in diagnosing persistent olfactory impairment and ENT specialists’ professional investment in front of growing demand from patients in distress, whose olfactory disability is still not officially recognized. 4.2 Treatment of LCOD Olfactory treatment is at present rather variable. Respondents mainly treated LCOD by nasal corticosteroid spray. To date, no drugs, and notably oral or local corticosteroids, have proven efficacy against isolated persistent olfactory impairment, whether non-COVID or LCOD . Only in case of rhinitis or chronic rhinosinusitis associated with persistent olfactory impairment can nasal corticosteroid spray or intercurrent oral corticosteroids be introduced or continued on non-COVID olfactory disorder, according to the usual indications . In LCOD, it is recommended only to continue nasal corticosteroid spray initiated for a pathology prior to the olfactory impairment. It is, however, likely that the unjustified prescriptions of local or oral corticosteroids shown in were due to increasing pressure from the patients themselves. Olfactory training is the only significantly effective treatment for persistent olfactory impairment. 
Still today, it is the only recommended approach for patients with persistent isolated olfactory impairment, and particularly for LCOD (French health authority: https://www.has-sante.fr/upload/docs/application/pdf/2021-11/fiche_troubles_du_gout_et_de_l_odorat.pdf ). It was prescribed by 60–80% of respondents, more often in private practice than in public hospitals, probably because patients were referred to the latter after failure of any previous olfactory training, in the hope of some novel treatment. The efficacy of olfactory self-training has been demonstrated in non-COVID dysosmia. It significantly improves psychophysical olfactory scores (TDI score), especially in post-viral etiologies. In LCOD, our recent studies highlighted the limitations of olfactory training in correcting persistent odor identification disorder, and agree with the health authority guidelines for olfactory training by a speech pathologist in case of persistent post-COVID sensory disorder, including taste and smell. Speech pathologists regularly treat olfaction (e.g., after total laryngectomy) or include olfaction in rehabilitation protocols (e.g., swallowing disorder in the elderly, neonatology, memory disorder). The interest of speech pathologists’ therapy in persistent isolated olfactory disorder, apart from ensuring adherence during the session itself, lies in stimulating the central olfactory areas via other sensory inputs (memory, vision, taste, touch), all interconnected in a multisensory semantic network, which constitutes a real individual mental encyclopedia, all within a 20–30 minute session. The speech pathologist can also associate treatment of other cognitive pathologies in case of central involvement (neurodegenerative disease, post-traumatic sequelae) or neurocognitive sequelae of long COVID. Only 13% of respondents prescribed speech pathologist therapy for LCOD, which may have been due to simple lack of awareness of these competencies in a third of cases, but was also due to doubts about efficacy and the availability of such therapists. The place of speech pathologist therapy in the olfactory armamentarium is justified and highlighted by the French health authority, but its range of efficacy and indications need clarifying by robust studies, to support such prescription and also to train ever more speech pathologists in dealing with olfactory disorders so as to reduce wait times. The present study had certain limitations. Firstly, the response rate was low, at around 7% of Union members, and not representative of the national ENT population, the average respondent being in private practice, aged over 50 years, with more than 26 years’ seniority, which overlooked younger physicians with more recent practice, who account for a majority of ENT physicians according to the 2021 report of the DREES (Directorate of Research, Studies, Assessment and Statistics; Direction de la Recherche, des Études, de l'Évaluation et des Statistiques ; https://drees.solidarites-sante.gouv.fr/sites/default/files/2021-03/DD76.pdf ) and whose training and practices were thus not assessed here. Digital overload and the pressure of daily work may have limited response, as did the low Union membership of French ENT physicians, estimated at 3,000 in January 2012 according to the DREES. Secondly, only an a posteriori check on duplicates (of which there were none in the present survey) was performed to ensure against multiple response.
Lack of information on respondents’ olfactory expertise was also a bias, in that lack of expertise can lead to marginal practices.
Conclusion Members of the French National Union of Otorhinolaryngology-Head and Neck Surgery Specialists (Syndicat national des médecins spécialisés en ORL et chirurgie cervico-faciale: SNORL) are particularly involved in LCOD. Clinical and olfactory assessment, however, is sadly insufficient, for numerous reasons. Although psychophysical olfactory tests with known normal values, validation and proven reproducibility were available, test time, difficulties of interpretation and lack of cover by the public authorities hindered the spread of good olfactory diagnostic practice and management of LCOD, for which demand is increasing. Treatment is presently based on olfactory training; speech pathologists are insufficiently involved as yet and may lack knowledge of the literature. The authors received no specific funding. The authors declare that they have no competing interest.
Dental care and oral conditions are associated with the prevalence of sarcopenia in people with type 2 diabetes: a cross-sectional study
60513cbb-7c8d-4c79-9bbf-6c82fddb95d1
10080754
Dental[mh]
The population of people with type 2 diabetes mellitus (T2DM) is increasing worldwide . T2DM is a chronic disease characterized by hyperglycemia due to insulin resistance (IR). In IR states, insulin-stimulated glucose disposal is severely impaired in skeletal muscle . IR therefore promotes loss of muscle mass. Sarcopenia, defined as the age-related loss of muscle strength, mass and function, has been associated with cardiovascular disease (CVD) and low quality of life . In addition, sarcopenia is a known risk factor for mortality . People with T2DM have been reported to have a 1.55-to-3-fold higher risk of sarcopenia than the general population, since IR is common in T2DM . Sarcopenia in people with T2DM therefore requires more attention than in individuals without diabetes. People with T2DM have a higher risk of periodontal disease than those without . The severity of periodontal disease has been related to glucose tolerance status, the development of glucose intolerance and glycosylated hemoglobin (HbA1c) levels . Furthermore, the severity of periodontal disease affects inflammation and IR . Infection with Porphyromonas gingivalis , which causes periodontal disease, is a risk factor for metabolic syndrome and for skeletal muscle metabolic dysfunction via gut microbiome alteration . Furthermore, toothbrushing behavior has been associated with smaller increments in the number of teeth with periodontal pocketing . It is therefore important for people with diabetes to have a family dentist and to visit their dentist regularly. Chewing is a process that includes taking in, crushing and mixing food, forming a bolus, and delivering that bolus to the pharynx, and it greatly affects food intake . Chewing ability has been shown to be associated with sarcopenia in the general population . In addition, several studies have reported relationships between chewing ability and muscle strength , physical performance and all-cause mortality . Moreover, we previously reported that low tongue pressure was related to the presence of sarcopenia . On the other hand, poor chewing ability has been associated with the use of removable dentures , and the use of complete dentures has been shown to be related to low handgrip strength . However, previous studies have not examined the relationship between dental care and oral conditions, such as having a family dentist, toothbrushing behavior, chewing ability or use of complete dentures, and the presence of sarcopenia in people with T2DM. Therefore, this cross-sectional study investigated the association between dental care and oral conditions and sarcopenia in people with T2DM.
Study design, setting, and participants
The KAMOGAWA-DM cohort study is an ongoing cohort study of people with diabetes mellitus, initiated in 2014 to understand the natural history of the disease . It includes outpatients at the Department of Endocrinology and Metabolism, Kyoto Prefectural University of Medicine Hospital (Kyoto, Japan). The present study was approved by the Research Ethics Committee of Kyoto Prefectural University of Medicine (No. RBMR-E-466-6) and was conducted in accordance with the principles of the Declaration of Helsinki. After obtaining written informed consent, medical data were anonymously collected and compiled into a database.
This study included people with T2DM who responded to questionnaires about dental care and oral conditions between March 2015 and April 2021 and agreed to participate in the KAMOGAWA-DM cohort study. The exclusion criteria were: 1) no data on body composition and 2) no data on handgrip strength.
Questionnaire about lifestyle characteristics and chewing ability
Family history of diabetes, duration of diabetes, smoking status, exercise habit and alcohol consumption habit were assessed using a standardized questionnaire. Based on the responses, "exercise habit" was defined as carrying out any type of physical activity once or more per week, "smoking habit" as currently smoking cigarettes or another tobacco product, and "alcohol consumption habit" as daily alcohol consumption.
Dental care and oral condition questionnaire
Participants were grouped into those who had a family dentist and those who did not. The frequency of toothbrushing was recorded as how often they brushed their teeth per day: none, sometimes, once, twice, three times, four times, or five times or more. We defined toothbrushing behavior as brushing the teeth at least twice per day . Chewing ability was evaluated by the following statements: "I can chew and eat anything," "There are some foods I cannot chew," "There are many foods I cannot chew," or "I cannot eat with chewing." In this study, "I can chew and eat anything" was defined as good chewing ability, and the other three responses were defined as poor chewing ability . Participants were also grouped according to whether or not they used complete dentures.
Participants' data
After an overnight fast, venous blood samples were collected to measure the concentrations of fasting plasma glucose, high-density lipoprotein cholesterol, triglycerides, uric acid and creatinine. Glycosylated hemoglobin (HbA1c) was measured by high-performance liquid chromatography and expressed in National Glycohemoglobin Standardization Program units. The estimated glomerular filtration rate (eGFR; mL/min/1.73 m²) was calculated as eGFR = 194 × serum creatinine^(−1.094) × age^(−0.287) (× 0.739 for women) . Blood pressure was measured automatically with an automatic device (HEM-906; OMRON, Kyoto, Japan) after 5 min of rest in a quiet room. Handgrip strength was tested twice for each hand with a handgrip dynamometer (Smedley, Takei Scientific Instruments Co., Ltd., Niigata, Japan), and the maximum value was used for analysis. Body composition was assessed using a multifrequency impedance body composition analyzer, InBody 720 (InBody Japan, Tokyo, Japan), which has been shown to correlate well with dual-energy X-ray absorptiometry . Using this analyzer, body weight (BW, kg) and appendicular muscle mass (kg) were determined, and body mass index (BMI, kg/m²) and skeletal muscle mass index (SMI, kg/m²) were calculated as BMI = BW (kg)/height squared (m²) and SMI = appendicular muscle mass (kg)/height squared (m²), respectively. Data on medications, including glucagon-like peptide-1 agonists, insulin, sodium-glucose cotransporter-2 inhibitors, metformin, dipeptidyl peptidase-4 inhibitors, sulfonylureas, thiazolidinediones, glinides, α-glucosidase inhibitors and antihypertensive drugs, were obtained from medical records.
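To make the derived measures above concrete, here is a minimal sketch in Python of the calculations as described in the text; the function and variable names are ours, not from the study, and the eGFR equation is the Japanese formula cited above with its exponents written explicitly.

```python
def egfr_japanese(creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) by the Japanese equation used in the study:
    eGFR = 194 * Cr^(-1.094) * age^(-0.287), multiplied by 0.739 for women."""
    egfr = 194.0 * (creatinine_mg_dl ** -1.094) * (age_years ** -0.287)
    return egfr * 0.739 if female else egfr

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index (kg/m^2) = body weight / height squared."""
    return weight_kg / height_m ** 2

def smi(appendicular_muscle_mass_kg: float, height_m: float) -> float:
    """Skeletal muscle mass index (kg/m^2) = appendicular muscle mass / height squared."""
    return appendicular_muscle_mass_kg / height_m ** 2

# Illustrative values only, not study data.
print(round(egfr_japanese(0.8, 69, female=True), 1))  # eGFR for a 69-year-old woman, Cr 0.8 mg/dL
print(round(smi(16.5, 1.55), 2))                      # SMI for 16.5 kg appendicular muscle, height 1.55 m
```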
Having hypertension was defined as use of antihypertensive drugs, systolic blood pressure ≥ 140 mmHg, and/or diastolic blood pressure ≥ 90 mmHg.
Definition of sarcopenia
Sarcopenia was defined according to the Asian Working Group for Sarcopenia guidelines, using SMI and handgrip strength . People who had both low muscle strength, defined as handgrip strength < 28 kg for men and < 18 kg for women, and low skeletal muscle mass, defined as SMI < 7.0 kg/m² for men and < 5.7 kg/m² for women, were diagnosed with sarcopenia .
Statistical analyses
Data are presented as frequencies for categorical variables or as means (standard deviation [SD]) for continuous variables. Participants were dichotomized separately according to having a family dentist, toothbrushing behavior, chewing ability and use of complete dentures. Differences in continuous and categorical variables were evaluated using Student's t-test and the chi-square test, respectively. Logistic regression analyses were run to determine the odds ratio (OR) and 95% confidence interval (CI) for the presence of sarcopenia in relation to having a family dentist, toothbrushing behavior, chewing ability and use of complete dentures, adjusting for age, sex, smoking habit and exercise habit. Statistical analyses were conducted using EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan) , a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). Differences were considered statistically significant at p values < 0.05.
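As a minimal sketch of how the sarcopenia definition and the adjusted analysis could be operationalised: the study itself used EZR (an R front end), so the Python/statsmodels call below is only an equivalent illustration, and the data-frame column names (sarcopenia, no_family_dentist, age, male, smoker, exercise) are our assumptions rather than the study's variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def has_sarcopenia(handgrip_kg: float, smi_kg_m2: float, male: bool) -> bool:
    """AWGS definition used in the study: low handgrip strength AND low skeletal muscle mass."""
    low_strength = handgrip_kg < (28.0 if male else 18.0)
    low_mass = smi_kg_m2 < (7.0 if male else 5.7)
    return low_strength and low_mass

def adjusted_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Logistic model of sarcopenia on one exposure, adjusted for age, sex, smoking and exercise.
    Returns odds ratios with 95% confidence intervals (exponentiated coefficients)."""
    fit = smf.logit("sarcopenia ~ no_family_dentist + age + male + smoker + exercise",
                    data=df).fit(disp=False)
    table = pd.concat([fit.params, fit.conf_int()], axis=1)
    table.columns = ["log_or", "ci_low", "ci_high"]
    return np.exp(table)  # values on the odds-ratio scale
```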
A total of 304 individuals with T2DM were initially enrolled. We excluded 38 people: 26 who did not undergo the multifrequency impedance body composition analysis and 12 who did not undergo measurement of handgrip strength; thus, 266 people (162 men and 104 women) were included in this study (shown in Fig. ). The clinical characteristics of the study participants are summarized in Table . Mean age, BMI, SMI and handgrip strength were 69.1 ± 8.7 years, 23.6 ± 3.9 kg/m², 6.9 ± 1.1 kg/m² and 26.4 ± 8.5 kg, respectively. The proportion of sarcopenia was 18.0% (n = 48), and the proportions of participants not having a family dentist, without toothbrushing behavior, with poor chewing ability and using complete dentures were 30.5% (n = 81), 33.1% (n = 88), 25.2% (n = 67) and 14.3% (n = 38), respectively. Metformin and dipeptidyl peptidase-4 inhibitors were used by 43.6% (n = 116) and 35.3% (n = 94) of participants, respectively. Table shows the clinical characteristics of the participants according to dental care and oral condition. The proportion of sarcopenia was higher in people without a family dentist than in those with one (27.2% vs. 14.1%, p = 0.017), higher in those with poor chewing ability than in those with good chewing ability (26.9% vs. 15.1%, p = 0.047), and higher in those using complete dentures than in those not using them (36.8% vs. 14.9%, p = 0.002). The proportion of sarcopenia in people without toothbrushing behavior tended to be higher than in people with toothbrushing behavior, although the difference was not statistically significant (25.0% vs. 14.6%, p = 0.057). Among people using complete dentures, the proportions of not having a family dentist (52.6% vs. 26.8%, p = 0.003), of no toothbrushing behavior (55.3% vs. 20.2%, p < 0.001) and of poor chewing ability (55.3% vs. 20.2%, p < 0.001) were higher than among those not using them. Furthermore, not having a family dentist (adjusted OR, 2.48 [95% CI: 1.21–5.09], p = 0.013), poor chewing ability (adjusted OR, 2.12 [95% CI: 1.01–4.46], p = 0.048) and use of complete dentures (adjusted OR, 2.38 [95% CI: 1.01–5.99], p = 0.046) were related to the presence of sarcopenia. The absence of toothbrushing behavior was associated with the presence of sarcopenia (unadjusted OR, 1.95 [95% CI: 1.03–3.68], p = 0.040), although the association was no longer statistically significant after adjusting for covariates (adjusted OR, 1.71 [95% CI: 0.81–3.59], p = 0.157) (Table ).
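To illustrate how an odds ratio relates to the proportions reported above, the short sketch below back-calculates an unadjusted OR for the family-dentist comparison from the published percentages. The counts are reconstructed and rounded, so the result is approximate and is not a value reported by the study, which presents adjusted ORs.

```python
import math

# Counts reconstructed (approximately) from the reported percentages:
# 81 of 266 participants had no family dentist (30.5%); sarcopenia in 27.2% of them (~22)
# and in 14.1% of the 185 with a family dentist (~26). Treat these as illustrative only.
a, b = 22, 81 - 22    # sarcopenia / no sarcopenia among those without a family dentist
c, d = 26, 185 - 26   # sarcopenia / no sarcopenia among those with a family dentist

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)       # Wald standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"unadjusted OR ≈ {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# Roughly 2.3 (1.2 to 4.3) with these reconstructed counts, in the same direction as
# the adjusted OR of 2.48 reported in the study.
```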
The present study is the first to investigate the relationship between dental care and oral conditions, such as having a family dentist, toothbrushing behavior, chewing ability or use of complete dentures, and the prevalence of sarcopenia in people with T2DM. The results showed that not having a family dentist, poor chewing ability and use of complete dentures were associated with a higher prevalence of sarcopenia. Possible explanations for these associations are as follows. Periodontal disease severity affects chronic inflammation and IR . Chronic inflammation that occurs in response to the many kinds of bacterial communities in the subgingival region is a feature of periodontal disease . Although this chronic inflammation occurs locally in the oral cavity, inflammatory mediators produced by periodontitis, as well as the bacteria themselves, can spread from the oral cavity and cause various diseases elsewhere . Inflammatory cytokines, such as tumor necrosis factor-α (TNF-α), can trigger IR , and epidemiological studies have also reported that inflammation is an independent risk factor for both IR and T2DM . IR, in turn, has been shown to be a cause of sarcopenia . Furthermore, periodontal disease is recognized as a risk factor for metabolic dysfunction of skeletal muscle . In this study, the proportion of sarcopenia in people who had a family dentist was lower than that in people who did not. This suggests that having a family dentist and maintaining good oral health may reduce IR and prevent sarcopenia, although the presence or absence of periodontal disease was not evaluated. Furthermore, toothbrushing is considered a prerequisite for maintaining good oral health and preventing periodontal disease . In this study, the proportion of sarcopenia in people without toothbrushing behavior was higher than that in those with toothbrushing behavior, although the result of the multivariate analysis was not statistically significant. A previous study showed that toothbrushing behavior was related to handgrip strength . Porphyromonas gingivalis , a periodontitis-causing bacterium, impairs glucose uptake in skeletal muscle in association with alterations of the gut microbiota . In this study, toothbrushing behavior was associated with the presence of low muscle strength. Although further research is needed, toothbrushing may prevent sarcopenia because it protects against the development of periodontal disease. In addition, maintaining good oral health prevents oral frailty. Oral frailty, now recognized as the accumulation of poor oral function and conditions, has been reported to be associated with the risk of incident mortality, malnutrition, dysphagia, physical frailty and need for long-term care, and it leads to poor chewing ability . Previous studies have reported relationships between chewing ability and handgrip strength or general function . Furthermore, chewing ability has been found to be related to sarcopenia in the general population , and poor chewing ability is a known risk factor for malnutrition . In this study, the prevalence of sarcopenia in people with poor chewing ability was higher than in those with good chewing ability. Therefore, maintaining good chewing ability may prevent sarcopenia. A previous study showed that the use of complete dentures is associated with low handgrip strength . In this study, the use of complete dentures was related to the presence of sarcopenia. People who use complete dentures often have denture stomatitis, a common inflammatory disease affecting the mucosa under complete dentures, and the progression of untreated denture stomatitis may cause systemic infection . Oral infections increase the levels of interleukin-6 and TNF-α receptors , which are associated with inflammation. However, this study had certain limitations. First, the data on dental care and oral health status were based on self-reporting, which raises some concerns about their accuracy.
Second, the presence or absence of periodontal disease and denture stomatitis were not evaluated. Finally, the design of this study was cross-sectional in nature. Thus, the causal relationship between dental care and oral condition, such as having a family dentist, toothbrushing behavior, chewing ability, or use of complete dentures, and the prevalence of sarcopenia is unclear. Moreover, having a family dentist, toothbrushing behavior, chewing ability, and use of complete dentures may affect each other. This study identified that not having a family dentist, poor chewing ability, and use of complete dentures were related to a higher prevalence of sarcopenia in people with T2DM. Clinicians should pay attention to the dental care and oral conditions of individuals with T2DM to prevent sarcopenia.
The effectiveness of clinical guideline implementation strategies in oncology—a systematic review
e5b90054-f9bf-4fc8-9673-d64877036950
10080872
Internal Medicine[mh]
Question: What are the most effective clinical guideline implementation strategies in oncology?
Findings: The nine included studies assessed multi-component guideline implementation interventions compared to no intervention. Educational meetings combined with materials, opinion leaders, audit and feedback, a tailored intervention or academic detailing may have little to no effect on overall survival, quality of life and adverse events of cancer patients compared to no intervention; however, the evidence is either uncertain or very uncertain. Multi-component interventions may increase or slightly increase guideline adherence regarding the screening, referral and prescribing behaviour of healthcare professionals, but the certainty in the evidence is low. The interventions may have little to no effect on the attitudes and knowledge of healthcare professionals; still, the evidence is very uncertain.
Meaning: This systematic review gives an overview of recent strategies used for guideline implementation in oncology in order to inform policymakers and professional organisations on the development and adoption of implementation strategies.
Clinical practice guidelines (CPGs) are a powerful tool of evidence-based medicine, designed to mitigate the gap between clinical research and current practice . It has been shown that non-adherence to guidelines may lead to unnecessary diagnostics and suboptimal treatment . On the contrary, a systematic review concluded that adherence to breast cancer guidelines was associated with increased overall survival and disease-free survival . The implementation of CPGs in oncology is considered to be very complex and therefore challenging due to the heterogeneity of cancer types, the high number of CPGs of different methodologies, inconsistent use of guideline-based quality indicators, the complexity of therapeutic decisions, and the various influences of the multiple interconnected clinical specialties involved in this setting . This may lead to inconsistencies and to patient and practitioner confusion due to information overload . Also, the heterogeneity in structure, target groups and endpoints addressed in guidelines may be a challenge for implementation, as discovered by comparing nine oncological CPGs of well-known organisations on advanced breast, lung and colon cancer . Due to these barriers, recommendations may not be adequately applied in practice and patients may not benefit from evidence-based research. The uptake of CPGs in practice is reported as being unpredictable and slow . It has been estimated that approximately 30–50% of patients receive treatment that is not evidence-based, and 20–25% receive unnecessary or even potentially harmful treatments . For example, a US study concluded that guideline-discordant imaging is common: almost half of men with low-risk localised prostate cancer receive unnecessary imaging, while imaging is underused among men with high-risk disease . Furthermore, it was shown that nurses' failure to routinely screen for and implement appropriate cancer pain management has an adverse impact on health-related quality of life . Moreover, another study showed that urgent referral guideline recommendations were not followed for the majority of patients with common possible cancer features in the UK . Consequently, the development of methodologically rigorous CPGs alone does not automatically result in their use.
In order to improve patient outcomes and decrease variations in current oncological practice, it is important to identify and assess optimal strategies for the implementation of CPGs . Various implementation strategies have been tested over the years. These strategies can be used alone as single-component strategies or in combination as multi-component interventions to facilitate the use of CPGs in clinical practice. The dissemination of printed educational materials has been considered an accessible, convenient and potentially cost-effective intervention across healthcare settings . It was shown that, used alone and compared to no intervention, it may have a small beneficial effect on professional practice outcomes. The effect of opinion leaders was examined in a recent Cochrane review, which concluded that, used alone or in combination with other implementation strategies, they probably improve professionals' compliance with evidence-based practice . Further, reminders (manually and computer-generated) were shown to probably improve the quality of care compared to usual care or other co-interventions . Moreover, it was shown that audit and feedback lead to small but potentially important improvements in professional practice. The effectiveness of guideline implementation strategies seems to depend on how the feedback is provided and on the baseline performance of professionals . The systematic review of Grimshaw 2004 found that 73% of the included studies examined multi-component interventions, and the most effective single strategies were reminders, dissemination of educational materials, and audit and feedback . In the hospital setting of emergency departments, reminders alone or educational interventions combined with audit and feedback were likely to be effective in improving guideline adherence . In the care of chronic diseases in the primary care setting, passively receiving educational materials was least effective compared to educational meetings involving the active participation of professionals . Multi-component interventions were slightly more effective compared to single interventions . Still, although all these reviews assessed changes in healthcare provider behaviours, it remained rather uncertain whether the interventions really led to improved patient outcomes. One review concluded that reminders and feedback as single interventions, and group education and organisational strategies used as part of a multi-component intervention, corresponded with positive changes in professionals' behaviour and patient outcomes in the oncological setting . Still, this review relies mostly on studies published more than ten years ago. Moreover, the research findings from Grimshaw and Hakkennes and Dodd serve as a foundation for understanding CPG implementation strategies among professionals, yet they rely on papers published almost 20 years ago and are not specific to oncology. Other more recent reviews assess the effectiveness of implementation strategies but do not particularly focus on oncology . Despite the current interest in CPGs and innovative methods to promote knowledge transfer into practice, a surprisingly high uncertainty about the effectiveness of guideline implementation strategies in oncology remains.
This systematic review aims to fill the gap regarding the synthesis of the effectiveness of recent guideline implementation strategies on patient-relevant outcomes and guideline adherence of healthcare professionals in oncological settings. This systematic review was performed according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement . The PICO (Population, Intervention, Comparison, Outcome) framework was used to guide the eligibility criteria of this review (Table ). To identify the most recent studies, a comprehensive electronic literature search for studies published from 2011 onwards was performed. The following electronic databases were searched on 16 December 2022: PubMed, Web of Science, GIN, CENTRAL and CINAHL. The search strategies were optimised with the assistance of an experienced information specialist (IM). Search filters were used to identify papers with study designs of interest (e.g. a filter for RCTs). The search strategies included keywords such as clinical practice guideline, implementation, survival, adherence, behaviour, health professionals, patients, oncology . The full search strategies can be found in Additional file in the Supplement of this review. Screening and selection of studies were performed independently by two reviewers (AB, VP, NK). Reference lists of all eligible studies and relevant systematic reviews were hand searched by one author (AB) for additional eligible studies. Only prospectively registered controlled studies (e.g. (cluster) randomized controlled trials, controlled pre-post trial designs) were included in this review. The risk of bias of each included study was independently assessed by pairs of two reviewers (AB, VP, AW, NK). Disagreements were resolved by discussion and consensus, and a third reviewer (NS) was involved when consensus was not reached. The Cochrane risk of bias tool was used for the quality assessment of RCTs, and the ROBINS tool for non-randomized studies . Overall, a study was judged to have a high risk of bias if at least one bias domain was judged to be at high risk. Moreover, the certainty in the evidence was rated for all outcomes using the GRADE tool and recommendations . It was judged by one reviewer (AB) and checked by two other reviewers (NK, NS). The assessment was separated according to the study design.
A total of 1326 records were identified through electronic database searching. After identifying nine additional records through hand-searching references and removing fifteen duplicate records, a total of 1320 records were included in the title and abstract screening. The detailed study selection process is described in the PRISMA flow diagram (Fig. ). Furthermore, the list of excluded studies at the full-text screening stage can be found in the Supplement (Additional file ). A total of nine studies published between 2017 and 2022, five cluster RCTs and four controlled NRSIs with before-and-after study designs , were included in the synthesis. Further, six randomized ongoing trials were identified, and seven studies were categorised as awaiting classification due to unpublished results, conference abstracts with insufficient information, or inaccessible full texts . The nine included studies assessed multi-component guideline implementation interventions compared to no intervention in 3577 cancer patients and more than 450 oncologists, nurses and medical staff .
The most frequently used strategies, applied in all nine interventions, were educational meetings and educational materials (Table ). Population characteristics of the included studies, a detailed description of the interventions, the outcomes, and the characteristics of ongoing and awaiting-classification studies can be found in the Supplement (Additional files ) .
Risk of bias in randomized studies
Overall, the risk of bias was rated as high for all five cluster RCTs due to the lack of blinding of outcome assessors and participants, which affected both objective and subjective outcomes (Fig. ). Additionally, the detailed risk of bias judgement table and the risk of bias summary plot are listed in Additional files in the Supplement.
Risk of bias in non-randomized studies
For both objective and subjective outcomes, the overall risk of bias was judged to be serious in three studies (studies with some important problems) and critical in one study (a study too problematic to provide any useful evidence, which should not be included in the synthesis) (Fig. ). This was due to the lack of comparability between groups and lack of control for confounders, and to bias in the measurement and reporting of outcomes. Additionally, the detailed risk of bias judgement table and the risk of bias summary plot are listed in Additional files in the Supplement.
Effects on primary (patient-level) and secondary (provider-level) outcomes
The effects on the primary and secondary outcomes of this review are summarised in the outcome effect tables in Additional file in the Supplement of this review.
Patient-level outcomes
Overall survival: Two cluster RCTs reported overall survival in 865 cancer patients . Both studies suggested little to no difference in effects (HR 1.05, 95% CI: 0.85 to 1.29, p = 0.68; RR 0.946, 95% CI: 0.895 to 1.228, p = 0.813; eTable ) . Overall, the implementation interventions may have little to no effect on overall survival; still, this is uncertain due to the serious risk of bias and imprecision of the outcome measurement.
Quality of life and patient-reported outcomes: One cluster RCT reported pain scores for 544 cancer patients . The intervention combining opinion leaders with educational meetings, materials and audit and feedback may have little to no effect on pain scores and on total quality of life QLQ-C15-PAL scores measured at different follow-up times (eTable ). The certainty in the evidence is low due to serious limitations in the study design and imprecision. Two NRSIs reported quality of life as different symptoms measured with different scales in 472 patients . The time points reported in both studies were not clearly described. Measured on a scale from 0 (no pain) to 10 (worst pain), the results of Cowperthwaite 2019 suggested little to no difference between intervention and comparator at T1 (MD 0.090, 95% CI: -0.6131 to 0.7931, p = 0.8013; eTable ) and a small effect at T2 (MD 0.210, 95% CI: -0.3477 to 0.7677, p = 0.4594; eTable ) on pain intensity . Still, the evidence is very uncertain. The results of Knoerl 2019 (eTable ) suggested effects in favour of the intervention on CIPN sensory severity (at T1, T2, T3) and motor severity (at T1, T3) on a Likert scale from 1 to 4 ; however, the certainty in the evidence is very uncertain.
Overall, the effect of the implementation interventions compared to no intervention on quality of life, as assessed in the non-randomized studies, is very uncertain due to the very serious risk of bias, imprecision, and serious indirectness of the outcome measurement.
Adverse events: Two cluster RCTs reported adverse events in 865 cancer patients . Gilbert 2021 suggested an effect in favour of the comparator regarding the proportion of patients having at least one adverse event or one postsurgical complication (eTable ); still, the evidence is very uncertain . The results of Mohile 2021 suggested an effect in favour of the intervention at 3 months in terms of grade 3–5 adverse events (adjusted RR 0.74, 95% CI: 0.64 to 0.86, p = 0.001; eTable ) . Overall, the implementation interventions may have little to no effect on adverse events; still, the evidence is very uncertain due to the serious risk of bias, imprecision and inconsistency of the outcome results.
Provider-level outcomes
Screening: Two cluster RCTs reported screening for 454 cancer patients . Both studies combined educational meetings with educational materials and academic detailing; one study additionally combined opinion leaders with audit and feedback. Both suggested an effect in favour of the intervention (OR 348.82, 95% CI 69.31 to 1755.62, p < 0.0001; RR = 5.29, 95% CI 3.03 to 9.23, p < 0.0001; eTable ). Overall, the multi-component interventions implemented in these studies may increase adherence to guidelines regarding screening rates; still, the certainty of the evidence is low due to serious limitations in the study design and imprecision of results.
Referral: Two cluster RCTs reported referrals for 1219 patients . The results of Brown 2018 suggested little to no difference between the intervention (educational meetings, materials, audit and feedback, opinion leaders, a tailored intervention) and no intervention on referrals (RR 1.0474, 95% CI: 0.8631 to 1.2711, p = 0.6389; eTable ) . The results of McCarter 2018 suggested an effect in favour of almost the same intervention (using academic detailing instead of a tailored intervention) on referral rates (OR 37.70, 95% CI: 0.93 to 1530, p = 0.0537; eTable ) . Overall, the multi-component interventions implemented in these studies may slightly increase guideline-concordant referrals; still, the certainty of the evidence is low due to serious limitations in the study design and serious imprecision.
Prescribing behaviour: One cluster RCT reported this outcome for 718 cancer patients . Combining educational materials and meetings for guideline implementation may slightly increase adherence to guidelines regarding prescribing behaviour compared to no intervention (eTable ). However, the certainty in the evidence is low due to serious limitations in the study design and serious imprecision. Two NRSIs reported prescribing behaviour . The results of Bonkowski 2018 suggested effects in favour of the intervention on narcotic administration of one and three doses. Further, the evidence suggested a little effect in one item (MD -0.440, 95% CI: -0.0867 to 0.7933, p = 0.0157; eTable ) and no difference in another item (MD 0.000, 95% CI: -0.3354 to 0.3354, p = 1.000; eTable ) .
Knoerl 2021 suggested an effect in favour of the intervention on the frequency of appropriate mild CIPN management (OR = 2.5278, 95% CI: 0.8356 to 7.6471, p = 0.1006; eTable ), whereas the frequency of appropriate moderate-severe CIPN management (OR = 0.8571, 95% CI: 0.2463 to 2.9827, p = 0.8086; eTable ) was lower after the intervention . Overall, the interventions may have little to no effect on this outcome, but the evidence from non-randomized studies is very uncertain due to very serious limitations in the study design and serious imprecision.
Attitudes: Six included studies reported this outcome using different measurements (eTable ) . One RCT and one NRSI reported attitudes combined with knowledge . The other RCT reported that the majority of dieticians indicated that the implementation intervention was helpful or very helpful . In one NRSI it was narratively reported that nurses were highly satisfied with the intervention . Knoerl 2021 reported acceptability and feasibility scores only for the intervention group . The results of Phillips 2017 suggested lower self-perceived knowledge directly after the intervention (MD 1.9, 95% CI: 0.5 to 3.4, p = 0.012; eTable ) and an effect in favour of the intervention at ten weeks after (MD -1.4, 95% CI: -1.9 to -1.0, p < 0.001; eTable ) . Overall, the interventions may have little to no effect on attitudes, but the evidence is very uncertain due to serious (for RCTs) and very serious (for NRSIs) limitations in the study design, serious indirectness and very serious imprecision of results.
Knowledge: One RCT and one NRSI reported knowledge combined with attitudes scores . The results of Phillips 2017 for the perceived knowledge and assessment tool suggested an effect in favour of the intervention measured directly after (MD -1.3, 95% CI: -2.1 to 0.6, p < 0.001 and MD -3.6, 95% CI: -0.5 to 2.2, p < 0.001; eTable ) and at ten weeks after the intervention (MD -1.7, 95% CI: -2.2 to 1.1, p < 0.001 and MD -3.6, 95% CI: -0.5 to 2.2, p < 0.001; eTable ), whereas little to no difference was suggested between these time points (MD -0.3, 95% CI: -1.1 to 0.4 and MD 0.00, 95% CI: -0.7 to 0.7; eTable ) . Overall, the interventions may have little to no effect on knowledge, but the evidence is very uncertain due to serious (for RCTs) and extremely serious (for NRSIs) limitations in the study design, serious indirectness and very serious imprecision.
Certainty in the evidence: Table provides the GRADE Evidence Profile including the detailed judgement of the certainty and the narrative summary of findings. Overall, the certainty in the evidence was judged to be low for overall survival, quality of life (evidence from RCTs), screening, referrals and prescribing behaviour (evidence from RCTs), and very low for all other outcomes (Table ). The most frequent reasons for downgrading the certainty were limitations in the study design, indirectness and imprecision of the results (Table ).
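The repeated "low" and "very low" certainty ratings above follow the usual GRADE bookkeeping: randomized evidence starts at high certainty and non-randomized evidence (in the classic GRADE approach) at low, and each serious concern moves the rating down one level, or two levels if very serious. The sketch below is our simplification of that logic for illustration only; it is not code used by the review, and the domain names are the standard GRADE domains.

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized: bool, concerns: dict[str, int]) -> str:
    """Start at 'high' for randomized evidence and 'low' for non-randomized evidence,
    then downgrade by 1 level per serious concern and 2 per very serious concern.
    `concerns` maps a GRADE domain name to 0 (none), 1 (serious) or 2 (very serious)."""
    level = 3 if randomized else 1
    level -= sum(concerns.values())
    return LEVELS[max(level, 0)]

# Example mirroring the pattern reported for overall survival (RCT evidence with
# serious risk of bias and serious imprecision) -> "low"
print(grade_certainty(True, {"risk_of_bias": 1, "inconsistency": 0,
                             "indirectness": 0, "imprecision": 1}))
```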
A total of nine studies, five cluster RCTs and four controlled NRSIs with before-and-after study designs, were included in the synthesis . All studies assessed multi-component guideline implementation interventions compared to no intervention in 3577 cancer patients and more than 450 oncologists, nurses and medical staff. Educational meetings combined with materials, opinion leaders, audit and feedback, a tailored intervention or academic detailing may have little to no effect on overall survival, quality of life and adverse events of cancer patients compared to no intervention; however, the evidence is either uncertain or very uncertain. Multi-component interventions may increase or slightly increase guideline adherence regarding the screening, referral and prescribing behaviour of healthcare professionals, but the certainty in the evidence is low. The interventions may have little to no effect on the attitudes and knowledge of healthcare professionals; still, the evidence is very uncertain. The present review confirms the findings from previous reviews in other clinical settings that educational strategies are the most frequently used strategies for guideline implementation and the most commonly used component in multi-component interventions . Compared to the review of Tomasone et al. , this review focuses on more recent literature. While Tomasone 2020 included 33 studies published between 1998 and 2018, we focused on more recent evidence up to December 2022. In contrast to the present review, Tomasone 2020 primarily focused on guideline adherence of healthcare professionals and secondarily on patient-relevant outcomes (survival, quality of life, test completion, pain) and concluded that the most used strategies were educational strategies and feedback on guideline compliance.
In addition, the authors used a different taxonomy for coding the interventions, the Mazza taxonomy. This taxonomy builds upon the EPOC taxonomy but has implemented and adapted further domains . A review in the dental care setting found that multi-component interventions produced slightly greater improvements in guideline adherence outcomes than single interventions. In contrast to the present review, these reviews focused on other clinical settings and included studies conducted in other geographical locations. Moreover, they focused primarily on guideline adherence outcomes. Further, these reviews included a higher number of studies due to different inclusion criteria compared to the current review, such as wider publication time frames, the inclusion of further study designs (e.g. uncontrolled, retrospective), different classifications of interventions, and different outcomes. Some limitations of the included body of evidence in this review need to be mentioned. For instance, although the authors of included studies that did not clearly describe the effect estimates were contacted, no response was obtained, affecting the completeness of the results. Only studies conducted in high-income countries, namely the USA, Australia and France, were identified. Therefore, the results may not apply to other countries, where different health systems, local values and preferences exist. Also, the results may not be generalisable to all cancer types or oncological settings. Due to the substantial clinical heterogeneity in participant characteristics, interventions and outcomes, the pooling of results in meta-analyses was not feasible. This affected the ability of the synthesis to determine the quantitative effect of the interventions. A potential weakness in the review process is that we did not search clinical trial registries due to time constraints; however, this mainly impacts the completeness of the list of ongoing studies and less the results of this review. The time restriction (studies published after 2011) may be interpreted as a weakness, as otherwise eligible studies may have been excluded. However, the time frame was chosen to reflect more recent guideline implementation strategies. In addition, the categorisation of interventions according to the revised EPOC taxonomy was done by one reviewer (AB) and did not follow a standardised algorithm. This subjective assessment could have introduced bias, as other reviewers might have classified the interventions differently into the predefined categories. The poor reporting of intervention details in some studies amplified the difficulty of classifying strategies according to the taxonomy. One of the strengths of the current review is its comprehensive electronic literature search in five databases and the additional screening of the references of relevant studies and relevant systematic reviews. The search in each database was optimised with the assistance of an experienced information specialist. Six ongoing trials were also included in the review to reflect the latest state of research in this area. Further, this review was conducted according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions and the PRISMA statement , in concordance with the EPOC taxonomy , and was previously registered in PROSPERO (CRD42021268593). The effect of CPGs depends on how they are implemented and embedded in clinical practice .
The implementation strategies suggested by the German Association of the Scientific Medical Societies (AWMF) coincide with the strategies identified by this review, namely interactive training, discussions, feedback, and local opinion leaders . Moreover, in Germany, guideline-derived Quality Indicators (QIs) are used as key figures during cancer type-specific certification processes in order to determine whether a given center actually provides guideline-based treatment . Here, the interaction between guideline groups, QIs, certified centers and clinical cancer registries plays an important role, as part of the German National Cancer Plan . Additional digital implementation strategies found in this review, such as online education modules or digital monitoring of patient-reported outcomes, seem to be feasible in the current oncological setting . For instance, online spaced learning involves sending short clinical case-based scenarios, which take less than five minutes to consider, to participants' e-mail or mobile device . Although the evidence in the present review was rated as low or very low, this does not necessarily mean that these multi-component strategies are not effective in oncology. According to the recommendations of the Institute of Medicine (IOM) and multiple behavioural change frameworks (e.g. the COM-B Behaviour Change Framework ), the effect may be amplified when combining several strategies that target the same or different patterns of behaviour change of health professionals, and such interventions should be preferred . The current preference for multi-component, professional-targeted interventions in oncology is in line with the implementation of CPGs in other settings . Also, the IOM recommends that "effective multi-component implementation strategies targeting both individuals and healthcare systems should be employed by implementers to promote adherence to trustworthy CPGs" . However, the implementation of multi-component interventions can be demanding, as two or more single strategies are involved. In particular, professional-targeted interventions may be hard to implement, as behaviour patterns are always difficult to change, regardless of the setting. Moreover, strategies need to keep pace with the high volume of recommendations and their frequent updates. This may require resources such as time, money and trained staff. As stated by the ESMO, without adoption in routine clinical practice, "even CPGs of the highest quality may be useless" . In order to be successful, "CPGs have to be developed, disseminated to the right target audience, and finally be implemented" . Facilitators of the implementation of CPGs in oncology, found in a recent review, include the accessibility and ease of use of guidelines, dissemination of CPGs, adequate access to treatment facilities and resources, awareness of CPGs, belief in their relevance, and support in decision-making . Furthermore, provider-related barriers such as the behavioural patterns of health professionals, values, attitudes and prior knowledge that affect adaptability and adherence to change should be addressed prior to the development of implementation strategies. A focus on local organisational structures, the multidisciplinarity of the setting, and the availability of resources and support is essential when developing guideline implementation strategies.
The development of complex and clearly reported interventions based on evidence-based theoretical frameworks may offer greater potential for changing clinical practice and a better understanding of the barriers to and facilitators of guideline implementation. We developed a list of quality parameters for future guideline implementation research that can be found in Fig. (Key Messages Box). High-quality cluster randomized controlled trials and prospectively registered observational studies, designed to primarily assess patient-relevant outcomes emerging from changes in the behaviour of healthcare professionals, are needed. Future studies could consider subgroup analyses of participants, as the impact may differ across the clinical specialities involved in the oncological setting (e.g. doctors vs. nurses). In addition, a clear definition of the interventions is essential. The consistency of reporting of implementation strategies according to a classification framework or taxonomy should be improved in future studies. The results should be interpreted with caution due to the low certainty of the evidence for overall survival, quality of life (in RCTs), screening, referrals, and prescribing behaviour (in RCTs), and the very low certainty of the evidence for all other outcomes. Team-oriented or online educational training and the dissemination of materials embedded in multi-component interventions appear to be the most frequently researched strategies in oncology in recent years. This systematic review provides an overview of recent guideline implementation strategies in oncology, encourages future implementation research in this area, and informs policymakers and professional organisations on the development and adoption of implementation strategies. Additional file 1. Search strategies. List of excluded studies at full-text screening stage. Population characteristics. Detailed description of interventions of included studies. Reported outcomes in the included studies. Characteristics of ongoing studies. Characteristics of studies awaiting classification. Risk of bias judgement for randomized controlled trials. Risk of bias judgement for non-randomized controlled studies of interventions. Risk of bias summary plots. Outcome effect tables. PRISMA Checklist.
Effect of educational intervention programme on the health-related quality of life (HRQOL) of individuals with type 2 diabetes mellitus in South-East, Nigeria
9c3ede54-889c-4b26-aaea-370f02bc0faa
10080927
Patient Education as Topic[mh]
Diabetes Mellitus (DM) is a metabolic disorder known to affect people of all ages and racial backgrounds and is recognised as one of the major health challenges confronting the global community. In the past, it was known to affect the affluent more than the non-affluent, but in contemporary times its burden is increasingly felt in developing countries. Previous reports have shown that as many as 80% of diabetes-related deaths were recorded in low- and middle-income countries. Interestingly, previous studies have reported a progressive increase in the prevalence of diabetes at global, regional, and national levels. In 2011, it was estimated that 285 million adults were affected by DM globally. Also, in 2013, another report estimated that about 382 million adults globally were affected by DM, with a prevalence of 3.8%. In 2014, the global prevalence rose to 9%, with an estimated 387 million adults living with diabetes. A more recent report estimated that nearly half a billion adults globally were living with diabetes. A previous study reported the prevalence of diabetes in Nigeria to be in the region of 8–10%, with over 4 million cases reported by another study. Because of the rising global prevalence of diabetes and its association with poor quality of life, the WHO projected that diabetes may become the 7th leading cause of death by 2030. It has been posited that diabetes mellitus often results in frequent hospitalization, which is associated with high economic costs and consequently affects the quality of life of persons with diabetes. Hence, attention to strategies such as patient training and education to promote quality of life is critical to reducing early complications and readmissions of patients with chronic diseases such as diabetes. The QOL represents the effect of illness on a person as perceived by that person. Quality of life also encompasses people's emotional, social, and physical well-being and their ability to function in everyday life. From the perspective of the WHO, the QOL of an individual is their perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards, and concerns. The perception of the meaning of QOL varies from one individual to another and from one group to another; these differences in definition stem from the multi-disciplinary use of the term. Hence, the Centers for Disease Control (CDC) describes QOL as a broad multidimensional concept that usually includes subjective evaluation of both positive and negative aspects of life. HRQOL places QOL in the context of the impact of health and disease: Health-Related Quality of Life is a health outcome that quantifies how disease, disability, or disorder affects an individual's well-being. According to the WHO, HRQOL is measured in the dimensions of the physical, mental, and social well-being of an individual. Diabetes is one of the most important chronic diseases with a great impact on health. People with diabetes are reminded of their disease daily: they have to choose their diet and decide when to schedule their meals, exercise, test their blood glucose, take their medication, monitor their blood pressure, check for symptoms of hyper- or hypoglycemia, and deal with the fear of possible complications.
As a result, they often feel challenged by their disease because of its day-to-day management demands, and these may affect their quality of life. Diabetes education is concerned with encouraging independence and self-confidence so that people can carry out their self-care activities. Knowledge of self-management of diabetes is an important aspect of better glycemic control and better quality of life. The aim is to enable patients to become the most knowledgeable and, hopefully, the most active participants in their diabetes care. It also aims at optimizing metabolic control, preventing acute and chronic complications, and improving quality of life. However, previous studies on self-care practice revealed that persons with DM have inadequate knowledge of self-care. This, the researchers assumed, may affect their HRQOL. We realized that there was a dearth of studies on non-pharmacological interventions in the care of people with type 2 diabetes in Nigeria, especially in the South Eastern part. This has created a knowledge gap that needs to be filled, hence our desire to answer the research question: what is the effect of an educational intervention programme on the health-related quality of life (HRQOL) of individuals with type 2 diabetes mellitus in South-East Nigeria? We hypothesized that an educational intervention programme for individuals with type 2 DM recruited from selected tertiary institutions in South East Nigeria will not lead to an improvement in their HRQOL. The successful incorporation of educational intervention in the management of type 2 DM will go a long way in improving the HRQOL of individuals with type 2 diabetes. Study type: The study was a multi-center quasi-experimental design involving three hundred and eighty-two (382) persons living with type 2 DM purposively recruited from the diabetic clinics of four tertiary health institutions in South East, Nigeria. Ethics approval to carry out the research was obtained from the Institutional Ethics Committees of Nnamdi Azikiwe University Teaching Hospital, University of Nigeria Teaching Hospital, and Federal Medical Center, Umuahia. Step 1. Selection of study area/states used for the study - There are five (5) States that make up the South Eastern Region of Nigeria. Each State houses two tertiary health institutions, making a total of ten (10) tertiary health institutions in South Eastern Nigeria. These States with their tertiary health institutions were listed, and a simple random technique with replacement was used to select four (4) States with their tertiary health institutions. The name of each state was written on a piece of paper, folded, and placed in a bag; a child was asked to pick one piece of paper at a time from the bag. The state picked was written down, and the piece of paper was folded and put back in the bag. This procedure was repeated until four States were selected. The States selected were Abia, Anambra, Enugu, and Imo States. Step 2: Selection of study center/site - A simple random technique was used to select one health institution from each state, making a total of four (4) tertiary health institutions used for the study. The institutions are Federal Medical Center, Umuahia (FMCU), Nnamdi Azikiwe University Teaching Hospital, Nnewi (NAUTHN), University of Nigeria Teaching Hospital, Ituku-Ozalla (UNTHI), and Federal Medical Center, Owerri (FMCO).
Step 3: Determination of experimental (intervention) and comparison (control) groups - Participating tertiary health institutions were randomly assigned to experimental and comparison (control) groups using simple randomization with replacement. This was achieved by writing the numbers 1, 2, 3, 4, 5, and 6 on pieces of paper, which were folded and placed in a tray. Four girls (each representing a health institution) were asked to pick a piece of paper from the tray. Odd numbers formed experimental hospitals, while even numbers formed comparison (control) hospitals. The institutions picked as experimental were UNTH Ituku-Ozalla and FMC Owerri, whereas NAUTH Nnewi and FMC Umuahia were picked as control hospitals. Hence, participants from UNTH Ituku-Ozalla and FMC Owerri formed the experimental (intervention) group, while those from NAUTH Nnewi and FMC Umuahia formed the control (comparison) group. The original sample size for the study was 410. A proportionate sampling technique was used to determine the number of participants recruited from each study site, based on the proportion of people living with diabetes mellitus (PLWDM) at each site relative to the entire population of PLWDM across the 4 hospitals selected for the study. Thus, the experimental hospitals, UNTH Ituku-Ozalla and FMC Owerri, had 121 and 86 PLWDM respectively, a total of 207, while the control hospitals, NAUTH Nnewi and FMC Umuahia, had 103 and 100 PLWDM respectively, making a total of 203 PLWDM for the control group. However, before the intervention, it was observed that some copies of the questionnaire (9 from the experimental and 10 from the control group) were not properly completed. Also, during the post-test, 9 participants from the control group did not show up; as a result, their pretest scores were removed. In total, we recorded an attrition of 28 out of the 410, so the analysis of the questionnaire was based on 198 experimental participants' scores and 184 control participants' scores. Step 4: Finally, a purposive sampling technique was used to recruit participants for the study. The researchers met persons with diabetes at the diabetic clinics in the selected health institutions (experimental and control) on different occasions; after the purpose of the study and the steps/procedures involved were introduced, those who opted for the study and met the inclusion criteria were recruited. Their names and phone numbers, or a support person's phone contact, were collected. Patient education - The educational intervention covered areas such as the meaning, types, causes, and complications of DM, adherence to diet therapy, blood glucose monitoring, physical activity/exercise, foot care, adherence to medication, recognition of symptoms of hypo- and hyperglycemia and actions to take, blood pressure monitoring, regular health checkups including eye care, health care use, 3-monthly laboratory testing for glycosylated hemoglobin (HbA1c), communication with the physician, lifestyle changes, managing emotional problems, and stress management. Method of data collection. Research assistants: Six research assistants (final-year student nurses) trained by the researchers assisted in data collection from the selected health institutions. All the research assistants received training on the areas in which they were to assist in the study. Each item in the questionnaire was explained to them, and the need to maintain objectivity was emphasized.
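To make the sampling arithmetic above easier to follow, the short sketch below reproduces the proportionate allocation and attrition figures quoted in the text; the site names and counts are taken from the text, while the variable names and helper code are purely illustrative.

```python
# Illustrative reconstruction of the proportionate allocation and attrition figures
# quoted in the text. Site names and counts come from the text; everything else is ours.
clinic_population = {          # PLWDM registered at each selected clinic
    "UNTH Ituku-Ozalla": 121,  # experimental
    "FMC Owerri": 86,          # experimental
    "NAUTH Nnewi": 103,        # control
    "FMC Umuahia": 100,        # control
}

sample_size = 410
total_population = sum(clinic_population.values())

# Proportionate sampling: each site contributes in proportion to its clinic population.
allocation = {site: round(sample_size * n / total_population)
              for site, n in clinic_population.items()}
print(allocation)  # experimental total = 207, control total = 203

# Attrition reported in the text: 9 + 10 incomplete questionnaires, 9 lost at post-test.
analysed_experimental = 207 - 9          # 198
analysed_control = 203 - 10 - 9          # 184
print(analysed_experimental + analysed_control, 410 - (9 + 10 + 9))  # 382 analysed, 28 attrition
```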
The training of research assistants lasted for two weeks. The study participants were divided into groups of not more than 25 persons per group for easy administration of the questionnaire as well as education of the intervention-group participants. Each group of study participants was invited to the clinic on a particular day of the week for pre-intervention data collection. Pretest data were collected from study participants (both experimental and control groups) who met the inclusion criteria, using the English version of the questionnaire. However, a specialist in the native language (Igbo), who was earlier trained on the purpose of the study, was involved in translating the questionnaire for non-literate participants. Pretest data collection lasted for 6 weeks. Educational intervention: The educational intervention material centered on general diabetes management, such as involvement in physical activity/exercise, diet adherence, foot care, monitoring of blood sugar, blood pressure monitoring, recognition of signs of hypo- and hyperglycemia and actions to take, and eye checkups. Other areas covered included lifestyle changes (avoidance of alcohol/sweetened wine, cigarette smoking, etc.), involvement in healthy social functions (joining the diabetic club, etc.), health care use (even in the absence of symptoms), communication with the physician, and emotional and stress management. The diabetes self-management education commenced for the experimental group and lasted for 9 weeks. An unpublished booklet titled "Managing Your Diabetes", developed by the researchers from a module on diabetes education and other relevant materials, was given to the experimental group to take home. The experimental group was followed up: two weekly meetings were arranged with them to place more emphasis on diabetes self-management and to encourage them to practice it. Phone calls were made between meetings to answer the participants' questions. Also, the two weekly meetings helped the researchers maintain contact with the experimental participants and identify the areas in which they had problems with the practice of self-care. The control group participants received normal care during the period of the intervention. Six months after the commencement of training with follow-up, copies of the quality of life questionnaire were administered as a posttest to both the experimental and control groups to observe the effect of the education on the quality of life of the intervention (experimental) group. At the end of the post-test data collection activities, the researchers educated the participants in the control group and gave each of them a copy of the educational material as a means of support. The educational materials were leaflets containing brief but vital information on diabetes, e.g. causes, prevention, and medical treatment of diabetes. Both groups and their family members/caregivers were given psychoeducation as part of the measures to help them accept the condition in which they found themselves and to help their loved ones comply with the instructions given during the educational interventions. Psychoeducation includes information on how to explain aspects of living with an illness to family members so that they can understand the effect of the illness and assist the patient and treatment providers in the treatment program. There is evidence that psychoeducation improves the outcomes of mental illness and many other medical illnesses.
Instrument for data collection: Data were collected using the RAND Short Form 36 (SF-36) Health Survey. The SF-36 questionnaire has a total of 36 questions grouped into eight (8) scales that measure eight dimensions (domains) of an individual's health; each scale contains specific questions that assess the quality of life in that domain. The domains are: physical functioning (10 questions), role limitation due to physical health (4 questions), role limitation due to emotional problems (3 questions), energy/fatigue (4 questions), emotional well-being (5 questions), social functioning (2 questions), pain (2 questions), and general health (6 questions). The SF-36 has been validated for use in the Nigerian population by two previous studies. The first, on sickle cell disease patients attending outpatient clinics in Ibadan, reported that the reliability of each of the dimensions was above 0.70, item internal consistency ranged from 0.42 to 0.91, and scaling success ranged between 0.98 and 100%, while the second study, on the translation, cross-cultural adaptation, and psychometric evaluation of the Yoruba version of the Short Form 36 Health Survey, reported that the concurrent validity of the Yoruba SF-36 was high, with scale and domain coefficients greater than 0.70, which is considered desirable for good validity of a new tool. The convergent validity was also satisfactory, ranging from 0.421 to 0.907. Similarly, in the current study, the SF-36 was tested on the current sample before application and showed acceptable internal consistency (0.63–0.95), known-group validity (0.60–0.99), convergent validity, and ceiling and floor effects. The above HRQOL domains are further grouped into two components, viz. the physical and mental components. Scoring of the SF-36 questionnaires was done using the RAND scoring guide. All questions were scored on a scale from 0 to 100, with 100 representing the highest level of functioning possible. Aggregate scores were compiled as a percentage of the total points possible, using the RAND scoring table. The scores from the questions that addressed each specific area of functional health status were averaged together for a final score within each of the dimensions measured. The scores were entered into SPSS for statistical analysis. Method of data analyses: The data were analyzed using IBM Statistical Package for the Social Sciences (SPSS 25.0: SPSS Inc., Chicago, IL, USA). The socio-demographic characteristics were summarized using descriptive statistics of frequency counts, percentages, means, and standard deviations. An independent t-test was used to compare the baseline QOL scores between the experimental and control groups before the intervention. Analysis of covariance (ANCOVA) was used to compare the changes that occurred in HRQOL between the experimental and control groups 6 months post-intervention. A paired samples test was used to examine the changes that occurred between the components of HRQOL. Spearman rank order correlation was used to test the relationship between age and the HRQOL domains, and a t-test was used to test the association between gender and the domains of HRQOL. In all tests, a p-value less than 0.05 was considered significant.
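A minimal sketch of the RAND scoring logic described above (recode each item response to 0–100, then average the recoded items within each domain) is shown below. The recode tables, item identifiers, and the two-item domain used in the example are placeholders for illustration; the actual values come from the RAND scoring guide referenced in the text.

```python
# Minimal sketch of the RAND SF-36 scoring logic described in the text:
# (1) recode each raw item response to a 0-100 scale, (2) average the recoded
# items belonging to each domain. Recode tables and item-domain mapping below
# are placeholders; the real values are defined in the RAND scoring guide.

def score_sf36(responses, recode_tables, domain_items):
    """responses: {item_id: raw_response}; recode_tables: {item_id: {raw: 0-100}};
    domain_items: {domain_name: [item_ids]}. Returns {domain_name: 0-100 mean score}."""
    recoded = {item: recode_tables[item][raw] for item, raw in responses.items()}
    scores = {}
    for domain, items in domain_items.items():
        answered = [recoded[i] for i in items if i in recoded]
        scores[domain] = sum(answered) / len(answered) if answered else None
    return scores

# Illustrative use with a hypothetical 2-item social functioning domain:
recode = {"sf1": {1: 0, 2: 25, 3: 50, 4: 75, 5: 100},
          "sf2": {1: 0, 2: 25, 3: 50, 4: 75, 5: 100}}
domains = {"social_functioning": ["sf1", "sf2"]}
print(score_sf36({"sf1": 4, "sf2": 5}, recode, domains))  # {'social_functioning': 87.5}
```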
The table of socio-demographic characteristics shows that the numbers of male and female participants in the two groups were matched and that both groups had similar proportions of participants across gender; the difference was not statistically significant (p = 0.256). The mean age of participants in the experimental group (58.52 ± 11.40) was similar to that of the control group (56.29 ± 11.92; t = 1.87, p = 0.063). The next table shows the mean and standard deviation of the quality of life scores of the experimental and control groups before the educational intervention. The independent t-test results show significantly higher mean QOL scores in the control group before the intervention in the following domains: energy/fatigue (57.03 ± 17.20 vs. 51.60 ± 14.15; t = -3.379, p = 0.001), emotional well-being (67.64 ± 16.02 vs. 59.19 ± 12.96; t = -5.690, p = 0.001), social functioning (62.96 ± 21.37 vs. 58.33 ± 20.09; t = -2.187, p = 0.029), and general health (56.95 ± 16.66 vs. 47.75 ± 12.85; t = -6.072, p = 0.001). The table further shows differences in overall QOL between the groups; the control group had a significantly higher overall QOL mean score than the experimental group before the intervention (56.51 ± 16.44 vs. 52.02 ± 15.02, t = -2.792, p = 0.006). A further table compares the quality of life scores between the experimental and control groups before and after the educational intervention. Six months after the intervention, the experimental group showed significantly greater improvements in all QOL domains than the control group (p < 0.05), with eta-squared (η²) = 0.14, indicating that the educational intervention administered to the participants had a large effect size. The overall QOL mean score of the experimental group was significantly higher than that of the control group by 5.87 points after the intervention (p = 0.001). The table further shows significant differences in the QOL mean scores of the physical and mental components 6 months post-intervention: the physical component QOL mean increased by 9.12 points over the pretest mean, while the mental component QOL mean increased by 6.3 points, indicating the effectiveness of the intervention on both components of QOL, although the effect was greater on the physical component (t = -14.51) than on the mental component (t = -10.82).
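The between-group comparison at follow-up described above (ANCOVA of the post-test score with the pretest score as covariate, plus an eta-squared effect size) can be expressed as a simple linear model. The sketch below is illustrative only: it uses statsmodels with hypothetical column names and a hypothetical data file, not the authors' dataset, and the group level label in the last print depends on how the group column is coded.

```python
# Illustrative ANCOVA sketch for the post-intervention comparison described above.
# Column names (qol_post, qol_pre, group) and the file name are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("sf36_scores.csv")  # hypothetical file: one row per participant

# OLS formulation of the one-way ANCOVA: C(group) codes experimental vs control,
# with the pretest QOL score entered as a covariate.
model = smf.ols("qol_post ~ qol_pre + C(group)", data=df).fit()
aov = anova_lm(model, typ=2)
print(aov)                                        # F-test for the adjusted group effect
print(model.params["C(group)[T.experimental]"])   # adjusted mean difference (label depends on coding)

# Partial eta-squared for the group effect (the effect-size measure reported in the text).
ss_group, ss_resid = aov.loc["C(group)", "sum_sq"], aov.loc["Residual", "sum_sq"]
print("partial eta^2 =", ss_group / (ss_group + ss_resid))
```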
The correlation table shows an inverse correlation between age and the following HRQOL domains: physical functioning (ρ = -0.175, p = 0.001), role limitation due to physical health (ρ = -0.219, p = 0.001), energy/fatigue (ρ = -0.102, p = 0.047), and pain (ρ = -0.117, p = 0.022); as age increases, QOL decreases in these domains. The table also shows no significant relationship between gender and QOL (p > 0.05). The baseline findings on HRQOL revealed that a good number of the study participants scored above 50 in most QOL domains at pretest, except in role limitation due to physical health, in which more than half of all participants scored below 50. This finding concurs with a previous finding that reported QOL scores of more than 50 in most domains of the SF-36. Comparison of HRQOL between the two groups before the intervention showed that a smaller proportion of participants in the experimental group scored above 50 in most SF-36 domains. Independent t-tests on baseline QOL showed that the two groups were similar except in the domains of energy/fatigue, emotional well-being, social functioning, and general health, where the control group had significantly higher mean QOL scores. This implies that the control group participants experienced less fatigue and had better emotional well-being, social functioning, and general health than the experimental group before the intervention. This finding disagrees with the findings of a previous study that reported poor QOL in all QOL domains in both experimental and control groups before their educational intervention. A significant difference was also observed in the overall mean QOL scores of the intervention and control groups at the pretest stage; the control group had a higher overall mean QOL score than the intervention group, further showing that the control group participants had a better QOL than the intervention group before the intervention. However, 6 months after the educational intervention, the mean QOL scores of the experimental group increased significantly in all QOL domains, and the overall mean QOL score of the intervention group increased significantly by 5.87 points. This implies a positive effect of the intervention on the experimental group, as shown in the HRQOL scores, and underscores the fact that educational intervention for people living with diabetes can be helpful in the non-pharmacological management of DM. The outcome of the study runs counter to the study hypothesis, which stated that an educational intervention programme for individuals with type 2 DM recruited from selected tertiary institutions in South East Nigeria will not lead to an improvement in their HRQOL. This finding is similar to that of a previous study in Saudi Arabia, which revealed a statistically significant improvement in four dimensions of HRQOL after a psychoeducational intervention (p < 0.01). It is also similar to the finding of another study in Iran, which revealed a significant difference in the mean scores of the physical, psychological, and social domains of QOL after the intervention. Age was inversely correlated with the domains of physical functioning, role limitation due to physical health, energy/fatigue, and pain, implying that as age increases, QOL decreases in these domains. Several studies found that quality of life in their study populations worsened with increasing age, which may be due to a high rate of comorbidities and other health challenges associated with old age.
Also, the finding on age and pain is similar to that of a study which reported that participants over 60 years experienced bodily pain. Further, the association between age and role limitation agrees with the findings of a study in which the age of participants influenced QOL in the dimensions of role limitation and physical endurance. In this study, gender had no significant association with any of the HRQOL domains, implying that being male or female did not produce any difference in the participants' HRQOL scores after the educational intervention. We speculate that what matters most is the patients' compliance with the diabetes education and management instructions given to them, rather than their gender; the participants' HRQOL scores were not influenced by whether they were male or female but could instead be influenced by their mastery of the standard management plans. This contradicts the findings of Miguel et al. (2014), in which significant differences were observed between men and women in the domains of pain and social functioning (p < 0.05). It also contradicts the findings of Mahmoud et al. (2016), which revealed male participants to have better HRQOL than female participants (p < 0.05). A significant difference was observed in the physical component (PCS) and mental component (MCS) QOL mean scores after the intervention; the PCS increased by 9.12 points over the pretest mean, while the MCS increased by 6.3 points, indicating the effectiveness of the intervention on both the PCS and MCS components of HRQOL. A similar study observed a comparable increase in the PCS and MCS QOL scores of its participants after 6 months of intervention (p < 0.05). The educational intervention was effective in improving the quality of life of individuals living with type 2 diabetes mellitus, with a large effect size (0.80). This is reflected in the significant improvement in the HRQOL of the experimental group after the educational intervention compared with the control group, which did not receive the intervention and relied on routine diabetic care. The large effect size shows that the educational intervention was very effective in improving the quality of life of those living with DM and should be incorporated as an adjunct in the management of DM. Limitations of the study: The quasi-experimental design used may limit the study's ability to establish a causal relationship between the educational intervention and the outcome. Also, the sample size was not adequate for a study of this magnitude; hence, the outcome should be generalised with caution. Contribution to knowledge: This study was an attempt to determine the effect of an educational intervention programme on the QOL of individuals living with type 2 DM in South East Nigeria. We believe that the incorporation of educational intervention in the management of type 2 DM will go a long way in minimizing the development of comorbidities and the need for drug intervention in diabetic patients, hence improving the HRQOL of individuals with type 2 diabetes. The outcome has shown that when educational intervention is diligently delivered by the health professionals concerned and complied with by diabetic patients, positive outcomes in HRQOL can be achieved.
There is a need for health managers to develop policies that encourage health institutions and professionals to incorporate educational intervention in the management of type 2 diabetic patients in their practices. Self-management education should be included in the diabetes care plan and given serious attention. We recommend that future studies use a randomized controlled design involving different regions of the country so that the cause-and-effect relationship can be determined. Below is the link to the electronic supplementary material. Supplementary Material 1
Ultrafast Plasmonic Nucleic Acid Amplification and Real-Time Quantification for Decentralized Molecular Diagnostics
ced2cc2a-dcfb-498d-a151-1ce1f195fa6c
10081571
Pathology[mh]
The pRT-qPCR system fully integrates the PTC, PoM cartridge, MAF microscope, mechanical loading units, and fluorescence monitoring units into a single platform (b and S1). The PoM cartridge is loaded on the body-fixed frame of a movable tray and brought into tight contact with the PTC. The movable tray is manually mounted along a rail frame up to two ball plunger switches, which confirm cartridge loading and then release the user lock for PCR operation. Two electrodes of the Pt-RTD on the PTC are mechanically coupled to a C-clip contact on the upper body-fixed frame and connected to the printed circuit board (PCB) for resistance-based temperature detection. A high-power WLED serving as the photothermal excitation source is located 2.5 mm below the PTC. A blue LED (BLED) and an excitation bandpass filter are arranged at 60° relative to the detection path between the PoM cartridge and the MAF microscope for fluorescence imaging. The physical dimensions of the pRT-qPCR system are 190 mm × 140 mm × 23 mm, with a weight of 580 g (c and Supporting Information Video 1). All functions, such as plasmonic thermocycling, temperature monitoring, and fluorescence imaging, are precisely controlled by a Raspberry Pi board. The embedded software automatically performs the diagnostic protocols and displays the analysis results on a liquid crystal display (LCD) screen. The internal product temperature is maintained below 30 °C through a plate-fin heat sink on the backside of the product and fan-based air circulation (Figure S2). The PTC involves wafer-level nanofabrication of the NPS as a light-to-heat converter and the Pt-RTD as a surface temperature monitor over a large area (a). First, 180 nm high glass nanopillar arrays (GNAs) were formed on a borosilicate glass wafer by reactive ion etching of thermally annealed silver nanoislands. Gold nanoislands (AuNIs) with diverse sizes and gaps were formed by thermal evaporation of a 40 nm thick Au layer across the top and sidewalls of the GNAs. A 500 nm thick silicon dioxide (SiO2) layer was deposited using plasma-enhanced chemical vapor deposition (PECVD) to decrease the surface roughness of the NPS. A 100 nm thick Pt layer on the SiO2-coated NPS was defined for the RTD by a lift-off process and further annealed for 10 min at 200 °C. Note that the sheet resistance of the Pt-RTD decreases during the thermal annealing due to the increase in grain size, achieving a high temperature coefficient of resistance (TCR) and thermal stability during PCR cycling (Figure S3). The Pt-RTD on the NPS was passivated with a 500 nm thick hydrogen silsesquioxane (HSQ) resist to prevent electrical contact with the PoM cartridge. The Pt electrodes were further coated with an electrically conductive epoxy adhesive to prevent mechanical scratching by the C-clip contact. Finally, the PTC wafer was diced into 12 pieces of 30 mm × 12 mm from a single 4 in. wafer, and each piece was placed on the movable tray for permanent use (Figure S4). The focused ion beam scanning electron microscopy (FIB-SEM) images of the PTC show well-established NPS and Pt-RTD at the nanoscale (b). The NPS strongly absorbs the full spectrum of visible light, rapidly converts it into photothermal heat, and efficiently dissipates the heat through air voids in the nanopillar configuration (Figure S5). A spiral line pattern of the Pt-RTD with a length of 50 mm and a width of 100 μm is arranged at the center of the WLED-driven heating area.
The resistive response of the Pt-RTD at different surface temperatures was measured using an infrared thermographic camera with surface emissivity correction (c). The measured nominal resistance is 689 Ω at 0 °C and increases linearly with a TCR of 441 ppm/°C up to 100 °C. The local surface temperature was determined from the voltage across the Pt-RTD (V_RTD) relative to that across the current-sensing resistor (V_REF) under a constant input voltage (Figure S6). The magnitude of the input voltage affects the reliability of thermal sensing through both the temperature fluctuation and the self-heating of the Pt-RTD (d). The temperature fluctuation error, i.e., the standard deviation of the measured temperature values, decreases as the input voltage increases; however, higher voltages cause critical self-heating errors generated by power dissipation in the Pt-RTD. An operating voltage of 3 V DC clearly suppresses the temperature fluctuation error to below 0.1 °C and the self-heating error to below 0.5 °C. The Pt-RTD can also interfere with accurate temperature monitoring of the NPS owing to the intrinsic photothermal effect of Pt at visible wavelengths (Figure S7). However, the light transmitted through the NPS and absorbed by the Pt-RTD was converted into a relatively small temperature increase of 3 °C, having little effect on the plasmonic thermocycling. As a result, the PTC demonstrates a significant correlation between the WLED illumination and the thermocycling, with precise temperature monitoring (e). The average heating and cooling rates between 60 and 95 °C were 18.85 and 8.89 °C/s, respectively. A two-step RT-PCR cycle (50 °C for RT, 95 °C for denaturation, and 60 °C for annealing and extension) was then performed using the pRT-qPCR system (f). The isothermal profiles for the complementary DNA (cDNA) synthesis and primer extension steps were acquired through proportional-integral-derivative (PID) modulation of the WLED, which optimized the duty cycle and the light intensity and resulted in accurate temperature correction (Figure S8). All the diagnostic protocols, including the RT process (210 s) and the amplification process (400 s for 40 cycles), were successfully accomplished within 10 min. The PoM cartridge exhibits high disposability and cost-effectiveness, making it suitable for POC testing. The cartridge configuration contains a polypropylene (PP) layer, an adhesive layer, and an Al thin film layer (a and S9). The 1.65 mm thick PP layer contains a 200 μm thick PCR chamber and sample loading ports. The PCR samples are directly loaded with a micropipet into the inlet, gathered at the outlet adjacent to the inlet, and sealed together with film seal tape to prevent water evaporation and cross-contamination during the whole PCR process. The 50 μm thick adhesive layer contains a polyethylene terephthalate film and double-sided acrylic adhesives and serves to define a microfluidic channel 300 μm in width. Finally, the adhesive layer was tightly attached to an Al thin film layer 13 μm in thickness. The physical dimensions of the PoM cartridge are 15 mm × 28 mm × 1.71 mm. The PoM cartridges were batch-fabricated using plastic injection molding and fine blanking techniques (b). The captured cross-sectional image clearly indicates that the PCR chamber and microfluidic channel were successfully formed on the planar Al thin film layer (c).
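The resistance-based temperature readout and the PID-modulated heating described above can be illustrated with a short calculation: the RTD resistance is inferred from the voltage-divider ratio against the current-sensing resistor, and the temperature follows from the linear TCR calibration. The sketch below is illustrative only; only the nominal resistance (689 Ω at 0 °C), the TCR (441 ppm/°C), and the 3 V DC operating voltage come from the text, while the sensing-resistor value and PID gains are hypothetical.

```python
# Illustrative readout of the Pt-RTD via the series voltage divider described in the text,
# plus one step of a PID loop modulating the WLED duty cycle. R0, TCR and V_IN are from
# the text; R_REF and the PID gains are hypothetical placeholders.
R0 = 689.0      # ohm at 0 degC (from the text)
TCR = 441e-6    # 1/degC, i.e. 441 ppm/degC (from the text)
V_IN = 3.0      # V DC operating voltage (from the text)
R_REF = 1000.0  # ohm, hypothetical current-sensing resistor

def rtd_temperature(v_rtd):
    """Convert the voltage across the Pt-RTD into temperature (degC).
    Series divider: V_RTD / V_REF = R_RTD / R_REF, with V_REF = V_IN - V_RTD."""
    r_rtd = R_REF * v_rtd / (V_IN - v_rtd)
    return (r_rtd / R0 - 1.0) / TCR

def pid_duty_cycle(setpoint, measured, state, kp=0.05, ki=0.01, kd=0.002, dt=0.1):
    """One step of PID modulation of the WLED duty cycle (gains are placeholders)."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    duty = kp * error + ki * state["integral"] + kd * derivative
    return min(max(duty, 0.0), 1.0)  # clamp to 0-100% duty cycle

state = {"integral": 0.0, "prev_error": 0.0}
temp = rtd_temperature(v_rtd=1.243)           # example divider voltage, ~61 degC here
print(round(temp, 1), pid_duty_cycle(60.0, temp, state))
```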
The Al layer separates the PTC from the PCR mixture and thus offers outstanding reusability of the plasmonic nanostructures. In addition, the small thickness and high thermal conductivity of the Al layer improve the ramping rate of PCR cycling. The measured times to heat up (t_H) and cool down (t_C) between 60 and 95 °C were compared for the unloaded PTC (before loading, black line), the PTC loaded with the PoM cartridge (PoM cartridge, red line), and the plastic-on-glass cartridge (PoG cartridge, blue line) (d). The measured t_H values are 1.4 and 2.7 s for the unloaded PTC and the PTC loaded with the PoM cartridge, respectively. In the experimental comparison, the PoM cartridge transfers heat from the PTC to the PCR mixture 2.9 times faster than the PoG cartridge. Moreover, the Al thin film layer substantially improves the cooling rate, being 2.2 times faster than the passive cooling of the PTC and 3.2 times faster than the PoG cartridge. The rapid dissipation of photothermal energy through the Al layer was confirmed using finite element analysis in COMSOL Multiphysics (Figure S10). Note that shortening t_C leads to high amplification efficiency by preventing hybridization of single-stranded DNA before the annealing temperature is reached. The PoM cartridge also provides in situ fluorescence detection of amplicons during the photothermal heating owing to the high reflectivity of the Al layer. The fluorescence signals emitted from fluorescein isothiocyanate (FITC) were directly monitored in real time for the PoM and PoG cartridges during the plasmonic thermocycling to verify the spectral crosstalk between the WLED and the fluorescence light (e). WLED light leaking from the plasmonic thermal cycler is transmitted through the bottom glass of the PoG cartridge and significantly delays the fluorescence recovery rate after photobleaching, disrupting fast fluorescence detection. However, the Al thin layer of the PoM cartridge completely blocks the WLED light and effectively prevents photobleaching and photodamage of the PCR components during the PCR reaction. Note that fluorophores typically show a slight drop in quantum yield as the temperature rises. An antifouling surface treatment using ethanol cleaning and oxygen plasma was further performed on the PoM cartridge to prevent nonspecific adsorption of PCR components and bubble formation, for high amplification efficiency. The water contact angles (θ) of the PP layer were measured for one month after the surface treatment (Figure S11). The antifouling treatment clearly enhanced the surface energy of the PP layer, reducing the water contact angle to 22.5° and keeping it below 40° for one month. Sample preservation in the PoM cartridge during the plasmonic thermocycling was then verified to confirm the stability of the PCR conditions (Figure S12). Microbubbles produced by the PCR reaction were guided and assembled at the top of the PCR chamber in the vertically standing PoM cartridge, which retained more than 80% of the sample after the plasmonic PCR with low bubble coverage, allowing stable fluorescence monitoring. The fully packaged MAF microscope inside the pRT-qPCR system contains a commercial emission bandpass filter, inverted microlens arrays with a light absorber in a 3 × 4 arrangement, and a single CMOS imager (a and S13). The total thickness of the microscope is 2.13 mm.
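As a rough illustration of why a micrometre-scale Al film transfers heat so much faster than a glass cartridge bottom, one can compare characteristic conduction time scales τ ≈ L²/α for the two materials. In the sketch below, only the 13 μm Al thickness is taken from the text; the glass thickness is a hypothetical value for a PoG-type cartridge, the thermal diffusivities are approximate textbook values, and the estimate ignores interfacial resistance and the water load.

```python
# Back-of-the-envelope comparison of conduction time scales through the cartridge bottom.
# Only the Al film thickness comes from the text; the other numbers are approximations.
alpha_al = 9.7e-5      # m^2/s, thermal diffusivity of aluminium (approx. textbook value)
alpha_glass = 5e-7     # m^2/s, thermal diffusivity of borosilicate glass (approx.)

L_al = 13e-6           # m, Al thin film thickness (from the text)
L_glass = 500e-6       # m, hypothetical glass bottom thickness of a PoG cartridge

tau_al = L_al**2 / alpha_al            # ~2 microseconds
tau_glass = L_glass**2 / alpha_glass   # ~0.5 s
print(f"Al film: {tau_al:.1e} s, glass bottom: {tau_glass:.1e} s")
# The Al film itself is effectively instantaneous; in practice the ramp rate is limited
# by the liquid volume and interfaces, consistent with the seconds-scale t_H and t_C above.
```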
The microlens arrays have a lens diameter of 300 μm, a focal length of 430 μm, and a pitch of 800 μm, which allows the whole area of the PCR chamber inside the PoM cartridge to be captured at a short object distance of 10 mm (Figure S14). The light absorber blocks optical crosstalk between adjacent microlenses by strongly absorbing visible light. The lateral resolution was measured from the fluorescence intensity profile obtained by imaging a USAF 1951 target (b). The single-microlens image clearly resolves a line width of 99.2 μm (group 2 element 3), which is sufficient to observe the PCR chamber and the microchannels of the PoM cartridge. In addition, the MAF microscope shows a lateral resolution of 7 μm (group 6 element 2) when the object is in contact with the front window (Figure S15). The field of view (FOV) of each microlens was then measured by imaging a grid target, giving a wide FOV of 77.2° and an imaging area of 8 mm × 8 mm at a target distance of 10 mm (c). The HDR fluorescence image was obtained with an image reconstruction algorithm comprising image stacking, image averaging, image subtraction, and image masking (d and S16). Array fluorescence images were captured with the MAF microscope for PCR mixtures containing FAM-labeled TaqMan probes in the PoM cartridge after 40 plasmonic PCR cycles. The images from individual microlenses were cropped for stacking and merged with the image averaging tool in Chasys Draw IES, which reduces random background noise and improves the image contrast. The intensity profiles of the MAF image show that the image averaging step provides a high-contrast fluorescence image with a 1.33-fold improvement in signal-to-noise ratio (SNR) (e). Note that the background noise is mostly caused by reflection of the excitation light from the Al layer and by diffraction from the PP layer. The reconstructed image was obtained by taking the difference between the merged image after plasmonic PCR and that of the initial state and extracting the fluorescence image of the PCR chamber. The image reconstruction reduces the background noise by 98.69%, enabling reliable cycle-by-cycle quantification during the PCR reaction. Finally, the MAF microscope achieves a limit of detection (LOD) for FITC dye at nanomolar concentrations, comparable to a conventional fluorescence microscope (f and S17). Note that FITC and FAM have identical spectral characteristics because both originate from fluorescein. The LOD of the MAF microscope was determined to be 123 nM, which is 2 times higher than that of the fluorescence microscope. The experimental results also indicate that the image reconstruction yields a 16-fold improvement in fluorescence sensitivity. For instance, at a concentration of 2⁻¹⁰ mM (i.e., 976 nM), the pixel intensities from the single-microlens image and the reconstructed image were 13.3 and 168, respectively. This highly sensitive detection at low concentrations directly affects the cycle threshold (CT) in real-time PCR (g). An amplification run of 40 cycles was performed with the pRT-qPCR system, and the CT value was determined as the crossing point with a threshold line set at 5 times the standard deviation of the background fluorescence level. As a result, after image reconstruction the MAF microscope improves the LOD of the fluorescence intensity and shortens the CT values by about 4 cycles, making it comparable to conventional fluorescence microscopes.
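The array-image reconstruction summarized above (cropping the raw frame into the 12 microlens sub-images, averaging them, subtracting the pre-PCR reference, and masking the PCR chamber before summing the pixel intensities) can be sketched schematically as follows. This is an illustrative re-implementation under stated assumptions (sub-image grid geometry, mask definition, synthetic data), not the MATLAB/Chasys Draw pipeline the authors used.

```python
import numpy as np

def split_array_image(frame: np.ndarray, rows: int = 4, cols: int = 3) -> np.ndarray:
    """Crop the raw sensor frame into rows*cols microlens sub-images."""
    h, w = frame.shape
    sh, sw = h // rows, w // cols
    tiles = [frame[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
             for r in range(rows) for c in range(cols)]
    return np.stack(tiles).astype(np.float64)

def reconstruct(frame_cycle: np.ndarray, frame_initial: np.ndarray,
                chamber_mask: np.ndarray) -> float:
    """Averaged, baseline-subtracted, masked fluorescence intensity for one cycle."""
    avg_cycle = split_array_image(frame_cycle).mean(axis=0)   # image averaging
    avg_init = split_array_image(frame_initial).mean(axis=0)  # pre-PCR reference
    diff = np.clip(avg_cycle - avg_init, 0, None)             # image subtraction
    return float((diff * chamber_mask).sum())                 # masking + summation

# Illustrative use with synthetic 400x300 frames and a central rectangular mask.
rng = np.random.default_rng(0)
init = rng.normal(10, 2, (400, 300))
cycle = init + 5.0                         # uniform fluorescence gain after PCR
mask = np.zeros((100, 100))
mask[30:70, 30:70] = 1
print(f"cycle intensity = {reconstruct(cycle, init, mask):.0f}")
```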
Rapid amplification and real-time quantification of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) were finally demonstrated with the pRT-qPCR system. A 4 μL aliquot of the PCR mixture containing RNA templates, reaction buffer, enzyme buffer, forward/reverse primers, FAM-labeled TaqMan probes, bovine serum albumin, and nuclease-free water was injected into the PoM cartridge (Table S1). The minimum concentration of RNA templates was 10⁴ copies/mL, corresponding to 20 copies/2 μL in the PCR chamber. The pRT-qPCR test was performed automatically to amplify a target sequence of 112 base pairs (bp) after loading the PoM cartridge and assigning the specific PCR conditions. The RT-PCR protocol was set to an initial RT at 50 °C for 5 min, followed by 40 cycles of denaturation at 95 °C with a holding time of 0 s and annealing/extension at 60 °C for 2 s. The array fluorescence images were captured with an exposure time of 1 s in each cycle at the annealing temperature and converted into normalized intensity for in situ quantification (Figure S18). The amplification curves follow a sigmoidal trend for the different concentrations, indicating true exponential increases in the target amplicons above the threshold line (a). No-template control (NTC) tests showed no amplification beyond the baseline fluorescence. A standard curve of CT values versus target concentration was further compared with that of a conventional benchtop qRT-PCR system (StepOnePlus Real-time PCR system, Thermo Fisher Scientific Inc.) (b). The benchtop qRT-PCR system performed a two-step fast-mode PCR test (95 °C for 0 s and 60 °C for 10 s, ramping rate of 2.2 °C/s, total 60 min) on a 20 μL sample volume with an amplification efficiency of 98.8%. The pRT-qPCR system demonstrates a high amplification efficiency of 95.6% with a slope of −3.43 even with the shorter annealing time and shows only a small difference of about 2 cycles in the CT values compared with the benchtop qPCR system. Note that the smaller PCR mixture volume (10 times less than the benchtop qPCR system) and the higher detection limit of the MAF microscope (2 times that of the fluorescence microscope) increase the CT value in the amplification curve. The remaining solution including the PCR products was extracted from the PoM cartridge by punching and pipetting through the Al thin layer and verified by gel electrophoresis: lane 1, 50 bp DNA ladder; lane 2, PCR products after the pRT-qPCR. The pRT-qPCR system clearly confirms the band of the 112 bp target amplicon in lane 2, with relatively low intensity because of the small extraction volume. The analytical specificity was then evaluated for SARS-CoV-2 RNA and lambda DNA (λ-DNA) with different templates and primer-probe sets (c). The CT values for the matched SARS-CoV-2 RNA and E gene primer-probe set are significantly lower than those for mismatched template and primer-probe combinations. The viral targets are consequently identified without any cross-reaction with interfering reagents. The preoperational trials (n = 40 positive samples, n = 20 NTC) yielded 100%, 90%, and 96.6% sensitivity, specificity, and accuracy, respectively. Note that the high primer concentration required for ultrafast RT-PCR may cause nonspecific amplification in the NTC reaction and reduce the specificity; the concentrations of primer, probe, and polymerase should be further optimized for practical use.
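The threshold and efficiency figures quoted above follow standard real-time PCR practice: the threshold is set at five times the standard deviation of the baseline fluorescence, CT is the cycle at which the curve crosses that threshold, and the amplification efficiency follows from the standard-curve slope as E = 10^(−1/slope) − 1 (a slope of −3.43 corresponds to roughly 96%). The snippet below is an illustrative re-implementation of these textbook definitions with made-up data and an assumed baseline window, not the authors' analysis code.

```python
import numpy as np

def ct_value(fluor: np.ndarray, baseline_cycles: slice = slice(2, 12)) -> float:
    """Fractional cycle threshold with threshold = baseline mean + 5 x baseline std.

    Assumes the curve starts below the threshold (i.e., a flat early baseline).
    """
    base = fluor[baseline_cycles]
    threshold = base.mean() + 5.0 * base.std()
    above = np.nonzero(fluor > threshold)[0]
    if above.size == 0:
        return float("nan")                          # no amplification (e.g., NTC)
    i = int(above[0])
    # Linear interpolation between the last sub-threshold and first supra-threshold cycle.
    frac = (threshold - fluor[i - 1]) / (fluor[i] - fluor[i - 1])
    return (i - 1) + frac + 1                        # +1 because cycles are 1-indexed

def efficiency_from_standard_curve(log10_conc: np.ndarray,
                                   ct: np.ndarray) -> tuple[float, float]:
    """Slope of CT vs log10(concentration) and the derived amplification efficiency."""
    slope, _ = np.polyfit(log10_conc, ct, 1)
    return slope, 10 ** (-1.0 / slope) - 1.0

# Illustrative amplification curve: flat baseline, then growth from cycle 20 onward.
rng = np.random.default_rng(1)
curve = 1.0 + 0.02 * rng.normal(size=40)
curve[19:] += 0.5 * (1.5 ** np.arange(21))
print(f"Ct ~ {ct_value(curve):.1f}")

# Illustrative standard curve (assumed dilution series in log10 copies/mL).
conc = np.array([4.0, 5.0, 6.0, 7.0])
cts = np.array([34.3, 30.9, 27.4, 24.0])
slope, eff = efficiency_from_standard_curve(conc, cts)
print(f"slope = {slope:.2f}, efficiency = {eff:.1%}")
```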
A receiver operating characteristic (ROC) curve analysis was also performed to determine the cutoff point for high classification performance (d). The area under the ROC curve (AUC) is 0.986, comparable to the classification accuracy of a conventional real-time RT-PCR system. The cutoff in CT values was set to 34.5 cycles for the best positive predictive value to distinguish SARS-CoV-2 samples from the NTC, and this value was used for the subsequent clinical test. The ultrafast molecular diagnosis of COVID-19 infection was finally performed to assess the clinical applicability of the pRT-qPCR system. The clinical samples were collected with nasopharyngeal and oropharyngeal swabs from 37 patients infected with COVID-19 and 38 healthy controls. The COVID-19 infection status was confirmed in advance with a conventional benchtop qRT-PCR system after purification of viral RNA from the clinical samples. The clinical diagnostic tests were conducted on 75 randomized samples with the pRT-qPCR system at about 10 min per test (e and Table S2). The CT values of the blinded samples were classified against the cutoff line into true positive (TP), false positive (FP), false negative (FN), and true negative (TN) groups. The positive, negative, and total percent agreements are 87, 95, and 91%, respectively. The cutoff value of 34.5 cycles clearly differentiates the patient samples from the healthy controls with a classification accuracy of over 90% in the clinical diagnostic trials. The two FP samples could be further excluded from a positive diagnosis on the basis of their nonsigmoidal amplification behavior, giving higher specificity. In addition, the ROC curve for the clinical test was constructed with an AUC of 0.943 (f). The optimal cutoff point in the clinical test was determined to be 34.6 cycles for the best predictive values, which closely matches that of the preoperational test. As a result, the pRT-qPCR system simultaneously satisfies the POC requirements of time, cost, and size (Figure S19); it not only performs molecular diagnostics 6 times faster than the conventional benchtop qPCR system but is also 88 times smaller in package size and 41 times lighter. Furthermore, the ultrafast, hand-held system correlates well with the benchtop qPCR system despite the short turnaround time, meets the detailed criteria of target product profiles for POC testing, and even outperforms competitive POC RT-PCR instruments and recently reported plasmonic PCR systems (Tables S3–S5). In summary, this work has successfully demonstrated a decentralized biomedical diagnostic platform using the pRT-qPCR system for nanotechnology-driven ultrafast nucleic acid amplification and microtechnology-driven real-time fluorescence detection. The POC molecular diagnostic system consists of an ultrafast PTC, a cost-effective PoM cartridge, and a compact MAF microscope. The PTC provides a rapid thermal response, with a ramp-up rate of 18.85 °C/s under photothermal heating and a ramp-down rate of 8.89 °C/s under passive cooling, together with direct surface temperature monitoring. The PoM cartridge increases the cooling rate 2.2-fold relative to passive cooling and achieves in situ fluorescence detection without spectral crosstalk during the WLED-driven plasmonic thermocycling.
The MAF microscope captures close-up array fluorescence images of the tiny PCR chamber at a short object distance of 10 mm and, through array image reconstruction, improves the detection limit by 16 times and the SNR by 1.33 times. The fully packaged pRT-qPCR system demonstrates rapid RT-PCR and real-time quantification of COVID-19 within 10 min, with high amplification efficiency (>95%), preoperational classification accuracy (>95%), and total percent agreement in the clinical test (>90%). With further technical development, such as microfluidic designs for larger sample volumes, reagents optimized for ultrafast amplification, and integration with extraction-free sample preparation methods, the pRT-qPCR system has great potential to support the clinical response to the pandemic at the POC level. This decentralized platform can contribute to quality improvement not only in public health but also in primary health care, and can further provide POC testing for other diseases such as hepatitis, hospital-acquired infections, and sexually transmitted diseases. Preparation of PTC and PoM Cartridge The NPS in the PTC was fabricated as previously described. The HSQ (Fox-16, Dow Corning Corp.) was spin-coated at 4000 rpm for 30 s on the PTC to passivate the Pt-RTD. An electrically conductive epoxy adhesive (EO-21, Hightemp, Korea) was used to cover the Pt electrodes at a 100:4 mixing ratio of resin to hardener and was cured at 100 °C for 1 h. All the experimental data were acquired from one PTC, which showed long-term durability of at least 1000 RT-PCR tests without any loss of function. The PTC was integrated into the fully packaged pRT-qPCR system for permanent use, and only the PoM cartridge was used as a disposable consumable. For the PoM cartridge, the Al thin film layer (A1050, LIB cathode foil for electronics, Sama Aluminum Co., Ltd.), the adhesive layer (DCA-93100H, Chemcos Co., Ltd.), and the film seal tape (Optical adhesive covers, Applied Biosystems) were prepared for manual fabrication. The Al layer and the adhesive layer were first adhered to a flat plate to prevent the Al thin film layer from warping and were then precisely aligned with the PP layer. A plasma asher was used for oxygen plasma treatment (270 W, 5 min) after ethanol cleaning to increase the surface hydrophilicity of the PP layer. The ethanol cleaning made the PP surface more hydrophobic by removing impurities. The untreated PP layer had an initial contact angle of 78°, indicating a low surface energy close to hydrophobic; after treatment the contact angle decreased to 22.5° and remained at this level for a month. The long-lasting hydrophilic properties directly influence the PCR efficiency inside the PoM cartridge. The PoM cartridge with the complete surface treatment gives a higher end-point fluorescence intensity after plasmonic PCR than the other conditions. The PoM cartridge thus maintains high biocompatibility for at least one month. Temperature Calibration The surface temperature of the PTC was monitored and determined with an infrared (IR) thermographic camera (E75, FLIR Systems). The temperature reading of the IR camera was first calibrated against a hot plate with surface emissivity correction for the PTC, because the metal films reflect IR radiation. Then, the resistance shift of the Pt-RTD during WLED modulation was calibrated against the average surface temperature over the area of the spiral resistance pattern measured by the IR camera.
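The calibration step just described (mapping the Pt-RTD resistance measured during WLED modulation onto the emissivity-corrected IR-camera temperature) amounts to a linear least-squares fit over the working range. The sketch below illustrates such a fit on synthetic calibration points; the data, noise level, and variable names are assumptions, not the authors' calibration records.

```python
import numpy as np

# Synthetic calibration points: IR-camera temperatures and matching Pt-RTD resistances,
# generated from the nominal model R = 689 ohm * (1 + 441e-6 * T) plus small noise.
temps_c = np.linspace(20, 100, 9)
rng = np.random.default_rng(3)
resist_ohm = 689.0 * (1 + 441e-6 * temps_c) + rng.normal(0, 0.05, temps_c.size)

# Linear least-squares fit R = a * T + b, then recover R0 and the TCR.
a, b = np.polyfit(temps_c, resist_ohm, 1)
r0 = b                      # fitted resistance at 0 deg C
tcr = a / b                 # fractional resistance change per degree
print(f"R0 = {r0:.1f} ohm, TCR = {tcr * 1e6:.0f} ppm/degC")

def temperature_from_resistance(r_ohm: float) -> float:
    """Invert the fitted calibration line to convert resistance to temperature."""
    return (r_ohm - b) / a

print(f"R = 710 ohm -> T = {temperature_from_resistance(710.0):.1f} deg C")
```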
Contact Angle Measurement The degree of surface hydrophilicity or hydrophobicity was measured with a drop shape analyzer (EasyDrop FM40, KRUSS GmbH). The contact angle of a droplet on the Al thin layer or the PP layer was determined as the average of 5 measurements and monitored for one month. Microfabrication of MAF Microscope The MAF microscope was fabricated as previously described. The wafer-level microfabrication of the MAF microscope integrated a metal–insulator–metal (MIM) light absorber and microlens arrays. DNR photoresist (DNR L300-D1, Dong-jin Semichem, Korea) was patterned and then covered with a 5 nm thick chromium (Cr) film. After lift-off of the thin Cr, a 95 nm thick SiO₂ layer was deposited by PECVD. The MIM structure (Cr–SiO₂–Cr) was defined by a 100 nm thick Cr lift-off to strongly absorb visible light and block optical crosstalk between adjacent microlenses. The DNR microlens arrays were then formed on the light absorber by DNR patterning, hydrophobic coating, and thermal reflow. The DNR microlens arrays were optimized with a lens diameter of 300 μm, a focal length of 430 μm, and a pitch of 800 μm to fully image the PCR chamber of the PoM cartridge at a 10 mm object distance. The optical element was diced and placed on CMOS image sensor arrays (CMOS ISA, Sony IMX 219, 3280 × 2464 pixels) with 430 μm thick gap spacers. MAF Image Reconstruction The MAF image was acquired at each PCR cycle and transformed into a single HDR image after the RT-PCR process through a series of image reconstruction steps: array image slicing, image averaging, image subtraction, and image masking. (i) The array fluorescence image captured from the MAF microscope was cropped into 12 single-channel images. (ii) The single-channel images were stacked and merged with the image averaging tool in Chasys Draw IES (ver. 5.23.01). (iii) The difference between the merged image after plasmonic PCR and that of the initial state was taken to reduce the background noise. (iv) The reconstructed image was finally obtained by image masking, extracting the target fluorescence intensities from only the PCR chamber of the PoM cartridge. (v) The reconstructed image of every PCR cycle was converted into a numerical value as the sum of its pixel intensities. All the image reconstruction steps except the image averaging were performed within a minute using MATLAB. Device Configuration of pRT-qPCR System The WLED (LUXEON 7070, L170-5080701200000, Lumileds) was used as the photothermal excitation source, with a power density of 168 mW/mm², and the BLED (LXML-PR02-A900, Lumileds) as the fluorescence excitation source. The excitation bandpass filter (ET480/40x, Chroma Technology Corp.) was diced to 7.5 mm × 5 mm and arranged perpendicular to the BLED. The angle of incidence was set to 30° to excite the TaqMan probes in the PoM cartridge at the short object distance. The emission bandpass filter (ET520/20m, Chroma Technology Co.) was diced to 7.5 mm × 7.5 mm to cover all the microlens arrays and precisely placed at the front glass window of the MAF microscope. The movable tray (Al alloy, AL6061), with the PTC and the PoM cartridge inserted, was moved along the rail by hand. At complete insertion, the movable tray contacted the ball-plunger switch (FBPJS5, Woojin LM Bearing), and the electrodes of the PTC were contacted by a C-clip contact (PMT-1048, KH Electronics, Inc.).
The single-board computer (Raspberry Pi 4 Model B, Raspberry Pi Foundation) managed all operations, including the pulse-width modulation of the WLED and BLED, the resistance-based temperature reading, and the array fluorescence imaging. A cooling fan was attached to the Raspberry Pi for air circulation. The embedded software was controlled through the touch panel of the liquid crystal display (5 in. capacitive touch display for Raspberry Pi, 800 × 480, Waveshare Electronics). All optoelectronic components were encapsulated in a custom-made plastic housing (acrylonitrile butadiene styrene, ABS). Preparation of Plasmonic PCR Mixtures The plasmonic PCR mixture, containing 4 μL of target viral RNA, 4 μL of 5× DF reaction buffer (optimized blend of Mg²⁺, dUTP, and dNTP, DirectFast qRT-PCR kit, NanoHelix Co., Ltd.), 2 μL of 10× DF enzyme buffer (optimized blend of reverse transcriptase, antibody-coupled Taq DNA polymerase, RNase inhibitor and heat-labile uracil-DNA-glycosylase, DirectFast qRT-PCR kit, NanoHelix Co., Ltd.), 1 μL of forward primer (100 μM, E_Sarbeco_F1, Koma Biotech Inc.), 1 μL of reverse primer (100 μM, E_Sarbeco_R2, Koma Biotech Inc.), 1 μL of TaqMan probe (10 μM, E_Sarbeco_P1, Koma Biotech Inc.), and 2 μL of bovine serum albumin (10 μg/μL BSA, Sigma-Aldrich), was brought to a final volume of 20 μL with RNase-free water. The NTC was prepared by substituting 1 μL of RNase-free water for the target RNA in the PCR mixture. The RT process time was set to 210 s as a minimum condition for ultrafast molecular diagnosis. For more efficient amplification of target sequences, it is recommended that the hold times for the RT, annealing, and extension steps be increased sufficiently. The λ-DNA (48,502 bp, Roche Applied Science) was prepared with forward and reverse primers (synthesized oligonucleotides targeting a 98 bp sequence, Bioneer Inc.) and a TaqMan probe (synthesized oligonucleotide, BIONICS) for the analytical specificity evaluation. A 1 μL aliquot of the PCR mixture containing the target amplicons after pRT-qPCR was extracted through the Al thin layer of the PoM cartridge with a pipet and loaded onto a 2% agarose gel after mixing with 9 μL of RNase-free water, 1.5 μL of 10× sample loading buffer (Takara Bio Inc.), and SYBR Safe DNA gel stain (BioAssay Co., Ltd.). The gel electrophoresis was performed with a Mupid-2plus system (Takara Bio Inc.) using a GeneRuler 50 bp DNA ladder (Thermo Fisher Scientific Inc.). Clinical Diagnostic Test The nasopharyngeal and oropharyngeal samples were provided by the U2 Clinical Laboratories (Jangwon Medical Foundation). Viral RNA was extracted and purified with the QIAamp DSP Viral RNA Mini Kit (QIAGEN). Conventional RT-qPCR tests targeting the E gene, the open reading frame 1 (ORF1) gene, and an internal control (ribonuclease P; RNaseP) were performed with a STANDARD M nCoV real-time detection kit (SD Biosensor) for the clinical diagnosis of COVID-19, and the remaining samples were collected for the clinical diagnostic test of the pRT-qPCR system. The patient samples and healthy controls were given identification numbers after randomization and delivered under single-blind conditions. The pRT-qPCR system performed E-gene-targeted clinical diagnostic tests and an internal control on each PoM cartridge. Statistical Analysis All of the data are described as means ± standard deviations from at least five experiments. The threshold line in the amplification curve was determined as five times the standard deviation of the background values.
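The ROC analysis, cutoff selection, and percent-agreement statistics reported in the Results can be reproduced with standard tools: since a lower CT indicates a stronger positive, the negated CT serves as the classifier score, a cutoff can be chosen by maximizing Youden's J (one common criterion), and positive/negative/overall percent agreement follow from the resulting confusion matrix. The snippet below is a generic sketch on made-up data, not the authors' analysis script; the CT distributions and cutoff rule are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, confusion_matrix

rng = np.random.default_rng(7)
# Hypothetical Ct values: positives cluster low, negatives/NTC cluster high (assumed data).
ct_pos = rng.normal(28, 3, 40)
ct_neg = rng.normal(38, 2, 20)
ct = np.concatenate([ct_pos, ct_neg])
labels = np.concatenate([np.ones(40, dtype=int), np.zeros(20, dtype=int)])

# Lower Ct = more positive, so use -Ct as the score.
fpr, tpr, thresholds = roc_curve(labels, -ct)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Cutoff maximizing Youden's J = TPR - FPR (one possible definition of "best" cutoff).
j = tpr - fpr
ct_cutoff = -thresholds[np.argmax(j)]
print(f"Ct cutoff = {ct_cutoff:.1f} cycles")

# Percent agreement at that cutoff (prediction: positive if Ct <= cutoff).
pred = (ct <= ct_cutoff).astype(int)
tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
print(f"PPA = {tp / (tp + fn):.1%}, NPA = {tn / (tn + fp):.1%}, "
      f"OPA = {(tp + tn) / labels.size:.1%}")
```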
Distribution of soil microorganisms in different complex soil layers in Mu Us sandy land
Soil bacteria are the most abundant group of soil microorganisms, accounting for about 80% of the total, and they are rich in functional types. Because of their high adaptability, small size, large numbers and large surface area, bacteria constitute the largest living surface in the soil; they are therefore its most active living component and continuously exchange matter with their surroundings. They can decompose organic residues in the soil, participate in the transformation of soil nutrients, and are key organisms in the material cycling and energy flow of the ecosystem. With increasing human disturbance of the soil, for example changes in soil management, fertilizer application and planting pattern, such disturbance has been found to strongly influence the structure, diversity, and even function of the soil microbial community. However, the trends in the soil bacterial community of the Mu Us Sandy Land, an area with development potential, still need further exploration, which would provide a theoretical basis for increasing the national cultivated land area and improving its quality. Microbes are highly sensitive to environmental conditions. The quantity, composition and activity of microbes during sandy soil improvement can be greatly influenced by the improvement method used. Many domestic and foreign scholars have studied the effects of agricultural use patterns on the soil quality of sandy farmland in different experimental areas, indicating that under a reasonable land use pattern, soil carbon and nitrogen storage can improve the microbial quality of the region. We believe that long-term land use changes soil bacterial community structure in specific ways. The migration and interaction of microbial communities in the early stage of sandy land consolidation provide a unique environment for the development of soil ecosystems. Liu et al. found that conservation tillage and fine management of irrigated farmland were beneficial to soil environment improvement and ecosystem restoration in sandy land. Su et al. showed that after desert sandy land was reclaimed into farmland, soil fertility and the microbial community improved significantly with increasing reclamation years, although soil fertility in that area remained at a low level. Moreover, some research has been done on the improvement of sandy soil with artificial sand walls, which can increase the number of microbes and the activity of urease. The Mu Us Sandy Land lies in a semiarid region of northern China. Because of the shortage of surface water resources, low vegetation cover, susceptibility to human activities, and severe soil erosion, the ecology of the Mu Us Sandy Land is fragile. He et al. used an engineering measure to improve the sandy land, reporting that soft rock is a loose rock widely distributed in the Mu Us Sandy Land and that mixing it with aeolian sandy soil can significantly improve the water and fertilizer retention capacity of the sand. Moreover, soft rock becomes as soft as mud when it comes into contact with water, which can improve the physical and chemical properties of the sand and crop production, and also increase the colloid content of the sand.
Therefore, applying soft rock to improve the wind-blown sand of the Mu Us Sandy Land can not only improve soil moisture and fertility but also increase the area of arable land and improve crop production, meeting the material needs of improving the Mu Us Sandy Land while keeping the ecological environment sustainable. Under the same site conditions, soil microorganisms show vertical variation with soil depth. Liu et al. believed that bacterial community composition in desert areas is highly stratified and that surface soil microorganisms are greatly disturbed. Du et al. found that the number and diversity of microorganisms decreased with increasing soil depth and that there were both shared and layer-specific microbial groups in each soil layer. It can be seen that soil depth has a clear effect on the gradient distribution of microorganisms. The improvement of sandy land with soft rock is an engineering measure that organically reconstructs the sandy land and produces a layered structure. Studying the influence of soft rock on soil bacteria in sandy land is of great significance for revealing the response mechanism of the belowground microbial community to engineering measures and for developing measures to improve soil quality in sandy land. However, previous studies on soft rock and sand compound soil mainly focused on its physical structure and chemical properties, and there are few reports on the differences in soil bacterial community structure and its driving factors during soil development. Therefore, our research objectives are to (1) clarify the nutrient gradient changes of the soil (soft rock and sand compound soil) after sandy land improvement, (2) reveal the layered structure of the bacterial community in the compound soil, and (3) elucidate the soil factors that regulate bacterial community structure in the compound soil. Overview of the test site The experimental area of soft rock-sand composite soil was situated in the Mu Us Sandy Land (E109°28′58″-109°30′10″, N38°27′53″-38°28′23″) in Yuyang District of Yulin City, in northwestern Shaanxi. The experimental region has a typical middle temperate semiarid continental monsoon climate, characterized by rainfall that is irregular in time and space, dry weather, a long winter and short summer, four distinct seasons and abundant sunlight. The average annual temperature was 8.1 °C, the average annual frost period lasted 154 days, the average annual rainfall was 413.9 mm, and 60.9% of the precipitation fell in June-September. The average number of sunshine hours per year was 2879, and the sunshine percentage was 65%. The soil type of the project area was mostly sand. Experiment design The experimental field was used to simulate the soil conditions of a mixture of soft rock and sand in the Mu Us Desert. In experimental plots of 5 m × 12 m (60 m²), the chosen volume ratios of soft rock to sand (0:1, 1:5, 1:2, 1:1) were each applied in three replicates, and CK, P1, P2 and P3 were used to represent these four volume ratios in turn. The field trial was carried out annually, with planting starting in mid-April and harvest from mid- to late September. Manual planting was used throughout.
During the cultivation years, chemical fertilizers were applied to promote crop growth and the accumulation of root exudates, and thereby to promote microbial metabolic activity and the increase of nutrient content in the soft rock and sand compound soil. The experimental fertilizers were urea, diammonium phosphate and potassium chloride, applied at annual rates of N 300 kg ha⁻¹, P₂O₅ 375 kg ha⁻¹ and K₂O 180 kg ha⁻¹. Soil sample collection After the potatoes were harvested in September 2020, soil samples from 0 to 60 cm (0–30 cm, 30–60 cm) were taken from each plot. In every plot, three composite soil samples were taken, each pooled from five sampling points. After removal of plant and animal residues, each sample was divided into two portions: one was air-dried naturally and passed through 1 mm and 0.149 mm sieves, and the other was kept in a freezer at -80 °C for microbiological analysis. Determination of soil physical and chemical properties Soil organic carbon (SOC) was determined by the potassium dichromate external heating method. Total nitrogen (TN) was determined by Kjeldahl digestion, available phosphorus (AP) by the molybdate blue colorimetric method, and available potassium (AK) by atomic absorption spectrometry. NO₃⁻-N and NH₄⁺-N were extracted at a ratio of 10 g fresh soil to 100 mL of 2 M KCl; after shaking for 1 h, the extracts were filtered and analyzed for NO₃⁻-N and NH₄⁺-N with a continuous flow analytical system (San++ System, Skalar, Holland). pH was measured with a pH meter (PHS-3E, INESA, China) at a soil-to-water ratio of 1:5. Soil DNA extraction and sequencing The E.Z.N.A.® Soil DNA Kit (Omega, Inc., USA) was used to extract total DNA from the soil samples. The concentration and purity of the DNA were measured with the Nitragon 2000 spectrophotometer, and the quality was checked by 1% agarose gel electrophoresis. PCR amplification was performed with the V3-V4 region specific primers 338F (5'-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3') on the total microbial DNA of every soil specimen. Fluorescence quantitative PCR amplification Fluorescence quantitative PCR was carried out with the same primers as the high-throughput sequencing described above. Amplification was performed with a fluorescence PCR instrument (Applied Biosystems, USA). Three replicates were run for each specimen, and the final gene abundance was calculated per unit dry mass of soil. Data processing and analysis The experimental data were analyzed with SPSS 20.0 for analysis of variance. Raw reads were obtained by sequencing. The quality of the high-throughput sequencing raw data was checked with FastQC, and low-quality reads were filtered out. Then, FLASH (V1.2.7) was used for assembly and QIIME for re-filtering. Sequences were clustered into operational taxonomic units (OTUs) at 97% similarity to ensure that the most informative data were retained. Community composition was analyzed with QIIME (version 1.9.1) to obtain the bacterial community composition and relative abundance of each sample at different taxonomic levels and to produce relative abundance plots at the phylum and genus levels. Rarefaction curve analysis of the OTUs was performed with QIIME (version 1.9.1), and species diversity indices were calculated.
The β diversity analysis is based on the between-sample distance matrix and reflects the differences among samples. The principal component analysis (PCA) plot of soil bacterial community structure was drawn using the R language (version 3.3.1). The redundancy analysis (RDA) between bacterial community composition and environmental factors was conducted using Canoco. Spearman correlation coefficients were used to analyze the correlations between environmental factors and species, and the heatmap was drawn with the aid of R. The correlations between species were analyzed with a network analysis.
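The species diversity indices mentioned above (Chao richness, Shannon diversity, and the coverage values reported in the Results) have simple closed-form definitions that can be computed directly from an OTU count vector. The following is a minimal, generic sketch using the textbook formulas; it is not the QIIME output used in this study, and the example counts are made up.

```python
import numpy as np

def shannon(counts: np.ndarray) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero OTUs."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts: np.ndarray) -> float:
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1))."""
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())          # singletons
    f2 = int((counts == 2).sum())          # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

def goods_coverage(counts: np.ndarray) -> float:
    """Good's coverage C = 1 - F1/N (fraction of reads from observed OTUs)."""
    return 1.0 - (counts == 1).sum() / counts.sum()

# Made-up OTU count vector for one sample.
otu_counts = np.array([120, 85, 60, 40, 22, 10, 5, 3, 2, 2, 1, 1, 1, 1])
print(f"Shannon = {shannon(otu_counts):.2f}, "
      f"Chao1 = {chao1(otu_counts):.1f}, "
      f"coverage = {goods_coverage(otu_counts):.1%}")
```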
Soil physical and chemical properties The SOC content in the 0–30 cm soil layer was higher in the P1 and P2 treatments, and P2 significantly increased the SOC content by 59.74% compared with the P1 treatment. The TN content of the P1 treatment was higher in the 0–30 cm soil layer, and there was no gradient difference among the other treatments. This is because crop roots are mainly distributed in the 0–30 cm soil layer, and root residues and their exudates increase the nutrient content. The AP content did not differ significantly between soil layers in any treatment, and in the 30–60 cm soil layer the AP content of the P3 treatment was significantly higher than that of the other treatments. The AK content of the P3 and CK treatments was higher in the 30–60 cm soil layer, where it was 27.75% and 40.14% higher than in the surface layer, respectively. Because the soil structure becomes loose, porous and cementitious as the proportion of soft rock increases, and all layers below 30 cm are sand, soil nutrients migrate downward through leaching. At the same time, the AK content of the P1 treatment was higher in both the 0–30 cm and 30–60 cm layers.
The contents of NO₃⁻-N and NH₄⁺-N in the 0–30 cm soil layer showed no significant differences among the P1, P2 and P3 treatments but were higher than in the CK treatment. In the 30–60 cm soil layer, the contents of NO₃⁻-N and NH₄⁺-N were higher in the P3 treatment, and the increase was significant compared with the CK treatment. pH did not differ significantly among treatments or soil layers. The results of the two-factor test showed that compound ratio, soil layer and their interaction all had significant effects on SOC, TN, AP, AK, NO₃⁻-N and NH₄⁺-N, indicating that they acted jointly on soil properties, but had no effect on pH. Kang et al. showed that the spatial structure of the soil and the thickness of the different soil layers had significant effects on soil nutrients, which is similar to the results of this study. 16S rRNA gene abundance of soil bacteria According to real-time fluorescence quantitative PCR analysis, the 16S rRNA gene abundance of the four compound soils ranged from 0.03 × 10⁹ to 0.21 × 10⁹ copies g⁻¹ dry soil. In the 0–30 cm soil layer, the bacterial gene copy number was largest in the P1 treatment, which was significantly higher, by 65.77%–512.56%, than in the other treatments. The gene copy number of the P3 treatment was also relatively high, and there was no significant difference between the P2 and CK treatments. In the 30–60 cm soil layer, there was no significant difference in gene copy number between the P1 and P3 treatments, but both were significantly higher than in the CK and P2 treatments. Only in the P3 treatment did the 30–60 cm soil layer differ significantly from the 0–30 cm soil layer. The results of the two-factor test showed that the compound ratio, the soil layer and their interaction all had significant effects on the bacterial gene copy number, indicating that both the compound ratio and the thickness of the constructed soil layers have an important influence on bacterial abundance. Bacterial community composition The bacterial community composition of the compound soil in the different soil layers was examined at the phylum level, and the 12 most abundant phyla are shown; phyla with relative abundance below 0.01 were grouped as Others. The results of Steven et al. showed that soil type did not affect the diversity of subsurface soil microbial communities. In this study, the three dominant phyla in the different soil layers were Phylum Actinobacteriota, Phylum Proteobacteria, and Phylum Chloroflexi. In many wetlands, Phylum Proteobacteria has the highest relative abundance because of its strong adaptability to the environment. In this study Phylum Actinobacteriota had the highest abundance, followed by Phylum Proteobacteria, indicating that Phylum Proteobacteria is abundant in both dryland and wetland soils. Compared with CK, the compound ratio treatments increased the relative abundance of Phylum Actinobacteriota in both soil layers. In the 0–30 cm and 30–60 cm soil layers, the P1 and P3 treatments increased its relative abundance by 46.49% and 44.35%, respectively. Compared with CK, the relative abundance of Phylum Proteobacteria in the compound soils showed a decreasing trend in both soil layers, and it decreased significantly under P1 and P2 in both the 0–30 cm and 30–60 cm layers.
Compared with CK, the compound ratio treatments increased the abundance of Phylum Chloroflexi in both the 0–30 cm and 30–60 cm soil layers, with the greatest increases, of 47.26% and 58.39% respectively, in the P2 treatment. Compared with the CK treatment, the soft rock treatments reduced the relative abundance of the non-dominant phyla Firmicutes, Bacteroidota, Cyanobacteria, Patescibacteria and Verrucomicrobiota, but increased the abundance of Phylum Acidobacteriota and Phylum Myxococcota. It can be seen that, at the phylum level, the bacterial species composition of the soft rock and sand compound soil does not show obvious clustering on the vertical scale. At the genus level, the differences among treatments increased and there were more genera specific to each soil layer. In the 0–30 cm soil layer, the dominant genera in CK were Arthrobacter (6.33%), norank_f__JG30-KF-CM45 (5.48%), and Lysobacter (4.54%). The dominant genera in P1 were Arthrobacter (12.75%), norank_f__JG30-KF-CM45 (3.39%), and Bacillus (2.25%). The dominant genera in P2 were Arthrobacter (11.93%), norank_f__JG30-KF-CM45 (3.96%), and norank_f__norank_o__norank_c__KD4-96 (3.18%). The dominant genera in P3 were Arthrobacter (10.36%), norank_f__JG30-KF-CM45 (3.33%), and Bacillus (2.25%). With increasing soil depth, Arthrobacter and norank_f__JG30-KF-CM45 remained the two dominant genera with high abundance, while the other dominant genera differed considerably. In the 30–60 cm soil layer, Microvirga (CK, 2.95%), Domibacillus (P1, 3.50%), norank_f__norank_o__norank_c__KD4-96 (P2, 3.07%), and Brucella (P3, 4.05%) were the most abundant. This difference may be caused by the nutrient gradient of the compound soil or by differences in adaptability to the new compound-soil environment. Bacterial α diversity Coverage reflects the sequencing coverage of the sample library; the higher the value, the greater the probability that the sequences present in the sample were detected. The Coverage values in this study were all greater than 97%, indicating that the sequencing results are highly reliable and cover most of the sequence information in the samples. The Chao and ACE indices represent the richness of the bacterial community; the higher the value, the more species the community contains. The results showed that the Chao index did not differ significantly between soil layers, although it was larger in the 0–30 cm layer. In the 0–30 cm soil layer, the Chao index increased significantly under the P1 and P3 treatments, while in the 30–60 cm soil layer it increased significantly under the P1 and P2 treatments. The ACE index followed the same trend as the Chao index. The Shannon index represents the diversity of the bacterial community. The results showed that the addition of soft rock promoted an increase in bacterial diversity in the sandy soil, although there was no significant difference among treatments, and the increase in diversity was greater in the surface layer than in the bottom layer. This may be because the clay minerals of the soft rock were added to the surface soil, which is strongly affected by the soil parent material.
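The two-factor tests referred to above (compound ratio × soil layer effects on the soil properties and on the gene copy number) correspond to a standard two-way ANOVA with interaction. The sketch below shows such a test with statsmodels on an invented long-format table; the column names and values are assumptions and this is not the SPSS analysis actually performed in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(11)
ratios = ["CK", "P1", "P2", "P3"]
layers = ["0-30", "30-60"]
rows = []
# Invented SOC values: 3 replicates per ratio x layer combination.
for ratio in ratios:
    for layer in layers:
        base = {"CK": 2.0, "P1": 3.0, "P2": 4.5, "P3": 3.5}[ratio]
        shift = 0.0 if layer == "0-30" else -0.8
        for _ in range(3):
            rows.append({"ratio": ratio, "layer": layer,
                         "soc": base + shift + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: SOC ~ ratio + layer + ratio:layer.
model = ols("soc ~ C(ratio) * C(layer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```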
Bacterial community β diversity The PCA results showed that CK was clearly separated from the other treatments along the PC1 axis, with the other samples located to its right. In the 0–30 cm soil layer, the PC1 and PC2 axes explained 20.91% and 17.40% of the total variation, respectively, and the P1 and P3 soil samples were relatively close to each other. In the 30–60 cm soil layer, PC1 and PC2 explained 21.34% and 17.19% of the total variation, respectively, and the short distance between the P1 and P2 samples indicated that their bacterial community composition was similar. Among the four treatments, P1 and P3 had similar community structures in the 0–30 cm soil layer, because their bacterial diversity and richness changed in the same way and the abundance of Phylum Myxococcota was higher in the P1 and P3 treatments than in P2. In the 30–60 cm soil layer, the community structures of the P1 and P2 treatments were similar, because the species composition and abundance of the two treatments did not differ significantly, resulting in a high degree of similarity between them. Bacteria show the highest diversity and the most stable community structure in moderately alkaline soil, but small changes in pH may lead to the formation of different community structures. In this study, the pH values of P1 and P3 in the 0–30 cm soil layer were essentially the same, as were those of P1 and P2 in the 30–60 cm soil layer, which is consistent with the similarity of their community structures. The relationship between soil properties and bacterial communities In the 0–30 cm soil layer, the RDA1 and RDA2 axes explained 54.08% and 14.75% of the variation, respectively, or 68.83% in total. AK, SOC and AN had the greatest influence on bacterial community composition, followed by NN, AP and TN, while pH had the least effect. In the 30–60 cm soil layer, the RDA1 and RDA2 axes explained 60.25% and 17.45% of the variation, respectively, or 77.70% in total. Among the environmental factors, AK, TN and NN had the greatest influence on the bacterial community composition of the soil samples, AN, SOC and pH the second greatest, and AP the least. However, the study of Kong et al. was inconsistent with these results, arguing that pH was the main factor affecting the bacterial community structure of surface soil and that ammonium nitrogen was the main factor affecting the bacterial community of deep soil, because deep soil is more stable while surface soil is susceptible to temperature, humidity and human activity. The RDA results showed that the AK content had a large influence on bacterial community composition in both soil layers, confirming that soil properties and microbial communities change synergistically during sandy land improvement. Heat map of the correlation between soil properties and bacterial communities The top 15 genera by relative abundance were selected for correlation analysis with the soil properties. In the 0–30 cm soil layer, the soil properties had a strong influence on the bacterial community.
Existing studies have suggested that the main nutrient sources of soil bacteria are root exudates and litter, and that the quality and quantity of the nutrients that roots and litter provide for microorganisms differ, resulting in different soil bacterial community compositions under different treatments . Genus Streptomyces (belonging to Phylum Actinobacteria ) was significantly negatively correlated with soil pH and positively correlated with TN and AK. Genus Gaiella (belonging to Phylum Actinobacteria ) was positively correlated with SOC and AN, and was significantly different with respect to AK. Genus norank_f__norank_o__norank_c__gitt-gs-136 was significantly positively correlated with SOC. Genus Arthrobacter (belonging to Phylum Actinobacteria ) and Genus norank_f__Gemmatimonadaceae were significantly positively correlated with AK. Genus norank_f__Roseiflexaceae showed a significant positive correlation with NN and AN, and an extremely significant positive correlation with AK. Genus Blastococcus (belonging to Phylum Actinobacteria ) was significantly positively correlated with AN. The dominant genus norank_f__norank_o__norank_c__KD4-96 had a significant positive correlation with SOC ( ). In the 30–60 cm soil layer, Genus Arthrobacter (belonging to Phylum Actinobacteria ) was positively correlated with AK, Genus Brucella (belonging to Phylum Proteobacteria ) was significantly negatively correlated with TN, and Genus Sphingomonas (belonging to Phylum Proteobacteria ) was significantly positively correlated with TN ( ). It can be seen that Phylum Actinobacteriota was the leading dominant group in the compound soil, with high relative abundance in each soil layer, and played a major role in soil nutrient supply, which is consistent with its preference for neutral to alkaline soil. The SOC content in the 0–30 cm soil layer was higher in the P1 and P2 treatments, and P2 increased the SOC content significantly, by 59.74%, compared with the P1 treatment. The TN content of the P1 treatment was higher in the 0–30 cm soil layer, and there was no gradient difference among the other treatments. This is because crop roots are mainly distributed in the 0–30 cm soil layer, and root residues and their exudates increase the nutrient content . The AP content of all treatments showed no significant difference across the soil layers, and in the 30–60 cm soil layer the AP content of the P3 treatment was significantly higher than that of the other treatments. The AK content of the P3 and CK treatments was higher in the 30–60 cm soil layer, exceeding the surface layer by 27.75% and 40.14%, respectively. Because the soil structure becomes loose and porous, with more cementing material, as the proportion of soft rock increases, and all layers below 30 cm are sand, soil nutrients migrate downward through leaching . At the same time, the AK content of the P1 treatment was higher in both the 0–30 cm and 30–60 cm layers. The NO3−-N and NH4+-N contents in the 0–30 cm soil layer did not differ significantly among the P1, P2 and P3 treatments, but were higher than those of the CK treatment. In the 30–60 cm soil layer, the NO3−-N and NH4+-N contents were higher in the P3 treatment, and the increase was significantly greater than in the CK treatment. pH did not differ significantly among treatments or soil layers ( and ).
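As an illustration of the correlation analysis behind the heat map described above, the following Python sketch computes Pearson correlation coefficients and p-values between genus-level relative abundances and soil properties. The abundance values, sample size and property ranges are hypothetical and serve only to show the shape of the calculation, not the data of this study.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 12  # hypothetical number of soil samples

# Hypothetical relative abundances (%) of a few genera and soil properties
genera = pd.DataFrame({
    "Arthrobacter": rng.uniform(5, 13, n),
    "Streptomyces": rng.uniform(0.5, 3, n),
    "Gaiella": rng.uniform(0.5, 2, n),
})
soil = pd.DataFrame({
    "SOC": rng.uniform(2, 8, n),
    "TN": rng.uniform(0.2, 0.8, n),
    "AK": rng.uniform(60, 160, n),
    "pH": rng.uniform(7.8, 8.6, n),
})

# Pearson r and p for every genus x property pair
r_mat = pd.DataFrame(index=genera.columns, columns=soil.columns, dtype=float)
p_mat = r_mat.copy()
for g in genera.columns:
    for s in soil.columns:
        r, p = pearsonr(genera[g], soil[s])
        r_mat.loc[g, s], p_mat.loc[g, s] = r, p

print(r_mat.round(2))
print(p_mat < 0.05)  # flag the significant pairs, as marked on a heat map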
The results of the two-factor test showed that the compound ratio, the soil layer and their interaction all had significant effects on SOC, TN, AP, AK, NO3−-N and NH4+-N, indicating that they acted jointly on soil properties, but had no effect on pH. Kang et al . showed that the spatial structure of the soil and the thickness of different soil layers had significant effects on soil nutrients, which is similar to the results of this study. Fluorescence real-time quantitative PCR analysis showed that the abundance of bacterial 16S rRNA genes in the four compound soils ranged from 0.03×10^9 to 0.21×10^9 copies g−1 dry soil ( ). In the 0–30 cm soil layer, the bacterial gene copy number was largest in the P1 treatment, being significantly increased by 65.77%–512.56% compared with the other treatments. The gene copy number of the P3 treatment was also relatively large, and there was no significant difference between the P2 and CK treatments. In the 30–60 cm soil layer, there was no significant difference in gene copy number between the P1 and P3 treatments, but both were significantly higher than in the CK and P2 treatments. The 30–60 cm soil layer differed significantly from the 0–30 cm soil layer only in the P3 treatment. The two-factor test showed that the compound ratio, the soil layer and their interaction all had significant effects on the bacterial gene copy number, indicating that both the compound ratio and the thickness of the constructed soil layer had an important influence on bacterial abundance. The bacterial community composition of the compound soil in the different soil layers was examined at the phylum level, and the results show the abundance of the top 12 phyla; taxa with a relative abundance below 0.01 were grouped as "others" ( ). Steven et al . reported that soil type did not affect the diversity of subsurface soil microbial communities. In this study, the three dominant phyla in the different soil layers were Phylum Actinobacteriota , Phylum Proteobacteria and Phylum Chloroflexi . In many wetlands, Phylum Proteobacteria has the highest relative abundance because of its strong adaptability to the environment . In this study Phylum Actinobacteriota had the highest abundance, followed by Phylum Proteobacteria , indicating that Phylum Proteobacteria is highly abundant in both dryland and wetland soils. Compared with CK, the compound ratio treatments increased the relative abundance of Phylum Actinobacteriota in both soil layers; in the 0–30 cm and 30–60 cm soil layers, the P1 and P3 treatments increased the relative abundance by 46.49% and 44.35%, respectively. Compared with CK, the relative abundance of Phylum Proteobacteria in the compound soils showed a decreasing trend in both soil layers, and was decreased significantly by the P1 and P2 treatments in both the 0–30 cm and 30–60 cm layers.
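The two-factor tests reported above can be sketched as a two-way ANOVA. The following Python example, using statsmodels, tests the main effects of compound ratio and soil layer plus their interaction on 16S rRNA gene copy number; the copy-number values and replicate count are hypothetical and are included only to show the structure of the test, not to reproduce the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Hypothetical 16S gene copy numbers (x1e9 copies per g dry soil),
# three replicates per compound ratio x soil layer combination
rows = []
for ratio in ["CK", "P1", "P2", "P3"]:
    for layer in ["0-30", "30-60"]:
        for _ in range(3):
            rows.append({"ratio": ratio, "layer": layer,
                         "copies": rng.uniform(0.03, 0.21)})
df = pd.DataFrame(rows)

# Two-factor ANOVA: ratio, layer and ratio x layer interaction
model = ols("copies ~ C(ratio) * C(layer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

The ANOVA table printed at the end lists the sum of squares, F statistic and p-value for each factor and for the interaction term, which is the form in which such two-factor results are usually reported.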
The soil nutrient content and microbial diversity of the Mu Us Sandy Land can be increased effectively by combining soft rock with sand through land engineering measures. Soil physical and chemical properties differed significantly among compound ratios and soil layers: soil organic carbon and total nitrogen contents were higher in the surface layer, whereas available phosphorus and available potassium contents were higher in the bottom layer. In the different soil layers, the three dominant phyla of the compound soil were the same, namely Phylum Actinobacteriota , Phylum Proteobacteria and Phylum Chloroflexi , and they showed no obvious clustering on the vertical scale. Among them, Phylum Actinobacteriota was the most closely related to soil nutrient supply. With increasing soil depth, more endemic genera appeared in the soil. Bacterial diversity and community structure were greater in the 0–30 cm layer under the 1:5 and 1:1 compound soils, and in the 30–60 cm layer under the 1:5 and 1:2 compound soils. Soil factors are the main drivers of the spatial distribution of soil microorganisms; available potassium, organic carbon, ammonium nitrogen, total nitrogen and nitrate nitrogen were the main factors driving the differentiation of microbial community structure under the different mixing ratios and soil layers. These results have practical significance for the reclamation of sandy land and the expansion of cultivated land resources, and the improvement of the comprehensive properties of aeolian sandy soil provides a theoretical basis for the further development of green agriculture and carbon emission reduction. In future work, the authors will continue to study the function and metabolism of microorganisms in sandy land and carry out the isolation and identification of relevant carbon-fixing microorganisms.
S1 Table Analysis of soft rock and sand compound soil physical and chemical properties. (XLSX)
Assessment of digital light processing (DLP) projector stimulators for visual electrophysiology
4c416e94-75e6-4b57-addd-e0d5b46271fa
10082110
Physiology[mh]
Visual display units (VDUs) are essential devices in visual electrophysiology for presenting structured visual stimuli. Typically, VDUs generate patterned or multifocal stimuli for clinical visual evoked potential (VEP), pattern electroretinogram (PERG) or multifocal electroretinogram (mfERG) recordings, for which there are international standards . There are prescribed technical requirements for such VDUs to ensure that they have sufficient luminance, contrast and colour properties, alongside appropriate temporal characteristics. These precise measurements ensure that the recorded physiological potentials are predictable and reproducible. There is currently a widespread deficit of adequate commercially available VDUs. Many widely used stimulators are obsolete, and many of those available are unsuitable for visual electrophysiology testing. For example, many centres use cathode ray tube (CRT) stimulators despite these being obsolete and their parts no longer being manufactured, with older models requiring frequent calibration. At the authors' institution, plasma display panels (PDPs) are used, but these are similarly obsolete and rarely produced. Modern VDUs such as liquid crystal display (LCD) screens or organic light emitting diode (OLED) displays can be largely unsuitable for electrophysiology testing. LCD displays unfortunately succumb to a detrimental transient luminance artefact with each pattern element shift to on- or off-states, which in some circumstances can be minimised using low contrast or in-built luminance adjustments, but these adjustments are often not adequate for testing and risk being non-compliant with ISCEV standards . OLED displays are potential solutions to this issue; however, many suffer a detrimental input lag jitter due to resampling of the incoming trigger and so risk desynchronisation of the recorded response. Digital light processing (DLP) laser projectors were first developed for the defence industry before being widely used within digital cinema . Developments in technology now mean that these devices are commercially available for personal use and can be used with an ultra-short throw ratio, so they do not require the large projection distances originally needed for large field sizes. DLP laser projectors involve projection of a light source, the laser, onto a digital micromirror device (DMD). The DMD comprises thousands of tiny micromirrors that can be individually switched into on- or off-states at a rapid rate . Each mirror on the DMD represents a pixel, reflecting the light either onto a light absorber or toward a projection lens. The light is typically passed through a high-speed colour wheel to achieve a wide range of chromaticities, followed by optical correction for the projection screen. The resultant screen can have very high resolution, luminance, temporal refresh rate and appreciable field size, making it a candidate to replace obsolete VDUs in visual electrophysiology. The purpose of this study was to assess DLP laser projectors for their suitability for pattern visual electrophysiology tests, from both photometric and physiological perspectives. This study comprised two major sections. The first stage assessed the stimulus calibration and properties of two individual DLP laser projection systems. The second stage compared PVEPs and PERGs recorded from a group of seven healthy subjects using a DLP laser projector against those recorded using PDP stimulators already established within our centre.
Two DLP laser projectors were assessed within this study (HiSense model 100L5FTUK and Viewsonic model LS831WU). Both devices had an ultra-short throw projection ratio, meaning that a distance of < 30 cm was required from the device to the projection screen. These were driven by an Espion Diagnosys E3 system via an integrated graphics processor outputting a 60 Hz signal over a Video Graphics Array (VGA) connection. The DLP devices projected onto a white triple-ply fiberglass laminate projection screen with black backing (Sapphire AV Manufacturing Ltd., model SEWS240RWSF-ATR).
Section 1—Photometric measurements
Photometry measurements were made using an ILT1700 photometer and an SED033 barrel with a 9.27 mm aperture to block ambient light, providing a large candela measurement range, with a Y photopic correction filter. A checkerboard pattern was created to subtend 30 degrees of visual angle at a viewing distance of 125 cm. Measurements were made at 1 cm distance from the image projected onto a large white projection screen following a 10 min warm-up time. Additional measurements were made during the warm-up period, from turn-on and immediate display of a checkerboard pattern, to assess warm-up time requirements directly. A spot photometer (Konica Minolta, model LS-110) was used to measure the individual white check luminance distribution across the 30 degree field. This was performed by displaying check widths subtending 2.5° within the 30 degree field, measured sequentially when the checks were white. Mean luminance was calculated alongside contrast using the Michelson contrast formula, (Lmax − Lmin)/(Lmax + Lmin). The temporal characteristics of a reversing (2.3 rev/sec) and onset-offset checkerboard stimulus were measured using a photodiode (Hamamatsu electronics, model S1223). The waveforms were assessed for their profile and scrutinised for response time, rise time, fall time and any transient luminance change, which was additionally assessed using a blank white sheet of paper in front of the stimulus screen to visualise any diffused transient luminance changes . Spectral measurements of white checks on the projection screen were made using an ILT960 spectroradiometer. Measurements were taken in continuous and time-integrated mode (< 4 ms sample) to assess whether mean and transient luminance may alter, particularly due to the known 'colour wheel/rainbow effect' within DLP laser projection systems . Spatial properties were modified, and manual measurements of stimulus field size, element size and viewing distance were calculated and optimised to replicate the spatial characteristics of the PDP stimulator systems established within our unit (Pioneer Electronics Corp., model PDP422MXE).
Section 2—Physiological measurements
Seven healthy participants (5 female, age range 27–42 years) were recruited from the staff population at the authors' institution and tested using an existing laboratory PDP VDU and the ViewSonic DLP laser projector. No participants had any history of ophthalmic or neurological disease apart from refractive error, which was optically corrected during testing. Photometric properties of the ViewSonic DLP laser projector were matched to the existing PDP device using the device settings. All procedures performed were in accordance with institutional standards (approval ref. 3352) and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
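Before describing the recordings, the photometric and geometric quantities defined above can be illustrated with a short calculation. The Python sketch below computes Michelson contrast and the visual angle subtended by a stimulus element at a given viewing distance; the luminance readings and physical dimensions are hypothetical, chosen only so that the numbers come out near the values used in this study.

import math

def michelson_contrast(l_max, l_min):
    """Michelson contrast (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by an element of a given size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Hypothetical luminance readings (cd/m^2) for white and black checks
print(f"contrast    : {michelson_contrast(120.0, 2.5):.3f}")

# Field and check sizes at a 125 cm viewing distance (values illustrative)
print(f"field (deg) : {visual_angle_deg(67.0, 125.0):.1f}")
print(f"check (min) : {visual_angle_deg(1.8, 125.0) * 60:.1f}")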
PVEPs were recorded using a single-channel occipital electrode (Oz) referred to a mid-frontal reference (Fz), with the ground electrode placed centrally (Cz). Electrode impedances were maintained below 5 kΩ. PERGs were recorded with a corneal fibre electrode referred to an electrode placed laterally to the outer canthus. A range of high contrast black and white check widths (Michelson contrast ≈ 96%) was presented, ranging from 200' to 6', in a large field (30°) viewed binocularly, in random order. Luminance measurements were matched to existing PDP devices as per . PVEPs and PERGs were recorded simultaneously to each stimulus with a reversal rate of 3.15/sec. Recordings for both VDU devices were made for each participant within the same session with the same electrodes, to minimise variability. Resultant signals were amplified and sampled at ~ 4000 Hz, with a minimum of 100 sweeps obtained per average and a minimum of two averages taken per check width. Filter settings were 0.3–300 Hz. Resultant signals were measured in terms of N75-P100 trough-to-peak amplitude and P100 peak-time for PVEPs, and P50 amplitude and peak-time and N95 amplitude for PERGs, for both devices. The N95 peak-time was not used, as this is often broad and variable and is not often used clinically . Amplitudes and peak-times of the respective PERG and PVEP components recorded from each device were plotted on scatter plots, and two-tailed Pearson correlation coefficients were calculated to visualise and assess the relationship between devices. Amplitudes and peak-times for each check width were then stacked across devices, and Bland–Altman plots were produced for all components to assess limits of agreement. PERGs to 6' checks were not reliably evident in all participants, or were of low amplitude in others, so were not included in the analysis. These data were assessed for significant differences using a paired-sample t-test or Wilcoxon signed-rank test, depending on their normality.
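The statistical comparisons described above can be sketched in a few lines of Python. The amplitude values below are simulated, the limits of agreement follow the conventional 1.96 × SD Bland–Altman definition, and the normality check is illustrative; this is not the analysis code used in this study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical P100 amplitudes (uV) for the same seven subjects on two displays
pdp = rng.normal(12.0, 3.0, 7)
dlp = pdp + rng.normal(1.8, 1.0, 7)   # DLP assumed slightly larger here

# Pearson correlation between devices
r, p = stats.pearsonr(pdp, dlp)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Bland-Altman statistics: mean bias and 95% limits of agreement
diff = dlp - pdp
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} uV, LoA = {bias - loa:.2f} to {bias + loa:.2f} uV")

# Paired comparison: t-test if the differences look normal, otherwise Wilcoxon
if stats.shapiro(diff).pvalue > 0.05:
    stat, p = stats.ttest_rel(dlp, pdp)
    print(f"paired t-test: t = {stat:.2f}, p = {p:.3f}")
else:
    stat, p = stats.wilcoxon(dlp, pdp)
    print(f"Wilcoxon: W = {stat:.2f}, p = {p:.3f}")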
Section 1—Photometric measurements
Both devices were capable of displaying patterned stimuli across very large fields (up to 100 inches / 254 cm diagonally) from the Espion E3 system. It became evident in early testing using the photodiode that the Hisense DLP device was detrimentally affected by an input lag jitter (supplemental Fig. 1).
Despite modification of synchronisation speeds and input frame frequency (60–120 Hz), the resultant signals shifted unpredictably between 0 and 30 ms, and it was not possible to time-lock the electrographic signals accurately. Accordingly, no further analysis took place for the Hisense DLP laser projector. Fortunately, the Viewsonic DLP laser projector did not experience this issue and was used for all subsequent analysis and tests. Device settings were modified to alter luminance and contrast (Fig. A–C). The Viewsonic DLP laser projector was capable of very high luminance levels, with white checks measuring up to 594.9 cd/m² (Fig. A–B). It was found that fixing the luminance setting (device software setting of 30) and altering contrast maintained contrast above 90% for a range of mean luminance levels up to 302.2 cd/m². Whilst contrast was maintained above 93% for mean luminances up to 100 cd/m² (Fig. C), at high device luminance settings the luminance of black checks increased from 1.4 to 22.2 cd/m² (Fig. A). No specific reference is made to black check luminance in the ISCEV standard other than that the relative contrast must remain high, although one must consider that dark checks are ideally a minimally stimulated area, and increasing their luminance may affect reproducibility and theoretically affect clinical applications. The measured luminance of white checks across the visual field ranged from 114.8 to 125 cd/m², with the maximum deviation from maximal luminance being 8.2%. This tended to show a spatial distribution with maximal luminance in the inferior central portion of the field and minimal luminance in superior areas laterally (Fig. D).
Spectral measurements
Spectral measurements of the white checks demonstrated that the spectral profile of the DLP laser projection system has a large peak within the 'blue' range (457 nm), with subsequent broad spectra between 470 and 700 nm and subtle broader peaks at 542 nm and 595 nm (Fig. ). The CIE (1931) coordinates of the stimulus were x = 0.310 and y = 0.349, with a correlated colour temperature of 6495 K. One additional observation during these assessments was that with fast eye movements (i.e. rapid saccades), one could observe a stroboscopic 'rainbow effect' (supplementary Fig. 2).
Warm-up times
One consideration we took into account when assessing the luminance of VDUs was warm-up time, which reflects the time a device takes, once turned on, to reach a constant and stable luminance ready for testing. We assessed the warm-up time of the DLP laser projector and the PDP systems based at the unit, measuring how long each device took to reach its maximum checkerboard luminance (Fig. ). We found that the PDP devices required no warm-up time, whereas the DLP device took around 2 min to reach > 95% luminance, thereafter remaining stable. The response time of the checkerboard reversal was also recorded for 10 min from turn-on and was stable throughout this period of measurement (Fig. ).
Temporal profile
The temporal profile of the reversing checkerboard pattern signal was very fast with the DLP laser projector. The measured rise and fall times were equal and were 0.5–1 ms in duration (Fig. ). A constant luminance was evident for the duration of the reversal phase. The signal from the photodiode, in addition to the 2.3 rev/sec reversal frequency, also comprised three fundamental high frequency components at 60 Hz, 120 Hz and ~ 480 Hz.
These correspond to the output frame rate from the Espion system, the colour wheel frequency (2× the frame rate) and individual colour wheel segment changes, respectively (Fig. zoomed panel). No noise associated with the mains electrical frequency (50 Hz) was observed. The input lag (i.e. the time taken from signal output to onset of a stimulus change) was fixed at 50 ms with no jitter. Importantly, no transient changes in mean luminance were seen for reversal or onset-offset stimulation.
Spatial properties
The Viewsonic DLP laser projector is capable of very large field sizes, outputting to a 100 inch (254 cm) screen in 4K resolution, which would equate to a 90 degree field at our working distance of 125 cm. Whilst very large field stimulation may be advantageous in some circumstances, we replicated a field size subtending 30 degrees to match the existing PDP stimulators at our centre and to minimise phase cancellation from large paramacular PVEP components . The DLP device was capable of projecting a range of check widths and maintained high resolution for small check widths (12.5' and 6') without the blur or pixelation of check edges that is typically observed with CRT or PDP VDUs.
Section 2—Electrophysiological measurements
Simultaneous pattern VEPs and PERGs were recorded to a range of check widths presented to the participant viewing the DLP laser projector or an existing laboratory PDP VDU binocularly. Response peak-times and amplitudes were all significantly correlated (p < 0.05 for all results), showing strongly positive correlations for all components: P50 amplitude (r = 0.841), P50 peak-time (r = 0.905), N95 amplitude (r = 0.766), P100 amplitude (r = 0.829) and P100 peak-time (r = 0.805), as shown in Table and Fig. . Scatter plots of these data (Fig. ) illustrate the high correlation between the two devices. However, they also demonstrate that DLP response amplitudes were larger than PDP response amplitudes, represented as data points above the total agreement line (y = x, Fig. ). This difference appeared to increase linearly with check width, so that the increase in amplitude was predictable with increasing check width. This was not observed for PERG or PVEP peak-times. This observation was further evidenced by the significant group differences observed in P50 amplitude, N95 amplitude and P100 amplitude (Table ). PERG P50 and N95 measures were larger from the DLP device by a mean of 2.16 µV and 2.60 µV, respectively. A less clinically significant, but statistically significant, larger P100 component was observed from the DLP device, with the mean amplitude larger by 1.8 µV. P50 and P100 peak-times were not significantly different, with mean differences of only 0.39 ms and 0.38 ms, respectively, although the P100 peak-time appeared slightly earlier for smaller check widths from the DLP device than from the PDP device. These data are summarised in Table . Bland–Altman plots were produced for each respective major component, with all check widths combined, to assess the limits of agreement between the two devices (Fig. ). These showed high agreement for all PERG and PVEP components; however, for amplitude measurements a significant skew of the mean away from zero was observed, as expected from our scatter plots. These data suggest that there is high agreement between devices, but that there is a systematic bias whereby the mean amplitude is larger from the DLP device than from the PDP device, consistent with our other findings.
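As a side note on the timing measurements reported in the photometric results, the 10–90% rise time and the trigger-to-stimulus input lag can both be estimated from a photodiode trace. The Python sketch below uses a synthetic trace with an assumed sampling rate, lag and transition duration chosen to resemble the values reported above; it is an illustration of the calculation only, not the measurement procedure used in this study.

import numpy as np

fs = 100_000.0                     # assumed photodiode sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)      # 200 ms sweep, trigger at t = 0

# Synthetic trace: luminance ramps up ~50 ms after the trigger over ~1 ms
lag_true, ramp = 0.050, 0.001      # values illustrative only
trace = np.clip((t - lag_true) / ramp, 0.0, 1.0)

def rise_time_and_lag(t, y):
    """10-90% rise time and trigger-to-half-maximum input lag (both in ms)."""
    y = (y - y.min()) / (y.max() - y.min())       # normalise to 0..1
    t10 = t[np.argmax(y >= 0.1)]
    t50 = t[np.argmax(y >= 0.5)]
    t90 = t[np.argmax(y >= 0.9)]
    return (t90 - t10) * 1e3, t50 * 1e3

rise_ms, lag_ms = rise_time_and_lag(t, trace)
print(f"rise time ≈ {rise_ms:.2f} ms, input lag ≈ {lag_ms:.1f} ms")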
This study aimed to examine DLP laser projectors as potential VDUs for routine pattern use in visual electrophysiology tests. Our findings confirm that the Viewsonic DLP laser projector tested in this study is very suitable for these purposes, providing the high luminance, high contrast and fast temporal profiles required of visual stimulators. Importantly, the patterns are produced with temporally identical and balanced luminance on (rise) and off (fall) timings.
Furthermore, we demonstrate that the physiological responses recorded from the tested device are similar to those from existing, established VDUs at our centre. The tested DLP laser projector produced responses with peak-times comparable to existing validated systems, though response amplitudes were larger from the DLP device. The confirmation that some DLP laser projectors are suitable for electrophysiology testing is particularly important at the time of writing, given the increasing difficulty in sourcing suitable, reliable VDUs and the decreasing availability of the remaining obsolete devices. To our knowledge, the reported literature evaluating DLP technology in this setting is very limited, with only one study assessing its use for chromatic VEPs . Alternative solutions such as LCD VDUs have been well documented, but these reports highlight significant limitations in terms of transient luminance artefacts and input lag . Whilst CRT and PDP technology is robust and suitable for use, these devices are now obsolete and modern solutions are required. Although some OLED displays show useful properties as VDUs , it is the authors' experience that a proportion of these devices exhibit an input lag jitter, similar to that observed with the HiSense device in this study, and are therefore unsuitable for use. The main DLP device assessed in this study appears to be a robust, fast and capable stimulator for visual electrophysiology testing. Whilst we found that the assessed DLP laser projector is highly suitable for clinical testing, the individual model specification is evidently critical. We discovered a detrimental input lag and jitter in the second, Hisense, DLP laser projector, which prevented any further appraisal of this device. It is likely that this jitter was caused by resampling of the incoming signal within the projector system, which caused a frame shift or desynchronisation of the resultant signal output. All digital processing settings within the projector had been turned off for testing, but it is possible that some devices retain an inherent processing of incoming signals that makes them unsuitable for electrophysiology testing. The authors' personal observations are that some OLED devices suffer a similar input lag jitter, but this is similarly model dependent. Of note, the Hisense device was advertised as an 'entertainment'-based DLP projection system, whereas the Viewsonic device was advertised as an office/work-based projector. It is possible that entertainment-based DLP devices process video signals to enhance performance, which evidently may preclude their use for visual electrophysiology. Based on these observations, we strongly recommend that anyone considering the use of DLP projectors should assess individual model feasibility before clinical implementation. We found response times to be very fast for the DLP device, with rise and fall times of 0.5–1 ms. This is comparable to, or faster than, that observed for CRT and LCD stimulators, respectively . The manufacturer data for the DMD chips (Texas Instruments) suggest that response times may far exceed those recorded in this study (movement speeds up to 10,000 Hz), suggesting that the recorded response times may reflect a simplification or limitation of the graphical output from the Espion E3 system . During set-up, a 60 Hz calibration file was generated; this could theoretically be increased to 100 Hz with this system, but at the cost of resolution due to the pixel clock rate.
Overall, the DLP's luminance and contrast ratios were amply sufficient for clinical testing and far exceeded the minimum standards required for PERG and PVEP testing . Importantly, we found a significant input lag for the DLP device, taking 50 ms from trigger to stimulus change. Whilst significant, this was very stable, and adjustments can easily be made for this input lag by adjusting time zero to coincide with the onset or half-point of the reversal change, as indicated in clinical standards . We found that the warm-up time (i.e. the time from a 'cold start' turn-on to being fully operational) was effectively zero for the PDP device, but around two minutes for the Viewsonic DLP device. This is certainly acceptable for clinical circumstances, particularly since LCD and CRT VDUs can require a warm-up of 60 min or more , and even after this LCDs remain sensitive to changes in ambient temperature . Furthermore, we found no delay in response time over this period, suggesting the device is fully operational within a two-minute warm-up. This is in further contrast to LCD devices, which have slow response-time warm-up periods, with some devices taking up to one hour to reach their optimal response time . Certainly, given that stimulus presentation depends mechanically on the DMD chip speed, we would not have expected any delay in response time over this period. The luminance properties of the device were very advantageous, being capable of maintaining high contrast at a mean luminance of around 300 cd/m². We found mild luminance variance across the projection screen, but this was within 91.8% of maximum, so within acceptable recording standards . Nevertheless, the luminance distribution appeared to follow a pattern whereby regions closest to the DLP laser projector had higher luminance than those furthest away. This may be a feature of the ultra-short throw ratio used by this projector, creating highly oblique angles to the projection screen. It is suspected that DLP laser projectors with longer throw ratios may show less of this spatial variance in luminance. We found that the spectral properties of the stimulus showed a large peak at 'blue' wavelengths with broader energy at longer wavelengths (Fig. D). There is no specific reference to the spectral properties of stimuli for clinical PERGs or PVEPs . Existing visual stimulators have widely varying spectral properties, so the spectral profile observed for the DLP device is likely insignificant in this context, as the correlated colour temperature was very close to 6500 K (6495 K). Furthermore, the spectral properties of DLP laser projectors may vary per device depending on the composition of the colour wheel. Perhaps the most curious of our observations was the perceived 'rainbow effect', which occurred with fast eye movements (supplementary Fig. 2). This is a type of stroboscopic artefact giving a spectral inhomogeneity of the pattern stimulus due to the colour wheel used, and is particularly marked for white checks. It results from the colour being rendered sequentially through the colour wheel, causing temporal inhomogeneities, typically at 2–4 times the framerate. From our observations, this was only apparent with very rapid eye movements, and its perception varied according to observer. Nevertheless, the influence of this finding on patients with unstable fixation (i.e. children) or involuntary eye movements (i.e. nystagmus) is uncertain.
We suspect that the rainbow effect would have little significant influence on the PERG or PVEP, as the colour wheel frequency was measured here to be 120 Hz (twice the 60 Hz framerate), which is far faster than the time-locked visual stimuli presented, exceeds the temporal resolution of visual contrast systems and is around the temporal resolution limit of cone photocurrents . Furthermore, these artefacts are not constant inhomogeneities; instead, they change rapidly over time, so any resultant physiological differences would likely average down to noise levels. Nevertheless, this may depend on the device used, as some newer or more expensive DLP devices may use colour wheel frequencies of 4× the framerate or higher, which would minimise this effect. Early DLP projection devices used a 1× colour wheel, making the rainbow effect markedly evident for most observers, but these are now rarely used. Furthermore, technical developments in this area are continuing and are likely to further minimise this effect, such as the development of the 3-chip DLP (comprising three DMD chips for red, green and blue lasers) or three-laser DLPs (removing the need for the colour wheel). Considering these points, this effect is considered to have a negligible influence on the PERG or PVEP. In the physiological experiments, we did not find any significant differences in P50 or P100 peak-times between devices, suggesting the tested DLP device performed similarly to our existing systems in this respect. The largest discrepancy in our results was the larger PERG and PVEP responses, and the earlier peak-times for small check width PVEPs, from the DLP laser projector compared with the existing PDP VDU, despite spatial and photometric matching. This is an interesting finding, which suggests that the DLP device may perform better than existing PDP devices. We suspect that the explanation for this difference, in view of the photometric and spatial matching, may originate from two possible mechanisms: firstly, that response times from the DLP device are faster than those of PDP VDUs, or secondly, that the improved resolution of DLP laser projection systems engages different or enhanced physiological properties. Response times observed from the DLP stimulator assessed in this study were fast, in the order of 0.5–1 ms. It is possible that this faster response time of the DLP relative to existing PDP VDUs may allow better temporal synchronisation of the physiological substrate of interest. An abrupt response change may theoretically cause more simultaneous activation of retinal and neural cells, which may therefore improve response amplitude, as observed in this study. It has been demonstrated that response times below 10 ms do not significantly alter the PVEP, although differences between 8 and 16 ms likely do affect the PVEP, which may explain our findings . This is supported by the upper limit of the frequency–response curve of the pattern VEP being 15–20 Hz . Therefore, faster rise times are a likely cause of the larger PERG and PVEP amplitudes observed in our study, which is likely advantageous for clinical testing but highlights the need for locally derived reference data when implementing these new devices. It is fairly well known that whilst CRT and PDP systems are suitable VDUs, they have relatively poor spatial resolution due to pixel size, and therefore edge contrast can be low.
Reduced edge contrast can have a direct effect on PVEP amplitude, as the pattern stimulus waveform becomes more sinusoidal, similar to a change in modulation transfer function. Therefore, the relatively higher resolution and sharpness of a DLP stimulus may improve the effective retinal contrast, which would be particularly evident for small check widths, as observed in the PERG and PVEP data of our study. We suspect these changes may, at least in part, be responsible for the amplitude differences observed between devices. A very beneficial feature of DLP laser projectors is that they are capable of extra-large field stimulation with ultra-short throw ratios, meaning very large field sizes can be achieved without the need for a large laboratory space. We calculate that the Viewsonic projector used in this study, at a working distance of 125 cm, could present stimuli subtending visual fields of up to 90 degrees. Whilst large field sizes are particularly useful for paediatric practice, there comes a point at which a larger field size becomes detrimental to the PVEP P100 component. With increasingly large field sizes, the paramacular PVEP components become more pronounced, and if large enough they can degrade the macular-driven P100 component of interest. There may be some applications where large field sizes are beneficial for avoiding short viewing distances, such as the mfERG or mfVEP, but for routine clinical PERG and PVEP testing it seems that exceeding a 30 degree field may not yield any significant benefit, hence our aim to spatially match the existing PDP system dimensions. Lastly, whilst our study assessed the DLP device in front-projection mode, it is likely that in clinical circumstances a back-projection arrangement would be far more practical, avoiding any potential obstruction of the projection beam by patients, staff or equipment. In conclusion, we demonstrate that the DLP laser projector assessed in this study is a suitable VDU for use in visual electrophysiology testing. This DLP laser projector was easily used with commercially available visual electrophysiology systems and provided stimuli compatible with ISCEV standards for the PERG and PVEP. We observed similar PVEP and PERG values compared to an already established VDU, with some amplitude values better than those of existing systems. For other centres considering DLP laser projection systems, it is important to carefully appraise the manufacturer specifications and model of each DLP device to avoid one which suffers from the detrimental jitter observed in one of the devices tested in this study. Future research is needed to assess the test–retest repeatability of PERGs and PVEPs recorded to a DLP stimulus, alongside photometric measurements over long time periods to assess for any age-related changes in stimulus parameters.
Topographical mapping of catecholaminergic axon innervation in the flat-mounts of the mouse atria: a quantitative analysis
973b59ef-9b1f-45fe-b67a-d19a61713cea
10082215
Anatomy[mh]
The sympathetic nervous system (SNS) plays a pivotal role in regulating cardiac functions including heart rate, contractility, and conduction velocity, which are essential for survival. Contrary to conventional belief, not only does the SNS play a role in the "fight or flight" integrated response, but it also regulates heart rate and contractility in both resting and non-resting conditions. In fact, newly emerging roles of cardiac sympathetic innervation have been revealed, including the regulation of cardiomyocyte size and the provision of neurotrophic signals to the heart. Furthermore, any disturbance of SNS function, including structural remodeling and overactivity, may promote the progression of various cardiovascular diseases. Although the functional roles of the SNS have been well established, a comprehensive organization map of the sympathetic postganglionic innervation of the atria remains insufficiently delineated. In addition, the regional density of the sympathetic innervation of the heart has yet to be quantified. There are numerous unanswered questions related to the detailed anatomy of the heart's sympathetic nervous system and how it is modified by disease states, such as atrial fibrillation, arrhythmia, and heart failure. For example, the morphology and morphometry needed to explain the complexity of sympatho-cardiac communication and the differential regional distribution of the atrial nerve plexus remain to be elucidated. Previous studies investigated the structure and function of sympathetic neurons and axons in different species using sectioned heart preparations or focused only on specific regions of the atria, which disrupted the continuity of axons and terminals and prevented large-scale morphological characterization of these structures. Great effort has been made to better characterize the intrinsic cardiac plexus in the whole-mount mouse heart, which has increased our knowledge of the distribution of noradrenergic innervation of the mouse heart. Nevertheless, the complete fine details of TH-IR axon terminals and varicosities were not fully visualized in the whole-mount. Additionally, thick regions of the auricle and other structures were partially or completely removed. These structures include the right cranial vein (RCV), left cranial vein (LCV), and caudal vein (CV), which we refer to in this study and our previous work as the superior vena cava (SVC), left precaval vein (LPCV), and inferior vena cava (IVC), respectively. Moreover, the topology of sympathetic neurons and their local communication with the heart, which influence cardiac functions, has been characterized. In those studies, it was shown that sympathetic neurons directly communicate with cardiomyocytes in the ventricles and that the density of innervation correlates with the size of cardiomyocytes, all of which emphasizes the need to determine the differential regional innervation of the heart. Recently, researchers have been able to generate two- and three-dimensional reconstructions of the sympathetic innervation of the myocardium. However, these studies provided imaging from only a few myocardial sections and a small segment of the heart, or revealed the large bundles without a clear visualization of the fine axons, terminals, or cardiac targets.
Both studies used tyrosine hydroxylase (TH) as a sympathetic marker and showed that sympathetic nerves and intrinsic cardiac ganglia were distributed in both atria of the heart, predominantly near the SAN and AVN and around the junction of the left and right atria. Despite substantial advances in knowledge of the anatomy and physiology of cardiac nerves that contribute to therapeutic responses, there are still many gaps that need to be filled as neuromodulation treatments move away from pharmaceuticals and non-specific treatments towards more guided and specific therapeutic targets for cardiovascular diseases. To facilitate these transitions, the architecture of cardiac sympathetic nerves needs to be carefully and precisely determined. More studies of whole-mount preparations of the heart (atria and ventricles) are needed to determine the structural organization of the sympathetic postganglionic innervation and to improve understanding of sympathetic control of the heart. Previously, we determined the distribution and morphology of parasympathetic afferent and efferent axons in the atria of wild-type rat and mouse preparations as well as in disease models (e.g., aging, sleep apnea, and diabetes). Collectively, the present work provides a comprehensive topographical map of the catecholaminergic efferent axon distribution, density, and morphology of the atria at single cell/axon/varicosity resolution. This anatomical map will provide a foundation for future functional studies of sympathetic control of the heart and its remodeling in pathological conditions.
Animals and ethical statement
All procedures were approved by the University of Central Florida Animal Care and Use Committee (HURON PROTO202000150) and strictly followed the guidelines established by the National Institutes of Health (NIH) and the ARRIVE 2.0 guidelines. This study was performed on healthy male C57BL/6J mice (RRID: IMSR_JAX000664, The Jackson Laboratory, Bar Harbor, ME) (n = 20, age 2–3 months, weighing 20–30 g). Mice were housed in plastic cages (n = 5/cage) with sawdust bedding (changed three times a week) in a room with controlled humidity and temperature and a 12/12 h light/dark cycle (6:00 AM to 6:00 PM light), with food and water provided ad libitum. Mice were divided into three groups. The connected-atria TH-IR axon innervation mapping group (n = 5) was used to show the topographical innervation and reconstruction of nerves. The separate right and left atria quantification group (n = 6) was used to perform the regional density analysis. The control group (n = 4) was used to ensure that there was no nonspecific labeling and that labelled structures represented neuronal and axonal structures; this was done by omitting the primary antibody (n = 1), omitting the secondary antibody (n = 1), or labelling with PGP9.5 (3). Additional animals were used to counter-stain neurons with Fluoro-Gold (n = 4). All efforts were made to minimize the number of mice and their suffering.
Tissue preparation
Mice were deeply anesthetized with isoflurane (4%) induction in an anesthetic chamber. Absence of the hind paw pinch withdrawal reflex was used as an indicator of sufficient depth of anesthesia. Mice were injected with 0.2 mL of heparin into the left ventricle, followed by a cut to the inferior vena cava to drain the blood.
After 2 min, a needle was inserted into the left ventricle and the mice were perfused with 0.9% saline at 38–40 °C for 5 min, followed by fixation with 4% paraformaldehyde. Hearts along with the lungs and trachea were removed from the chest and postfixed overnight in 4% paraformaldehyde at 4 °C. The heart was placed and pinned into a dissecting dish lined with Sylgard and containing PBS (0.1 M, pH = 7.4), and the specimen was further dissected using a Leica stereo microscope as described previously. To reveal the intact network of sympathetic postganglionic atrial innervation, we removed the heart from the surrounding tissues (lungs, aortic arch and trachea). Then, the atria (both right and left atrium connected at the interatrial septum on the ventral side) were separated from the ventricles (n = 5). The whole atria were processed as a montage of several hundred (~ 260) maximal projections of image stacks. To gain more insight into TH-IR axon innervation and regional density, the right and left atria (RA and LA) were separated. The auricles were cut along the boundary into two halves. The part of the auricle facing more exteriorly and connected to the big vessels is referred to in this study as the outer auricle, and the other half is referred to as the inner auricle. Then, flat-mounts were scanned using the confocal microscope at higher magnification (40X oil lens). The separation of the atria was necessary to avoid areas of overlap between the RA and LA. Montages of the maximal projections of the right and left atria were prepared (n = 6/group). A detailed experimental protocol is available through Protocol.io: 10.17504/protocols.io.n92ldzbmxv5b/v2.
Immunohistochemistry (IHC)
Tissue processing and immunolabeling were performed as described previously. Following dissection, the tissues were washed 6 × 5 min in 0.1 M PBS (pH = 7.4), then immersed for 48 h in a blocking reagent (2% bovine serum albumin, 10% normal donkey serum, 2% Triton X-100, 0.08% NaN3 in 0.1 M PBS, pH = 7.4) to reduce nonspecific binding of the primary antibody and to promote increased antibody penetration. Primary antibodies (1:100) were added to the primary solution (2% bovine serum albumin, 4% normal donkey serum, 0.5% Triton X-100, 0.08% NaN3 in 0.1 M PBS, pH = 7.4) and incubated for 48 h. Unbound primary antibodies were removed by 6 × 5 min tissue washes in PBST (0.5% Triton X-100 in 0.1 M PBS, pH = 7.4). Secondary antibodies (1:50 in PBST) were then applied for 24 h. Unbound secondary antibodies were removed by 6 × 5 min tissue washes in PBS. Negative control tests (in which primary antibodies were omitted) were also performed, and these preparations presented no labeling, confirming that nonspecific binding of secondary antibodies did not occur. Lastly, we verified the accuracy of our TH labeling by using PGP 9.5 (ubiquitin carboxyl-terminal hydrolase-1), a general neuronal marker that visualizes different populations and subtypes of nerves. A list of the antibodies used in this study is summarized in Table . Flat-mounts were placed on a microscope slide with their dorsal side against the glass, coverslipped, crushed for 2 days with lead weights, and air-dried under a fume hood for 1 day. Slides were dehydrated by immersion for 2 min in each of 4 ascending concentrations of ethanol (75, 95, 100 and 100%), followed by 2 × 10 min washes in xylene. Slides were then covered with coverslips and DEPEX mounting medium (Electron Microscopy Sciences #13514) and allowed to dry overnight.
Fluoro-Gold (FG) counterstaining
To evaluate the location of immunolabeled structures relative to cardiac ganglia, FG was used to counterstain neurons in four additional animals. Fluoro-Gold (0.3 mL of 3 mg/mL per mouse; Fluorochrome, LLC, FG 50 mg) was injected (i.p.) to counterstain neurons in the peripheral ganglia. Mice were perfused 3–5 days after FG injection, and the hearts were removed and dual labeled with TH.
Image acquisition
The Nikon 80i fluorescence microscope (Lens: 20X and 40X) was first used to survey the TH labeling in the whole flat-mounts of the atria. Then, a Leica TCS SP5 laser scanning confocal microscope (Lens: 20X and 40X oil) was used to acquire images and assemble image montages of the whole connected atria, including left atrium and right atrium flat-mounts. An argon-krypton laser (excitation 488 nm) was used to image TH-IR axons, a helium-neon (HeNe) laser (excitation 543 nm) was used to image PGP9.5-IR axons, and a UV laser was used to detect FG or background autofluorescence of the tissues. The connected atria were scanned using a 20X oil immersion objective lens (Z-step 1.5 μm) to produce approximately 400 confocal image stacks per montage. The confocal projection images of these stacks were used to assemble montages of whole atria flat-mounts using either MosaicJ or Photoshop. To better visualize the topographical distribution and morphology of TH-IR innervation in the atria, the separate whole left atrium and right atrium and regions of interest were scanned at high magnification (40X oil immersion objective lens, zoom 1X or 1.5X, Z-step 1.5 μm). The higher magnification resulted in approximately 800 frames for each atrium. We were able to overcome the thickness of the flat-mount whole atria with our optimized tissue processing techniques and flattening of the tissue, which allowed us to visualize fine details of TH-IR axon innervation. We also used a Zeiss M2 Imager microscope with an autostage (20X NA 0.8) to scan the samples, which produced high-quality images comparable to those obtained with confocal microscopy (20X objective lens). This approach will make future methodology less laborious and more efficient. Tracing of TH-IR axons was performed using Neurolucida 360 (MBF Bioscience). Additionally, Neurolucida Explorer (MBF Bioscience), an analytical software package built into Neurolucida 360, was used to perform morphometric analysis on traced axon reconstructions. Branched structure analysis was performed, and parameters (number of trees, nodes, terminals, total length and surface area) were selected for all connected atria tracings (n = 6).
Density and size quantification
To quantitate the regional density of TH-IR fibers in the atria, we segregated images into specific regions of interest (ROIs): SAN, AVN, SVC, IVC, right outer and inner auricle, LA-PV junction, left PV, middle PV, right PV, and left outer and inner auricle, using Fiji. The steps of density quantification were as follows (Fig. ): (1) subtracted the background with a radius of 80 pixels to reduce noise and enhance contrast; (2) applied particle removal to remove small debris; (3) applied a binary threshold (Otsu method) to isolate immunoreactive structures; (4) quantified the signal above the threshold; (5) averaged the signal of different ROI windows using six counting frames; and (6) ran the Shapiro–Wilk normality test. The AxonTracer algorithm was used to trace and confirm axon quantification. Axon density was represented as total axon length per ROI.
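To make steps (1)–(4) of this workflow concrete, the following is a minimal sketch using open-source Python analogues of the Fiji operations described above. The file name, pixel calibration, minimum particle size, and the skeleton-based length estimate (standing in for the AxonTracer measurement) are illustrative assumptions, not the exact implementation used in this study.

```python
# Sketch only: approximates the Fiji density workflow described above.
# Assumptions (not from the study): file name, pixel size, min_size,
# and the use of a skeleton to approximate total axon length.
import numpy as np
from skimage import io, restoration, filters, morphology

PIXEL_SIZE_UM = 0.38                           # hypothetical calibration (um/pixel)
roi = io.imread("roi_SAN.tif").astype(float)   # one counting frame for one ROI

# (1) Background subtraction with a radius of 80 pixels (rolling-ball analogue)
background = restoration.rolling_ball(roi, radius=80)
roi_bs = roi - background

# (2)-(3) Remove small debris and apply a binary Otsu threshold
binary = roi_bs > filters.threshold_otsu(roi_bs)
binary = morphology.remove_small_objects(binary, min_size=20)

# (4) Quantify the thresholded signal: approximate total axon length from the
# skeleton (1 skeleton pixel ~ 1 pixel-length of axon), expressed per mm^2
skeleton = morphology.skeletonize(binary)
axon_length_um = skeleton.sum() * PIXEL_SIZE_UM
roi_area_mm2 = roi.shape[0] * roi.shape[1] * (PIXEL_SIZE_UM / 1000.0) ** 2
print(f"TH-IR axon density: {axon_length_um / roi_area_mm2:.1f} um/mm^2")
```

In practice this would be repeated for each of the six counting frames per ROI and the results averaged, per step (5), before the normality check in step (6).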
Total axon length in pixels was converted to μm using appropriate conversion factors. Statistical significance of the differences between means was assessed using one-way ANOVA and Tukey’s HSD (Honestly Significant Difference) test. Data are expressed as means ± SEM. Significance was accepted at P < 0.05. Heatmaps were created after applying a modified version of the freely available open-source automated software algorithm that traces and quantifies axons (AxonTracer plugin, ImageJ). The percentage of TH-IR neurons was counted using all single optical sections of different ICG image stacks.
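As a concrete illustration of the comparison just described (Shapiro–Wilk check, one-way ANOVA, then Tukey’s HSD across regional densities), here is a minimal sketch; the numerical values are hypothetical placeholders, not data from this study.

```python
# Sketch only: regional density comparison as described above.
# The densities below are hypothetical placeholders (one value per animal).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

densities = {                      # um/mm^2, n = 6 animals per region (placeholders)
    "SAN": [690, 650, 710, 700, 640, 734],
    "AVN": [420, 350, 460, 390, 380, 410],
    "SVC": [250, 210, 270, 220, 260, 226],
}

# Shapiro-Wilk normality check per region, as in step (6) of the pipeline
for region, values in densities.items():
    w, p = stats.shapiro(values)
    print(f"{region}: Shapiro-Wilk p = {p:.3f}")

# One-way ANOVA across regions
f_stat, p_anova = stats.f_oneway(*densities.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey HSD post-hoc test for pairwise regional differences (alpha = 0.05)
values = np.concatenate(list(densities.values()))
groups = np.repeat(list(densities.keys()), [len(v) for v in densities.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```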
Topographical projections of TH-IR axons in the flat-mount of the whole left and right atria (connected): Neurolucida tracing and digitization
Four major extrinsic TH-IR axon bundles entered the atria (short yellow arrows in Fig. ), branched into smaller bundles, and finally ramified into individual axons which covered the entire atria (Fig. ). Across animals, the number of large TH-IR bundles and their entry locations and innervation fields of the atria were quite consistent.
In all atrial tissue preparations, most TH-IR bundles were identified consistently at the medial side of the superior vena cava (SVC), the entrance of the pulmonary veins (PVs) to the left atrium, and the left precaval vein (LPCV) (Fig. ). The tracing of TH-IR axons using the Neurolucida system effectively highlighted the trajectories of the major bundles. These bundles innervated different regions with a certain degree of overlap (Fig. a). TH-IR bundles projected their axons towards the atria via four main topographical pathways:
● Bundle 1 entered the atria at the medial side of the SVC and branched into smaller bundles that proceeded towards the SAN, conductive fibers, AVN region, right PV and the lower part of the right auricle (Fig. b).
● Bundle 2 formed a loop around the origin of the SVC (probably folded during dissection) and sent projections mainly to the upper part of the right auricle and the junction of LA and RA (Fig. c).
● Bundle 3 entered the atria at the LPCV and ramified into individual axons that projected towards the entire left auricle (Fig. d).
● Bundle 4 entered the atria at the lower edge of the LPCV and projected towards the LA-PV junction, the left and middle PVs and the junction of LA and RA (Fig. e).
Most animals showed a similar trend of TH-IR axon distribution. Some of the variations observed could be due to unintentional folding of bundles during dissection and interindividual variation. To confirm that TH-IR axons and neurons accurately represented neural processes, the pan-neuronal marker PGP 9.5 was used. All TH-IR axons and neurons were also PGP 9.5-IR (Fig. ), indicating that TH-IR fibers (Fig. a–c) and neurons (Fig. d–f) were indeed neural processes. Additionally, negative controls further confirmed the labeling specificity.
TH-IR axon innervation of the right and left atrium: density, distribution and morphology
The distribution of TH-IR axons in the whole right atrium was consistent in all animals. A couple of large TH-IR bundles entered the right atrium through the SVC and LPCV (Fig. ). These large bundles branched into smaller bundles that either passed through the intrinsic cardiac ganglia (ICG) or extended directly to other cardiac targets and ramified into individual axons. The overall density heatmap (Fig. a) revealed that TH-IR axon innervation was significantly higher within the region of the SAN compared to other areas (P < 0.05, n = 6). The steps for the quantification of TH-IR axon density were delineated in Fig. . TH-IR axon density at several regions of interest in the RA is shown in Fig. b–g. The inner and outer walls of the auricles were separated due to their thickness. The density of TH-IR axon innervation in these regions was, in order from high to low: SAN (687.3 ± 21.63 μm/mm²), AVN region (401.7 ± 51.03 μm/mm²), inner auricle (303.1 ± 36.78 μm/mm²), outer auricle (243.4 ± 27.22 μm/mm²), SVC (239.5 ± 33.09 μm/mm²), and IVC (113.6 ± 14.19 μm/mm²) (Fig. h). The distribution of TH-IR bundles and axons in the flat-mount of the whole left atrium was determined (Fig. ). A couple of TH-IR bundles entered the left atrium through the LA-PV junction and then bifurcated into smaller bundles. These bundles either extended towards the ICG or directly to other cardiac targets and eventually ramified into numerous axon terminals covering the entire left atrium. This montage clearly showed a holistic view of the sympathetic innervation of the left atrium at single axon/cell/varicosity scale. The overall heatmap of a representative mouse (Fig.
a) showed the highest density of TH-IR immunoreactivity in the regions of the left atrium within the LA-PV junctions and the roots of the pulmonary veins. Regional density analysis of ROIs in the LA (Fig. b–g) showed the density of TH-IR axon innervation, from high to low, as: LA-PV junction (348.2 ± 26 μm/mm²), inner auricle (217 ± 19.17 μm/mm²), outer auricle (197 ± 17.42 μm/mm²), and pulmonary veins (left PV 179 ± 5.25 μm/mm², middle PV 165 ± 28.44 μm/mm², right PV 144.8 ± 11.85 μm/mm²) (Fig. h). There was a significantly higher density of TH-IR axons in the middle area of the left atrium, represented by the LA-PV junction, than in the auricles or pulmonary veins (P < 0.05, n = 6). A comparison of the TH-IR axon density in the RA and LA showed that the highest density of innervation was at the SAN. Of note, TH-IR bundles and ICG were excluded from the density calculations, and the selected ROIs contained only TH-IR axons, to avoid any bias in the quantitative analysis. In the LA, the junction of the LA-PVs showed very dense innervation of TH-IR axons in most samples (Fig. h). Interestingly, even though the density of TH-IR axons in the PVs was less than that at the LA-PV junction, the axons in the PVs were more continuous and had a more defined pattern. The bundles seen on the LA are most likely branches of the large TH-IR bundles on the RA that were dislocated during the separation of the RA and LA.
TH-IR neurons and SIF cells and TH-IR axons in ICG
In the whole atrial flat-mounts, several intrinsic cardiac ganglia were distributed in the epicardium. The majority of these ganglia were identified near the SAN region, AVN region, and interatrial groove in the connected atria (Fig. ). When the atria were separated, the majority of intrinsic cardiac ganglia were found in the middle area of the left atrium, at the attachment points with the right atrium in the SAN and AVN regions and at the entrance of the pulmonary veins (Fig. ). Some ganglia were also located in the right atrium around the SA region and the epicardial bundles on the LPCV (Fig. ). ICG were mostly located on the dorsal surface of the mouse LA, and TH-IR neurons comprised 18–30% of total ICG neurons in maximal intensity projections (Fig. a–c) and optical sections (Fig. a’–c’). TH-IR fibers were mostly observed passing through the individual ICG (Fig. ). Even though maximal projection images showed TH-IR axons near the ICG (Fig. a), a more detailed evaluation of single optical sections (Fig. a’) or partial projections of different ICG (Fig. b–e) showed that no TH-IR axon terminals wrapped tightly around individual ICG neurons. Additionally, small intensely fluorescent (SIF) cells were strongly TH-IR (Fig. ) and were observed in clusters of 3–8 cells, usually dispersed within ICG or near big TH-IR bundles. Optical sections of SIF cells in selected clusters (Fig. a’,a”) showed that they had a smaller diameter (< 10 μm) compared to TH-IR neurons in the ICG (~ 20 μm).
TH-IR axon innervation of vasculature and fat cells
In addition to the major veins (SVC, IVC, PVs and LPCV), we identified clearly contoured blood vessels (arterioles) in the left and right atria with TH-IR axons running parallel to the blood vessel walls or across them (Fig. ). In the montages, the blood vessels were much less apparent because the overlay of multiple layers in the maximal projection masked the detailed vascular structures. TH-IR fibers also densely innervated the fat tissues at different layers of the atrial wall.
White adipose tissue (WAT) and brown adipose tissue (BAT) were identified by their morphological characteristics using brightfield (Fig. a,b) or autofluorescence (Fig. d,e). Figure c showed TH-IR axons innervating the fat cells in a cluster with numerous varicose terminals. Additionally, the optical sections of the same region showed that TH-IR axons specifically targeted individual adipocytes (Fig. c’). TH-IR axon terminals were observed around the boundaries of and in between WAT cells, recognized as spherical cells with most of their volume occupied by cytoplasmic lipid droplets and a peripherally located nucleus (Fig. a’,d). On the other hand, BAT was recognized by its multiple vacuoles and darker shade and showed higher innervation by TH-IR axon terminals compared to WAT (Fig. b’,e).
Here, we show that several TH-IR axon bundles (presumably sympathetic postganglionic efferent projections) entered the atria from the right and left sides, branched out into individual axons and projected to different fields of the atria with a certain degree of overlap. There was clear lateralization, with the right bundles projecting mainly to the right atrium, whereas the left bundles preferentially projected to the left atrium. Asymmetry and regional differences in the cardiac sympathetic distribution have been observed in many physiological studies in mice, pigs, and humans. Our study provides anatomical evidence for this differential regional distribution in the mouse atria. TH-IR axon bundles were distributed in the epicardium, then bifurcated and formed a terminal network in the myocardium. Moreover, TH-IR axons were observed along/encircling small blood vessels and around WAT and BAT. Regional density analysis showed that the SAN had the highest TH-IR axon innervation. To our knowledge, this work, for the first time, provides a topographical map with quantitative assessment of the TH-IR axon innervation of the whole mouse atria at single cell/axon/varicosity scale.
Topographical distribution of TH-IR axon innervation in the flat-mount of the whole atria at single cell/axon/varicosity scale
Innervation field of TH-IR axons
Several studies have reported the distribution of catecholaminergic nerve fibers utilizing sectioned heart preparations or whole mounts of partial atrial preparations. The main limitation of such approaches is that they damaged the intricate three-dimensional structures of axons and terminals in these tissues.
Additionally, sections or partial flat-mounts did not provide a comprehensive topographical map with which to assess the distribution and morphology of sympathetic postganglionic efferent axons and terminals across the entire atria. Recently, tissue clearing procedures have permitted an enhanced 3D view of whole-heart innervation. However, the visibility of fine axons and terminals in the whole heart remained limited with tissue clearing procedures. In addition, tissue clearance diminished the visibility of other cardiac targets such as ganglion cells, muscles, blood vessels, and adipocytes. In order to highlight the complex patterns of TH-IR axons and their terminal networks in the atria and their targets, higher-resolution imaging is required. Our study has addressed these limitations by providing a comprehensive topographical map of the distribution and morphology of TH-IR axons and terminals in the atria of mice using flat-mounts of the whole atria. Consistent with previous studies in mice and other species, we found very dense TH-IR axon innervation in the atria. Additionally, the entrance points of the major TH-IR bundles to the atria, which were determined in our study, are similar to those ascertained previously. In contrast to prior reports, our study provided a complete, comprehensive map of TH-IR axons in the atria at single cell/axon/varicosity scale. In the connected atria, we observed that several TH-IR axon bundles (4–5) entered the atria through the SVC and LPCV and bifurcated into smaller bundles that eventually ramified into individual axons forming different projection fields with a certain degree of overlap. Presumably, these bundles were mostly from the left and right sympathetic stellate ganglia. Previous studies using retrograde tracers and stellate ganglionectomy showed that the majority of sympathetic postganglionic innervation originates from the stellate ganglia. Our tracing of TH-IR axons showed clear lateralization, as bundles from the right side mainly projected towards the right atrium and SAN, while bundles from the left side showed preferential innervation of the left atrium. Our findings reveal detailed regional differences of TH-IR innervation in the entire atria, which enriches our knowledge regarding the differential sympathetic control over distinct regions.
Quantitative analysis of TH-IR regional density
Catecholaminergic axon innervation of the atria displays significant anatomical heterogeneity, and several studies have attempted to assess the density of cardiac sympathetic nerves at different sites of the heart. Although previous studies quantified the density of TH-IR axons at specific sites, they utilized only sections or partial atrial preparations. Thus, a more complete quantitative analysis of TH-IR axon density in the whole heart has been lacking.
The auricles, among the most prominent structural features of the right and left atrium, play an important role in pumping blood within the heart through their capacity to expand during each heartbeat. The differential regional distribution of TH-IR axon innervation indicated by our density assessment gives insight into the localized effects of catecholaminergic innervation of the atria. Our results could set the foundation for future physiological studies of anatomical remodeling in pathological conditions.
TH-IR ICG neurons and TH-IR axons
Traditionally, it was thought that all ICG neurons in guinea pigs and rats were exclusively cholinergic. However, recent studies demonstrated that ICG neurons exhibit diverse neurochemical phenotypes (including TH, ChAT, nNOS, VIP, and NPY) that extend beyond the traditional concept of cholinergic neurons. A subpopulation of ICG neurons was also found to be TH-IR in mice, which aligns with our findings. Similar to previous studies, we observed that the ICG were located primarily on the outer surface of the atria, near the entrance of the pulmonary veins to the LA and near the SAN and AVN. Our work in mice showed TH-IR neurons in the ICG, with TH-IR axons passing through the ganglia without apparent innervation. This differs from what was found in guinea pigs and rats, where some TH-IR varicosities were seen around ICG neurons. These previous findings may have been somewhat overestimated because of the use of partial preparations, which cannot be extrapolated to all ICG neurons. In this study, we aimed to assess TH-IR axons that cross through all ICG located on the RA and LA. We found that only a few TH-IR axons (if any) were in close contact with ICG neurons. Higher magnification should be used in the future to ensure that there is no underestimation of TH-IR axon presence around ICG neurons. In support of this finding in mice, our recent study in pigs showed that TH-IR axons traveled through the ICG without forming varicosities surrounding the principal neurons (PNs). The lack of TH-IR varicosities wrapping tightly around TH-IR neurons in the ICG contrasts with what was observed in the gastrointestinal tract, where TH-IR varicosities tightly surround the PNs in the myenteric ganglia. Prior research indicated that mouse ICG are immunoreactive for dopamine-beta-hydroxylase (DBH) and norepinephrine transporter (NET), but lack vesicular monoamine transporter 2 (VMAT2). This is in contrast to the nerve fibers and stellate neurons, which are positive for DBH, NET, and VMAT2. The lack of VMAT2 renders the neurons in the mouse ICG functionally non-noradrenergic due to their inability to transport dopamine and norepinephrine into synaptic vesicles. However, there have been limited studies on the function of TH-IR neurons in the ICG, and further studies are needed to explore their functions in the ICG of different species.
TH-IR innervation of fat cells and vasculature
The sympathetic nervous system plays a crucial role in BAT thermogenesis and WAT lipolysis through its direct innervation of peripheral fat depots. Epicardial adipose tissue is an unusual visceral fat depot and has been shown to express its own specific transcriptomic signature. Epicardial fat has been described as white adipose tissue with brown-fat-like features. We noticed the presence of both types of adipose tissue at multiple locations, with a predominance of WAT on the atrial epicardium.
Similar to our study, recent work that utilized iDISCO tissue clearance, confocal and light-sheet microscopy showed a differential density of TH-IR axonal varicosities in BAT and WAT. Further functional studies to investigate the physiological effects of sympathetic innervation of both BAT and WAT in the atria would be highly valuable. As expected, TH-IR axons were observed in close proximity to the vasculature, running parallel to or wrapping around the vessels. Electron microscopy or physiological studies will be needed to confirm that TH-IR axons form contacts with the blood vessels. It has been demonstrated that the sympathetic nerves have a major influence on the control of blood flow, blood pressure, and total vascular resistance via their innervation of small arteries. In particular, the sympathetic nervous system has an essential role in maintaining cardiovascular homeostasis and normal physiological activities, including vascular tone and blood pressure.
Functional implications
Although several studies have described the atrial sympathetic innervation, comprehensive studies that delineate the topographical TH-IR axon innervation of the whole atria and its regional differences are currently lacking. Our tracing of the TH-IR axon innervation of the whole atria unraveled the complex axonal network and the preferential innervation of distinct regions. The mapping data could be utilized to understand the sympathetic-specific control of different regions of the atria and their autonomic responses. In our map, the bundles entering the right side of the atria provided the majority of the sympathetic innervation to the right auricle, right PV, SAN and conductive fibers, while the left bundles provided the majority of the sympathetic innervation to the left auricle, interatrial groove (junction of LA and RA) and PVs. Regional and lateral differences in the function of the heart have been indicated previously via functional studies (mainly in humans) of cardiac sympathetic innervation by the right and left stellate ganglia (SG). SG block revealed that the right SG is largely responsible for increasing heart rate and slowing atrioventricular conduction, and primarily affects the right atrium rather than the left atrium. In contrast, the left SG has a lesser effect on heart rate and atrioventricular conduction and primarily affects the left atrium rather than the right atrium. Modulating the sympathetic innervation of the atria is becoming an increasingly important therapeutic approach; for example, neuromodulation therapy by electrical stimulation or renal denervation has shown great success in treating diseases like atrial fibrillation via remodeling of the stellate ganglion and reducing sympathetic output. Therefore, selective targeting of the sympathetic innervation of either side of the heart can have different effects. Our topographical map of TH-IR axon innervation in the atria could be used as a cardiac sympathetic atlas to navigate more precise control of different heart regions. Knowledge of the location and density of cardiac sympathetic postganglionic innervation may also help to elucidate the normal physiology and the abnormal patterns in certain pathological conditions. Our quantitative analysis shed light on the atrial regions that received the highest TH-IR axon innervation, which could potentially indicate more precise control in these areas.
In the RA, we found the highest innervation density of TH-IR axons in the SAN, which supports the fact that the sympathetic nervous system has a role in the fine-tuning of heart rate. This could also indicate potential therapeutic targets, as blockade of neuronal input with propranolol (a beta blocker) leads to a decrease in heart rate. In the LA, the highest density of TH-IR axons was observed at the entrance of the PVs to the LA. The junction of the left atrium and pulmonary veins has been indicated to be a focal source responsible for the initiation of atrial fibrillation. Therefore, further functional studies of these great vein–atrial junction regions, which were the most densely innervated by TH-IR axons in our quantitative analysis, are valuable to better understand the physiology and pathology of atrial fibrillation. Considering that understanding how sympathetic neurons communicate with their cardiac targets is essential for understanding how the heart works, our results provide a basis for understanding the role that TH-IR axon-specific innervation plays in the control of the normal heart as well as in the diseased heart.
Limitations
A couple of limitations must be acknowledged. Neurolucida 360 TH-IR axon tracing: despite our effort to trace TH-IR axon bundles and their projection fields, it was not feasible to trace the smallest branches and individual axons in the whole atria. Our continuing collaboration with MBF Bioscience in SPARC MAP-CORE to improve the customized settings for autotracing of our labeled axons should ensure more precise and faster tracing. Density of single or double layers: due to great differences in the thickness of the atria in different regions, some areas had to be separated into single layers to ensure a fair comparison of the density. Moreover, our regional density analysis of TH-IR axon innervation in the atria was performed using 2D projection images that present the dense structures along the z-axis in a single two-dimensional image. To gain a more accurate representation of the innervation considering the depth of the tissue, a 3D representation of the entire image stack of the atria should be reconstructed to quantify the density for each image stack.
Summary and future directions
We have determined the topographical innervation of TH-IR axons in the flat-mount of the whole atria at single cell/axon/varicosity scale. Several TH-IR axon bundles entered the atria through the SVC and LPCV, and these bundles had different projection fields. A clear lateralization preference was found: the right and left bundles preferably innervated the right and left atrium, respectively. In addition, the regional density analysis showed that TH-IR axon innervation in the RA was more abundant than in the LA. In the RA, the SAN, AVN region and internodal conducting fibers showed higher density than the other regions. The LA-PV junction had the densest TH-IR axon innervation in the LA. Furthermore, TH-IR bundles and axons passed through the ICG with very limited innervation around ICG neurons, but densely innervated the blood vessels and fat cells. A schematic diagram that summarizes our main findings is shown in Fig. . This work contributes to the cardiac-sympathetic brain connectome. However, anterograde tracer injections into the stellate ganglia to specifically map sympathetic postganglionic projections to the heart should be conducted in the future to address some limitations, including identifying the source of postganglionic TH-IR axons and characterizing terminal structures.
In addition, our work provides an anatomical foundation for functional mapping of sympathetic control for the heart as well as evaluation of the remodeling of cardiac sympathetic innervation in chronic disease models (hypertension, diabetes, sleep apnea, heart failure, aging). Innervation field of TH-IR axons Several studies have reported the distribution of catecholaminergic nerve fibers utilizing sectioned or whole mounts of partial atrial preparations , – . The main limitation of such approaches is that the experimental approach damaged the intricate three-dimensional structures of axons and terminals in these tissues. Additionally, sections or partial flat mounts did not provide a comprehensive topographical map to assess the distribution and morphology of sympathetic postganglionic efferent axons and terminals across the entire atria. Recently, tissue clearing procedures have permitted an enhanced 3D view of the whole heart innervation . However, visibility of fine axons and terminals in the whole heart remained restricted with tissue clearing procedures. In addition, tissue clearance diminished the visibility of other cardiac targets such as ganglion cells, muscles, blood vessels, and adipocytes. In order to highlight the complex patterns of TH-IR axons and their terminal networks in atrial and targets, greater resolution imaging is required. Our study has addressed these limitations by providing a comprehensive topographical map of the distribution, and morphology of TH-IR axons and terminals in the atria of mice using flat-mounts of the whole atria. Consistent with previous studies on mouse and other species , – , we found a very dense TH-IR axon innervation in the atria. Additionally, the entrance points of the major TH-IR bundles to the atria, which were determined in our study, are similar to those that were ascertained previously , , . Different from prior reports, our study provided a complete, comprehensive map of TH-IR axons in the atria at single cell/axon/varicosity scale. In the connected atria, we observed that several TH-IR axon bundles (4–5) entered the atria through the SVC and LPCV and bifurcated into smaller bundles that eventually ramified into individual axons forming different projection fields with a certain degree of overlap. Presumably, these bundles were mostly from the left and right sympathetic stellate ganglia. Previous studies using retrograde tracer and stellate ganglionectomy showed that the majority of sympathetic postganglionic innervation originates from the stellate ganglia , . Our tracing of TH-IR axons showed clear lateralization as bundles from the right mainly projected towards the right atrium and SAN, while bundles from the left side showed preferential innervation of the left atrium. Our findings reveal detailed regional differences of TH-IR innervation in the entire atria, which enriches our knowledge regarding the differential sympathetic control over distinct regions. Quantitative analysis of TH-IR regional density Catecholaminergic axon innervation of the atria displays significant anatomical heterogeneity and several studies have attempted to assess the density of cardiac sympathetic nerves at different sites of the heart , . Although previous studies quantified the density of TH-IR axons at specific sites, they only utilized sections or partial atrial preparations. Thus, a more complete quantitative analysis of TH-IR axon density in the whole heart has not been determined. 
In our study, we addressed the mentioned shortcomings and analyzed the distribution and density of TH-IR axons in the flat-mount of the whole RA and LA at a high resolution (40X oil lens). The density of TH-IR axons showed regional differences across the atrial wall. In the RA, TH-IR axons and terminals were the densest in the SAN region, followed by the AVN region and other regions, which is similar to what was found in other studies – . In the LA, the density of TH-IR axons was the highest at the LA-PV junction which was pointed out to be an area richly innervated with sympathetic nerves . The auricles, one of the most prominent structural features of the right and left atrium, play an important role in pumping the blood within the heart with its capacity to expand during each heartbeat . The differential regional distribution of TH-IR axon innervation indicated by our density assessment gives insight to localized effects of catecholaminergic innervation of the atria. Our results could set the foundation for future physiological studies of anatomical remodeling in pathological conditions. TH-IR ICG neurons and TH-IR axons Traditionally, it was thought that all ICG neurons in guinea pigs and rats were exclusively cholinergic , . However, recent studies demonstrated that ICG neurons exhibit diverse neurochemical phenotypes (including TH, ChAT, nNOS, VIP, NPY) , that extend beyond the traditional concept of cholinergic neurons. A subpopulation of the ICG neurons were also found to be TH-IR in mice which aligns with our findings , . Similar to previous studies, we have observed the ICG being located primarily on the outer surface of the atria near the entrance of the pulmonary veins to the LA and near the SAN and AVN , . Our work in mice showed TH-IR neurons in the ICG with TH-IR axons going through the ganglia without apparent innervation. This differs from what was found in guinea pigs and rats where some TH-IR varicosities were seen around ICG neurons , . These previous findings may be somewhat overestimated by their use of partial preparations that cannot be extrapolated to all ICG neurons. In this study we aimed to assess TH-IR axons that cross through all ICG located on the RA and LA. We found only a few TH-IR axons (if any) were in close contact with ICG neurons. Higher magnification should be used in the future to ensure there is no underestimation of TH-IR axons presence around ICG neurons. In support of this finding in mice, our recent study in pigs showed that TH-IR axons traveled through the ICG without forming varicosities surrounding the principal neurons (PNs) . The lack of TH-IR varicosities wrapping tightly around TH-IR neurons in the ICG contrasts with what was observed in the gastrointestinal tract where TH-IR varicosities tightly surround the PNs in the myenteric ganglia . Prior research indicated that mice ICG are immunoreactive to dopamine-beta-hydroxylase (DBH) and norepinephrine transporter (NET), but they lack vesicular monoamine transporter 2 (VMAT2) . This is in contrast to the nerve fibers and stellate neurons which are positive for DBH, NET, and VMAT2. The lack of VMAT2 renders the neurons in the mice ICG functionally non-noradrenergic due to their inability to transport dopamine and norepinephrine into synaptic vesicles . However, there were limited studies on the function of TH-IR neurons in the ICG, and further studies are needed to explore the functions of TH-IR neurons in the ICG of different species. 
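As an illustration of how a regional density measure of this kind can be expressed computationally, the sketch below skeletonizes a binarized TH-IR channel from a 2D projection and normalizes axon length to tissue area for each atrial region. This is a generic sketch under assumptions of my own (segmentation already performed, hypothetical region masks and pixel size); it is not the Neurolucida 360 tracing workflow used in this study.

```python
# Illustrative sketch only: a generic way to express "TH-IR axon density per
# atrial region" from a binarized 2D maximum-intensity projection. This is not
# the tracing workflow used in the study; all inputs are assumed to exist.
import numpy as np
from skimage.morphology import skeletonize

def axon_density_per_region(th_mask, region_masks, um_per_pixel):
    """Return approximate TH-IR axon length (um) per tissue area (mm^2) by region.

    th_mask      : 2D boolean array, True where TH-IR signal was segmented.
    region_masks : dict of region name (e.g. 'SAN', 'LA-PV junction') -> boolean
                   mask of the same shape outlining that region.
    um_per_pixel : lateral pixel size of the projection image.
    """
    skeleton = skeletonize(th_mask)  # 1-pixel-wide centerlines of the axon network
    densities = {}
    for name, region in region_masks.items():
        axon_length_um = skeleton[region].sum() * um_per_pixel      # rough total length
        tissue_area_mm2 = region.sum() * (um_per_pixel / 1000.0) ** 2
        densities[name] = (axon_length_um / tissue_area_mm2
                           if tissue_area_mm2 > 0 else float("nan"))
    return densities

# For a z-stack, the same measure can be taken slice by slice rather than on a
# single projection, which is the refinement noted in the Limitations.
```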
Population, Clinical, and Scientific Impact of National Cancer Institute's National Clinical Trials Network Treatment Studies
6adf8184-1016-42db-ac8f-3edce84a2b8f
10082246
Internal Medicine[mh]
Cancer is a devastating group of diseases with enormous adverse impacts on population health. Cancer remains the leading cause of lost life-years in the United States, with 9.3 million years of life lost in 2019 alone. For an individual with cancer, the estimated average number of life-years lost is 15.2. Fortunately, through the combined efforts of better early detection, prevention, and improved cancer treatments, cancer mortality has begun to decrease. The annual percentage reduction in cancer mortality in the United States was 2.1% from 2015 to 2019, twice the annual rate of reduction of 1.0% from 1992 to 2001. Decreasing mortality over the past 2 decades has resulted in the reduction of more than 3 million cancer-related deaths. - These combined efforts are vital given the aging of the US population and the fact that most new cancer cases occur in individuals 65 years or older. CONTEXT Key Objective The National Cancer Institute's National Cancer Clinical Trials Network (NCTN) groups have conducted publicly funded oncology research for more than 50 years. In a collaboration among the four large adult NCTN groups, we systematically evaluated the combined impact of positive randomized trials since 1980. Knowledge Generated The 162 trials that were analyzed comprised 108,334 patients. These trials were cited 165,336 times through 2020, with 87.7% of trials cited in cancer care guidelines in favor of the recommended treatment. The experimental therapies from the trials were estimated to have generated 14.2 million additional life-years to patients with cancer through 2020. Relevance (J.W. Friedberg) The impact of US NCTN trials on adult cancer outcomes cannot be overstated; this evidence should compel sustained financial investment and continued academic contributions to this valuable resource.* *Relevance section written by JCO Editor‐in‐Chief Jonathan W. Friedberg, MD. The year 2021 marked the 50th anniversary of the National Cancer Act, signed into law in 1971 with the express purpose to “more effectively … carry out the national effort against cancer.” The act launched a decades-long effort to combat cancer under the guidance of the National Cancer Institute (NCI) within the National Institutes of Health. A key part of the NCI's mandate is the sponsorship of a set of large, national adult cancer network research groups that combine the efforts of physician-researchers, laboratory scientists, biostatisticians, nurses, clinical research associates, and patient advocates across academic and community cancer centers to conduct clinical trials. This NCI-sponsored National Clinical Trials Network (NCTN) coordinates and supports trials at more than 2,200 sites across the United States and internationally. These groups have been conducting research paid for by the US government for more than 5 decades, with the goal to identify new, effective treatments for patients with cancer. Significant work is also conducted in the realm of oncology population science, including cancer control and prevention. The groups' main research shares with pharmaceutical company trials the goal of identifying treatments with the potential to improve overall survival. The network groups also compare combinations of agents, test regimens in rare diseases, and assess different modalities, such as surgery and radiation. While widely thought to conduct high-quality research with potentially meaningful results, the actual impact of all adult NCTN trials has never been systematically assessed. 
In a first-time collaboration combining data on positive trials conducted by the NCTN adult cancer groups, our aim was to systematically examine and characterize population, clinical, and scientific impact of the NCTN over the most recent 4 decades.
Data
We identified randomized phase III trials from the four adult network groups: the SWOG Cancer Research Network, the Alliance for Clinical Trials in Oncology, the ECOG-ACRIN Cancer Research Group, and NRG Oncology. Primary study findings must have been reported from 1980 onward and demonstrated statistically significant results for one or more clinical, time-dependent outcomes (such as overall or progression-free survival) in favor of experimental treatment. Experimental treatments identified as beneficial but that were too toxic for study authors to recommend in the primary publication were excluded. Information on ethical review and informed consent of participants for each trial were included in study publications. This study relied on previously published trial reports for which patient-level data were not identifiable; thus, institutional review board approval of the study was not required.
Statistical Methods
Population impact. Population impact—defined by gains in population life-years—was estimated for all trials for which overall survival favored the experimental treatment arm, regardless of whether the benefit was statistically significant. Thus, in several cases, experimental treatment was observed to provide a statistically significant benefit for an intermediate end point (eg, progression-free survival), but a nonsignificant beneficial trend for overall survival. Life-year gains based on such trials were included to provide an empirical translation of intermediate end points into life-years. Life-year gains were also calculated for noninferiority trials if there was improved overall survival for the experimental treatment. On the basis of a previously published method, for each trial-proven new treatment for a given type of cancer, life-years gained (LYG) at the population (Pop) level was calculated as the product of model-estimated additional life accrued to the average patient (Pt) and multiplied by the number of patients in the cancer population (N_CaPop) who would benefit from the new treatment (ie, LYG_Pop = LYG_Pt × N_CaPop).
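Restated compactly (notation only; S_C and S_E denote the control- and experimental-arm survival functions and t_1/2 the mean age-specific half-life defined in the next paragraph):

```latex
\mathrm{LYG}_{\mathrm{Pop}} \;=\; \mathrm{LYG}_{\mathrm{Pt}} \times N_{\mathrm{CaPop}},
\qquad
\mathrm{LYG}_{\mathrm{Pt}} \;=\; \int_{0}^{t_{1/2}} \bigl[\, S_{E}(t) - S_{C}(t) \,\bigr]\, dt .
```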
Life-years gained for the average patient (LYG Pt ) was estimated by deriving trial-specific survival function parameters depicting the difference in survival between standard and experimental treatments and mapping the benefit of the experimental treatment onto the US cancer population using national cancer registry and life-table data. For improved representativeness, rather than using the survival outcomes for the control arm for patients enrolled in the trial, the hazard rate for the control arm was estimated using cancer population survival data for incident cases during the trial enrollment period that met trial eligibility criteria. To derive the survival function for the experimental arm, we obtained the hazard ratio for the benefit of new treatment from the trial publication. The benefit of experimental treatment increased average survival during the treatment benefit period as the product of the hazard rate for the control arm and the hazard ratio. In the post-treatment benefit period, average survival for both the control and experimental arms was assumed to extend under a pattern of exponential decay until mean age-specific half-life on the basis of life-table data, a conservative assumption. Average life-years gained on the basis of the new treatment for a given individual was then calculated as the difference in the area under the survival functions (ie, Kaplan-Meier curves) between control and experimental arms from diagnosis to mean half-life. To derive the number of patients in the cancer population to whom the new treatment would apply (N CaPop ), we matched the major cancer type, stage, tumor characteristic, prior cancer, surgery, sex, and age (ie, ≥ 18 years) eligibility criteria from the clinical trial to corresponding cancer population data using the Surveillance, Epidemiology, and End Results (SEER) program. The number of corresponding patients in the SEER data set was inflated by a factor of 1/ P SEER , where P SEER is the proportion of the US population the SEER data set represented. Calculations were stratified by 14 five-year age intervals (20-25, …, 81-85, and > 85 years), since life-years vary by age, and were conducted for each year from publication of trial results through 2020. To derive a 95% confidence limit, we iteratively sampled (using 400 iterations) the coefficient for the treatment effect from each trial, drawing from distributions on the basis of the observed point estimate and its variation under a normal distribution. In the base-case model, we assumed that the treatment benefit period was the first 5 years after diagnosis, that the overall survival treatment effect translated fully (with 100% effectiveness) to the corresponding cancer treatment population defined by the trial eligibility criteria, and that uptake of new treatments into clinical practice occurred in conjunction with the year of primary article publication. In a sensitivity analysis, we allowed the duration of treatment benefit to range from 3 to 7 years in 1-year intervals, the effectiveness parameter to vary from 80% to 120% (since generalizability may be incomplete for all patient groups, or conversely, newly proven treatments may be effectively used off-label) in 10% increments, and the year of adoption to vary from 2 years before trial publication (if adoption occurs early in conjunction with a conference presentation) to 2 years after trial publication (if uptake is delayed, especially for medically disadvantaged groups). 
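A minimal computational sketch of the per-trial model just described may help make the steps concrete. Everything below is illustrative: the exponential hazards, single age stratum, hazard ratio, eligible-case count, SEER coverage fraction, and standard error are assumptions of mine, not study inputs. Only the overall structure (control hazard from registry data, hazard ratio applied during a 5-year benefit period, area difference between survival curves to the mean half-life, scaling by 1/P_SEER, and 400 resampling iterations for the CI) follows the description above.

```python
# Minimal sketch of the base-case life-years-gained model described above, under
# simplifying assumptions of my own: exponential survival, a single age stratum,
# and made-up inputs. The published model stratifies by 14 age groups and uses
# SEER registry survival and life-table half-lives rather than these constants.
import numpy as np

def life_years_gained_per_patient(lambda_control, hazard_ratio,
                                  benefit_years=5.0, mean_half_life_years=15.0,
                                  step=0.01):
    """Difference in area under the experimental vs control survival curves,
    accumulated from diagnosis to the mean age-specific half-life."""
    t = np.arange(0.0, mean_half_life_years, step)
    # Control arm: constant hazard estimated from registry survival of eligible cases.
    s_control = np.exp(-lambda_control * t)
    # Experimental arm: the published hazard ratio applies during the benefit
    # period; afterwards both arms decay at the control-arm rate.
    hazard_exp = np.where(t < benefit_years, hazard_ratio * lambda_control, lambda_control)
    cum_hazard = np.concatenate(([0.0], np.cumsum(hazard_exp * step)[:-1]))
    s_exp = np.exp(-cum_hazard)
    return float(np.sum(s_exp - s_control) * step)

def population_life_years_gained(lyg_per_patient, n_seer_eligible, p_seer_coverage):
    """Scale SEER-eligible case counts to the US population by the factor 1/P_SEER."""
    return lyg_per_patient * n_seer_eligible / p_seer_coverage

# Hypothetical example: HR 0.80, 10%/year control-arm hazard, 3,000 eligible SEER
# cases, with SEER covering ~35% of the US population.
lyg_pt = life_years_gained_per_patient(lambda_control=0.10, hazard_ratio=0.80)
print(round(population_life_years_gained(lyg_pt, 3_000, 0.35), 1))

# 95% CI in the spirit of the text: redraw the treatment-effect coefficient
# (log hazard ratio) 400 times from a normal distribution around the point
# estimate; the standard error of 0.05 is an assumed input.
rng = np.random.default_rng(0)
hr_draws = np.exp(rng.normal(np.log(0.80), 0.05, size=400))
lyg_draws = [population_life_years_gained(life_years_gained_per_patient(0.10, hr),
                                          3_000, 0.35) for hr in hr_draws]
print(np.round(np.percentile(lyg_draws, [2.5, 97.5]), 1))
```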
In an additional sensitivity analysis, to derive the hazard rate for the control arm, we used observed survival outcomes from the clinical trial rather than from cancer population data.
Clinical impact. Clinical impact was defined by whether trial findings were included as evidence in favor of a recommended treatment in a major clinical guideline or in package inserts for US Food and Drug Administration (FDA) new drug approvals. The primary source was the National Comprehensive Cancer Network (NCCN) clinical practice guidelines from 1996 onward. Trials that supported other major guidelines (ie, ASCO and ESMO) were included to account for earlier years for which NCCN guidelines were not available. To identify whether a trial supported an FDA new drug approval, we generated a catalog of FDA-approved anticancer drugs and obtained the package inserts for any trial for which the experimental agent was included in the catalog. Trials cited as pivotal in the package inserts were categorized as practice influential. All determinations were made independently by two authors (R.V. and J.M.U.) with disagreements resolved by a third author (C.D.B.).
Scientific impact. The primary article for a trial was the article reporting the results of the analysis for the primary protocol-specified end point. Using a bibliometric approach, scientific impact was defined by how often the primary trial report was cited through Google Scholar. Totals were summed by year and over time. Additionally, we reported the frequency with which the primary articles were published in high impact (2-year impact factor > 10) journals on the basis of contemporary rankings.
Cost Analysis
Total federal investment funding to conduct the trials and costs per life-year gained were calculated as the sum of estimated funding for all four NCTN groups using publicly available data (Data Supplement, online only). Impact estimates for all domains were assessed through December 31, 2020.
Study Characteristics
Overall, 544 trials were assessed to have been conducted during the study period, and 189 trials were considered for inclusion in the analysis on the basis of determination by the study team (Data Supplement). Twenty-seven trials were excluded for the following reasons: not a treatment trial (6), positive result but too toxic to recommend (6), not a positive trial (4), positive trial but not for a time-to-event end point (ie, tumor response only; 8), and other reasons (3; Data Supplement). Therefore, 162 trials published from 1981 to 2018 comprised of 108,334 patients were analyzed, representing nearly one third (162/544, 29.8%) of trials conducted by the groups (Data Supplement). A wide variety of cancers were studied, with the most common tumors involving breast (34, 21.0%), gynecologic organs (28, 17.3%), and lungs (14, 8.6%; Table ). Nearly all trials (155, 95.7%) had superiority designs, and most (130, 80.2%) included chemotherapy. The majority (113, 69.8%) were conducted between 1990 and 2009.
Population Impact
Overall, 82.1% (133/161) of trials showed overall survival favoring the experimental arm to some extent, including 92 instances (56.8%) where overall survival for the experimental arm was statistically significantly superior. Through 2020, these trials were estimated to have contributed to 14.2 million (95% CI, 11.5 to 16.5 million) additional life-years (Fig ). For the same trials, the projected estimates for 2025 and 2030 were 19.0 million (95% CI, 16.1 to 22.3 million) and 24.1 million (95% CI, 19.7 to 28.2 million) life-years gained, respectively (Fig ).
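A quick worked check of the trial accounting reported above (arithmetic on the stated counts only, no new data):

```python
# Worked check of the trial accounting reported above.
considered = 189
excluded = 6 + 6 + 4 + 8 + 3               # exclusion reasons listed in the text
analyzed = considered - excluded           # 162 trials
assessed = 544
print(excluded, analyzed, round(100 * analyzed / assessed, 1))  # 27 162 29.8
```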
In the sensitivity analysis varying trial parameters, the range was 7.8-22.9 million life-years gained, with 91.2% of estimates exceeding 10.0 million life-years (Fig ). In sensitivity analysis that used observed outcomes from the clinical trial to derive the control arm hazard function, the estimated life-years gained through 2020 was 15.3 million, greater than our base-case estimate of 14.2 million. The estimated total federal investment cost to conduct the trials was $4.63 billion in US dollars (USD) in 2020, or $326 (USD) per life-year gained through 2020.
Clinical Impact
Overall, 87.7% (142/162) of trials were found to have had documented influence on cancer care guidelines, including 26 instances for both gynecologic cancers and breast cancer (Fig ). The proportion of trials that influenced guidelines was 95.6% (108/113) for trials published after NCCN guidelines were available in 1996 and 69.4% (34/49; P < .001) before NCCN guidelines were available.
Scientific Impact
Primary trial results were cited 165,336 times through 2020 (mean, 62.2 citations/trial/year). More than half had 500 or more citations through 2020 (Fig ). Trial results were frequently published in high-impact journals (146/162, 90.1%), including the Journal of Clinical Oncology (77, 47.5%), the New England Journal of Medicine (49, 30.2%), The Lancet (6, 3.7%), Blood (5, 3.1%), JAMA (3, 1.9%), Journal of the National Cancer Institute (3, 1.9%), The Lancet Oncology (2, 1.2%), and JAMA Oncology (1, 0.6%).
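The guideline-influence proportions follow directly from the reported counts (a worked check, nothing new):

```python
# Worked check of the guideline-influence proportions reported above.
print(round(100 * 142 / 162, 1))  # 87.7% of all analyzed trials
print(round(100 * 108 / 113, 1))  # 95.6% of trials published after 1996
print(round(100 * 34 / 49, 1))    # 69.4% of trials published before 1996
```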
This study, representing the first time that the cumulative survival benefits of phase III trials across all adult cooperative groups have been examined, demonstrates that NCI-sponsored NCTN research has contributed meaningfully to extending the lives of patients with cancer, at a low cost. The 162 trials we examined contributed to gains of an estimated 14.2 million life-years to patients with cancer in the United States at a federal investment cost of $326 (USD) per life-year gained. The same studies are projected to contribute 24.1 million life-years by 2030. Most of the trials (87.7%) influenced guideline care recommendations, and the trials contributed enormously to the scientific literature, with primary trial reports nearly all published in high-impact journals and cited more than 165,000 times through 2020. The NCTN groups are a vital component of the scientific infrastructure of the United States. Their genesis—accelerated by the 1971 National Cancer Act—has been a key driver in reducing the mortality rate from cancer. Since 1991, the mortality rate due to cancer in the United States has decreased by 31%. This reduction is partly attributable to improved screening and early detection, improvements in diagnosis, and the development of prevention strategies and interventions, but advances in treatment have also been critical and have been documented in recent assessments. From 2013 through 2017, cancer death rates decreased 1.5% on average, including 2.0% for Black persons. Recent improvements have been particularly apparent for certain diseases such as melanoma and lung cancer. To contextualize the findings of this study, a gain of 14.2 million life-years through 2020 due to the contributions of NCTN trials has returned 4.2% of the 336.8 million years of life lost due to cancer from 1980 to 2020 in the US population (Data Supplement). These life-year gains were derived from a minimal federal investment. A study by Islami et al showed that the years of life lost in 2015 (8.7 million) for persons age 16-84 years in the United States resulted in an estimated $94.4 billion (USD) in lost future earnings. Another study showed that the national expenditure for cancer care in the United States in 2015 was $183 billion (USD) and is projected to increase to $246 billion (USD) by 2030. Set against these estimates, the investment of $4.63 billion (USD) in the conduct of clinical trials by the NCTN groups over 40 years seems comparatively small.
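The headline cost-efficiency and context figures in this paragraph can be reproduced from the totals stated above (a worked check using only the reported numbers):

```python
# Worked check of the cost per life-year and the share of years of life lost returned.
total_federal_investment_usd = 4.63e9
life_years_gained_through_2020 = 14.2e6
years_of_life_lost_1980_2020 = 336.8e6

print(round(total_federal_investment_usd / life_years_gained_through_2020))           # ~326 USD per life-year
print(round(100 * life_years_gained_through_2020 / years_of_life_lost_1980_2020, 1))  # ~4.2 percent
```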
The mission of the NCTN groups is to change clinical practice and to improve outcomes for patients. As shown, most NCTN trials with positive clinical end points (87.7%) informed guideline care. Importantly, nearly all trials (95.6%) reported since 1996—when comprehensive NCCN guidelines became available for review—were identified as having documented influence on cancer care guidelines, suggesting the true underlying rate may be even higher than our overall estimate of 87.7%. Trials conducted by the NCTN program impact patients with cancer in ways not included in our study. For instance, survivors from cancer can suffer lifelong consequences including morbidity, reduced quality of life, and economic hardship. NCTN trials provide improved access to protocol treatments for vulnerable patient populations that may not be routinely offered by pharmaceutical company trials. Further, NCTN trial databases are vital resources for conducting secondary data analyses that generate new hypotheses and important insights into the mechanisms of malignancy. A key element of the mission of the NCTN is to mentor the next generation of clinical researchers and to translate research into evidence-based practice. , Finally, the NCTN groups conduct trials to identify strategies to prevent and control cancer. Although this study represents a first-time comprehensive evaluation of the impact of NCTN trials across important domains, it has limitations. An assessment of the impact of negative trials was not included. Negative trials also routinely guide clinical practice guidelines by identifying which treatments should not be used. In doing so, negative trial results can greatly limit the tremendous human and financial resources that may otherwise be committed to ineffective therapeutic approaches. Additionally, negative trials are key sources for secondary analyses, the scientific impact of which can be substantial. Further, the estimate of life-years gained relied on trials with improved overall survival for the experimental arm. However, trials routinely focus on earlier end points (eg, progression) and thus may not have data to fully characterize overall survival patterns. Also, in some instances, the experimental therapy is so clearly superior to standard care that a trial will be closed early; in others, the use of a cross-over design or a design without a standard-of-care arm might preclude assessment of life-years gained from treatments with demonstrable clinical benefits. - Additionally, projected estimates for 2025 and 2030 were based only on currently identified trials, with no attempt made to model how many studies will be positive in the future. From all of these perspectives, our estimate of life-years gained is likely conservative. Also, our impact metrics did not fully reflect the potential benefits of positive noninferiority or equivalency trials, which can benefit patients in terms of reduced toxicity, more convenient care delivery, and/or reduced costs without clinically meaningful reductions in outcomes. Because our study was based on overall trial findings, we were unable to estimate whether the different measures of impact differed for vulnerable patient populations, such as those with lower socioeconomic status. We recognize that the conduct of a randomized clinical trial represents the culmination of a lengthy discovery process that includes initial drug discovery and early testing, the costs of which were not included in our assessment. 
Also, cancer guidelines frequently rely on multiple trials to inform guideline care recommendations. Finally, federal investment dollars do not fully cover the costs of conducting trials, including the establishment and support of institutional trial programs and the time and effort of research investigators. In conclusion, the NCI-sponsored NCTN groups represent a vital and durable element of the scientific infrastructure of cancer clinical research in the United States. Randomized trials conducted by NCTN groups have contributed substantial gains in life-years for patients with cancer, and the studies have had a marked impact on cancer treatment guidelines and the scientific literature. Collectively, these findings demonstrate how publicly funded oncology research plays a vital role in informing clinical practice and extending the lives of patients with cancer.
Improving person-centered occupational health care for workers with chronic health conditions: a feasibility study
87772a38-04f1-4dcd-8975-620e7ae2a756
10082533
Patient-Centered Care[mh]
With the increase in the retirement age in most industrialized countries, the number of working-age people with a chronic condition has increased in recent years. In Europe, a quarter of the working population reports suffering from a chronic condition. It is projected that the prevalence of chronic health conditions within the working population will continue to rise in the coming years. The prevalence of chronic conditions in Europe increased from 19 to 28% between 2010 and 2017 among people of working age. Chronic conditions may have a significant impact on work participation due to physical, emotional or social issues. Significantly more workers with a chronic health condition leave paid employment due to unemployment, early retirement or receiving a disability pension compared with workers without a chronic condition. Return-to-work has been recognized as an important indicator of recovery of health and functioning and of societal participation. It is, therefore, important to facilitate return-to-work and promote work participation for people with a chronic condition. Person-centered care has been acknowledged to positively influence people with a chronic health condition in terms of occupational performance and satisfaction. Person-centered care aims to provide care that is tailored to an individual person’s preferences, needs and values. Person-centered care does not merely concern the individual person, but takes into account the entire person, including the context and surroundings. The body of evidence surrounding person-centered care has increased over the past decade. For instance, a systematic review found that person-centered care contributes to improved quality of care, self-efficacy, and psychological and physical health status in patients with long-term chronic health conditions through the use of personalized care planning. Additionally, to prevent prolonged sickness absence, person-centered care provided by clinicians was found to contribute to higher rates of return-to-work. However, the implementation of person-centered care might be challenging for health care professionals due to the lack of clear professional guidelines, the lack of suitable personnel to deliver person-centered care and challenges in embedding person-centered care in the routine care process. Within the field of occupational health care, person-centered guidance and work ability assessment by occupational physicians (OPs) and insurance physicians (IPs) have gained increasing recognition in recent years. Attention is growing towards enhancing self-control of workers with chronic conditions, understanding workers’ cognitions and perceptions regarding living with a chronic disease and work functioning, and involving significant others in supporting work participation [ – ]. In order to support the changing role of OPs and IPs to deliver more person-centered guidance and assessment, training programs and an e-learning training with accompanying tools have been developed [ – ]. The goal of the developed training programs and e-learning training with accompanying tools is to (1) increase self-control of workers with a chronic health condition by helping OPs to create a supportive work environment, (2) increase the ability of OPs and IPs to involve cognitions and perceptions in the guidance and assessment of workers, and (3) support OPs and IPs to involve significant others in the re-integration process.
To support better uptake of the developed training programs, e-learning training, and accompanying tools in practice, it is important to understand the factors affecting the implementation, practicality and integration. The factors affecting the implementation include the degree, possibility and manner in which an intervention can be fully embedded in practice. Practicality focuses on the resources, time, commitment, or combination of these, needed to deliver an intervention in practice. Integration entails the changes needed in a system or environment to integrate an intervention into existing infrastructures. In a previous study, determinants for the implementation of person-centered tools were identified. The most important determinant was taking the needs of workers with a chronic health condition into account. The results of this previous study give insight into the required focus for the implementation of person-centered tools in the field of occupational health care, but do not indicate whether it is feasible for professionals to apply the knowledge and skills gained in the training programs and tools in practice, and to embed the accompanying trainings and e-learning training in educational programs for OPs and IPs. Therefore, it is important to investigate the feasibility of implementation from both an educational and a professional perspective. Investigating the feasibility of the developed training programs, e-learning training and accompanying tools is important, as such investigations lay the basis for broader application of research knowledge in practice. The aim of this study was, therefore, to investigate the feasibility of the training programs and e-learning training with accompanying tools to enhance the supportive and coaching role of OPs and IPs in the guidance and assessment of workers with chronic health conditions. The goal was to provide insight into how to facilitate implementation of the training programs and e-learning training into educational structures and practice. In order to investigate the feasibility, a qualitative study with semi-structured interviews was conducted. A qualitative research design was deemed most appropriate for this purpose, as it allows for a richer understanding of considerations for sustainable uptake and use of the previously developed training programs and e-learning training and accompanying person-centered tools in practice. The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist was used to report on the interview process.
Research setting and context
In the Netherlands, two medical professions constitute the provision of occupational health care: OPs and IPs. The OP is generally involved in the process of vocational support and return-to-work guidance for employees in the first two years of sick leave, as well as taking on preventive tasks such as promotion of healthy working conditions, improving sustainable employability, and early identification and treatment of occupational diseases. After the two-year sick leave period, a sick-listed employee is assessed for eligibility for a disability benefit by an IP. The IP assesses the functional abilities, limitations, and consequences for a person’s work ability. In case of partial work disability, the IP can refer the employee for return-to-work interventions or support. For an employee who does not (or no longer) have an employer, the IP takes on the role of an OP to guide and support return to work in the first two years of sick leave.
After the basic medical education, both professions receive a four-year postgraduate training at dedicated non-profit educational institutes. The resident training to become either an OP or IP is offered at either the Netherlands School of Public & Occupational Health or at the social medicine education department of the Radboud university medical center. In the Netherlands, OPs and IPs, when officially registered as practicing physicians, need to follow continuing professional education. The continuing professional education can be offered by non-profit educational institutes, the educational department of the Dutch Social Security agency or private suppliers. The development and evaluation of the trainings and e-learning training were part of a larger Dutch research program aimed at contributing to improved worker-focused occupational health care. The research program consists of three research projects which were conducted in parallel to improve the supporting role of OPs and IPs in occupational health care for workers with chronic health conditions. As aforementioned, the topics covered in the training programs and e-learning training were, respectively: creating a supportive work environment to enhance self-control of workers, involving cognitions and perceptions of workers in guidance and assessment, and involving significant others in the re-integration process. More information on the content of the training programs, e-learning training and accompanying tools can be found in Table . The training program on creating a supportive work environment was targeted only at OPs. In the Dutch context, IPs generally do not have direct contact with an employer and therefore only OPs have the possibility to directly influence the work environment. The three projects previously evaluated the developed training programs and e-learning training in terms of acquired knowledge and skills, and satisfaction with the trainings and e-learning training. With regard to the training on creating a supportive work environment to enhance self-control of workers, participants were asked about their satisfaction with the training and a process evaluation was conducted identifying possible barriers and facilitators for broader implementation. With regard to the training on involving workers’ cognitions and perceptions in guidance and assessment, the effect of the training program on the ability of OPs and IPs to identify workers’ cognitions and perceptions and to recommend evidence-based interventions to address limiting cognitions and perceptions of workers was studied in a randomized controlled trial. In addition, the satisfaction with the training program was evaluated by means of a questionnaire and some feasibility aspects were evaluated during interviews. With regard to the e-learning training on involving significant others in the re-integration process, a randomized controlled trial was conducted to evaluate its efficacy in improving OPs’ and IPs’ knowledge, attitudes, and self-efficacy to involve significant others in the return-to-work process. Furthermore, the OPs’ and IPs’ responses to and satisfaction with the e-learning training were explored. The study was considered not to fall under the Dutch Medical Research Involving Human Subjects Act (WMO), as confirmed by the local ethics committee of the Amsterdam UMC (Reference number: W19_949#20.012 and W20_024#20.050). The study was conducted according to the guidelines laid down in the Declaration of Helsinki.
Data collection and participants
Qualitative data on the implementation, practicality and integration of the developed trainings and e-learning training and accompanying person-centered tools were collected based on the feasibility study design recommendations by Bowen et al. (2009). Following the definitions formulated by Bowen et al., the following feasibility aspects were examined: implementation, practicality and integration. These focus areas were deemed most important to facilitate future implementation into practice and education. For the data collection, semi-structured interview guides (Additional file ) were used based on the selected focus areas from Bowen et al. (2009). To gain insight into these focus areas, individual interviews were held from two perspectives: (1) an educational perspective, to gain insight into implementation strategies for uptake of the training and e-learning training in existing educational structures, and (2) a professional perspective [ – ]. For the educational perspective, representatives from different educational institutions involved in medical educational training for OPs and IPs were interviewed (N = 5), including resident trainers from the specialized institutions (Netherlands School of Public & Occupational Health and the social medicine education department of the Radboud university medical center) and the Dutch Social Security Agency. The educational experts were not previously involved in providing the training or e-learning training. At the start of the interview, they received a short introduction with the learning goals per training and e-learning. With regard to the e-learning training, the educational experts were given access to the entire e-learning prior to the interviews. The interview questions for the educational perspective were the same for all three projects, as the goal was to explore their opinions in general. For the professional perspective, participants from the previous evaluation studies of the three projects were included: project (1) N = 7; project (2) N = 11; project (3) N = 6. The participants were included from the sample of participants that were involved in one of the projects. This means that each interviewee had previously participated in one of the projects and had thus received only one of the developed trainings or the e-learning training. For the professional perspective, interview questions were adapted per project to fit the goals and set-up of the specific training program or e-learning training. All participants were included by means of purposive sampling from either the network of the research program or the participant list from the prior evaluation studies of the three projects. All professionals that participated in the previously conducted pilot evaluations were invited. From project (1) 70% (N = 7 from N = 10), from project (2) 19% (N = 11 from N = 57) and from project (3) 9% (N = 6 from N = 62) of the invited professionals agreed to participate in this follow-up study. All participants were invited to participate via e-mail and gave written informed consent upon participation. The interviews were conducted by a minimum of one author, audio-recorded and transcribed in Dutch. The interviews for project (1) were conducted in 2019 and 2020 (by AB) as phone or video-call interviews, for project (2) in 2020 (by NZ and MdW) as phone interviews, and for project (3) in 2021 (by NZ and NS) as online interviews by video-call. All interviews from the professional perspective lasted approximately 30 min.
The interviews from the educational perspective were conducted in 2020 by NZ and SvdB-B and lasted approximately one hour. All researchers are experienced in conducting qualitative research. Authors NZ, MdW, AB and NS are not occupational health professionals. The other authors are experts from the field of occupational health. For the interviews from the professional perspective, authors AB and MdW had earlier contact with the participants in the context of the evaluation studies. Authors NZ and NS did not have an established relationship with participants prior to the interviews.
Data analysis
The interviews were initially transcribed verbatim and analyzed for both perspectives (educational and professional) per training program and e-learning training with accompanying person-centered tool. Each perspective was analyzed for the three focus areas separately, as recommended by Bowen et al., who recommend several sample outcomes of interest for the three focus areas. For implementation, the following outcome was included in this study: factors affecting implementation ease or difficulty. For practicality, the goal was to gain insight into the following outcomes: positive/negative effects on target participants, ability of participants to carry out intervention and/or educational activities, and cost analysis. For integration, the following outcomes were included: perceived fit with (practice and/or educational) infrastructure, perceived sustainability, and costs to organization and policy bodies. For the data analysis, the following steps were followed: organizing the data, reading and memoing to become familiarized with the data, and forming codes into feasibility factors organized into the three pre-chosen outcomes. In the first step of the analysis, open coding based on content analysis was applied to identify feasibility factors, which was followed by deductive coding with thematic analysis based on the focus areas and pre-determined outcomes. After the analysis per perspective for the three projects separately, an additional cross-project analysis was conducted to identify feasibility factors per category that appeared across the three projects and those that were unique to individual projects. The goal of this analysis was to find overarching feasibility factors to facilitate implementation of the training programs and e-learning training into educational structures and practice. All semi-structured interviews were analyzed by two independent researchers (NZ and a research assistant for project (1); NZ and MdW for project (2)), except for the professional perspective of the third project, which was analyzed by one researcher (NZ) and checked by a second researcher (NS). Additionally, a third researcher (SvdB-V) checked all codes. Analyses of the interviews were performed in MAXQDA 2020.
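The cross-project step described above is, in essence, a set comparison per focus area: a factor counts as overarching only if it was coded in all three projects, and as project-specific otherwise. The following minimal sketch in Python illustrates that comparison logic only; the factor labels and the data structure are hypothetical, and this is not the authors' actual MAXQDA workflow.

from collections import defaultdict

# Hypothetical coded feasibility factors, keyed by (focus area, project number).
coded_factors = {
    ("implementation", 1): {"online version available", "use of actual cases"},
    ("implementation", 2): {"online version available", "use of actual cases"},
    ("implementation", 3): {"online version available", "periodic refresher"},
}

# Group the coded factors by focus area.
by_area = defaultdict(dict)
for (area, project), factors in coded_factors.items():
    by_area[area][project] = factors

for area, per_project in by_area.items():
    sets = list(per_project.values())
    # Overarching (cross-project) factors appear in every project.
    cross_project = set.intersection(*sets)
    # Project-specific factors appear in at least one but not all projects.
    project_specific = {
        factor: sorted(p for p, fs in per_project.items() if factor in fs)
        for factor in set.union(*sets) - cross_project
    }
    print(area, "| cross-project:", sorted(cross_project),
          "| project-specific:", project_specific)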
The results are presented per perspective for (1) the educational perspective concerning the feasibility of embedding the developed trainings and e-learning training in existing educational structures, and (2) the professional perspective for the feasibility of using and applying the knowledge and tools in practice.
For readability, only the cross-project feasibility factors are presented (Table ). Where no cross-project feasibility factors were found, only the most important results are presented in the results section below. The detailed results per tool from both perspectives can be found in Additional file .
1) Feasibility factors from an educational perspective
For the educational perspective, five interviews were held with trainers from educational institutes (Table ). Two women and three men with a mean age of 54.4 years participated. All participants had insight into all available material of the training programs and e-learning training and received a description from the researchers (NZ and SvdB-V). For the analysis of the Bowen et al. outcomes, the cross-project analysis yielded several feasibility factors, which are presented below (Table ). However, these presented factors are not exhaustive, and the detailed results per training program and e-learning training with accompanying tools can be found in Additional file .
Implementation
Different ‘factors affecting implementation ease or difficulty’ concerning the training and e-learning training, the organization of the education, dissemination and personal factors were identified (Table ). With regard to the training programs, having an online version available was mentioned by the educational experts as a way to support implementation across all three projects. Specific factors related to the e-learning training included a ‘check if the e-learning training was completed’ and ‘the combination of educational forms towards blended learning’ (Additional file ), as one trainer mentioned: P12: “What we ultimately want to achieve is a form of blended learning in which physical and online education and e-learnings are all integrated into a complete package. And the great thing about this is, that they [students] can do a lot on their own, in their own time.” The ‘check if the e-learning training was completed’ was mentioned by some participants as important. As this is already part of the current e-learning training, they felt this should stay in place as is. Moreover, it was mentioned that ‘sufficient interaction’ between participants during the trainings needs to be ensured for successful implementation (project 2) (Additional file ). For the organization of the trainings, educational experts mentioned that good ‘coordination with the educational managers of the educational institutions’ is needed to ensure better implementation into educational structures (projects 1 and 2). For both face-to-face training programs (projects 1 and 2), ‘a train-the-trainer approach’ was indicated as a factor to support better implementation into educational structures, as well as to ‘make arrangements regarding the ownership of the training and e-learning’: P16: “[…] on my practical experience, for example, […] an organization takes ownership and then it [the training] comes behind a pay-roll.” Across the three projects no overarching feasibility factor related to dissemination was found. A specific factor mentioned to enhance dissemination of the e-learning training was ‘the use of role models or frontrunners’ (Additional file ): P14: “Yes, my tip is […], implementations become successful because you have someone who is going to promote the product and actually implements it and just does it. Someone that sells it. That’s what it really comes down to.”
With regard to personal factors of educational experts that may hinder implementation, no feasibility factor was found across all projects. However, it was specified that it is important to be aware that educational experts may be reluctant when it comes to incorporating new training materials from a third party (i.e., researchers) in the curriculum, which underlines the need to create good support from within the educational institutions (project 3) (Additional file ).
Practicality
Across the projects no common feasibility factors were found for the outcomes ‘positive/negative effects on target participants’ and ‘ability of participants to carry out intervention activities’. For project (1), it was indicated that, with respect to the practicality outcome ‘positive/negative effects on target participants’, the added value of the training for OPs needs to be clearly explained to enhance external and internal motivation to follow the training (Additional file ). As to the ‘ability of participants to carry out educational activities’, some participants from project (2) mentioned the importance of matching the educational content with the level of pre-existing knowledge and skills of participants when offering the trainings to registered OPs and IPs or to resident doctors in training (Additional file ). Furthermore, some participants from project (2) stressed the ‘difficulty to translate knowledge and skills into own practice’, which might hinder the uptake of the trainings into the practice of OPs and IPs: P13: “What we notice in the training groups is that at least some of the participants say at the end of the day: ‘it was very useful, but I don’t see myself doing it [applying the knowledge in practice] yet’. And, therefore, they have a difficulty in translating it into practice, into their own practice.” With respect to the costs of offering the trainings and e-learning training, some participants mentioned the following important factors to take into account: ‘costs for use of training facility e.g. rental costs’ (projects 1 and 2) and ‘costs for accreditation’ of the training programs and e-learning training (projects 1 and 3) (Table ).
Integration
For the integration of both training programs and the e-learning training, the ‘perceived fit with educational infrastructure’ was evaluated (Table ). For projects (1) and (3), in terms of suitability within educational structures, participants mentioned that the training and e-learning training were ‘not suitable for the core curriculum of postgraduate medical training for OPs and IPs’ even though the ‘added-value of the training and e-learning training is evident’. Remarks were made regarding integration into the current curriculum of the postgraduate medical training for OPs and IPs, namely that there is ‘no unlimited place to embed new trainings in the current curriculum’ (projects 2 and 3). Especially with respect to the training on strengthening self-control of workers with chronic health conditions (project 1), participants stressed that it can best be integrated towards the end of postgraduate medical training for OPs and IPs due to the level of required pre-existing knowledge and skills, and they indicated that it predominantly fits the profession of OPs rather than IPs, as the training is targeted at OPs (Additional file ).
For project (3), in terms of the outcome ‘perceived sustainability’, the ‘continuity after the research project ends’ was mentioned, stressing the importance of continuity beyond the experimental setting in which the training programs were tested.
2) Feasibility factors from the professional perspective on embedding the trainings and tools in educational structures and practice of OPs and IPs
For the professional perspective, a total of N = 24 semi-structured interviews were conducted. Participants were N = 18 OPs and N = 6 IPs who participated in the previous evaluation studies [ – ] (see Table ). In total, N = 13 males and N = 11 females participated. The mean age of participants for project (1) is unknown. The mean age of participants was 48.5 years for project (2) and 52.3 years for project (3). Years of experience in the current work function was unknown for project (1). The professionals gave input on their practical experiences after attending the trainings or following the e-learning training, but also gave input regarding the possibilities for the implementation of the trainings and e-learning training in educational structures from the perspective of a potential receiver of the training and e-learning training. For the analysis across the three projects from a professional perspective, common feasibility factors were only found for the feasibility aspect of implementation, which entails the Bowen et al. outcome ‘factors affecting implementation ease or difficulty’ (Table ). For practicality and integration, project-specific feasibility factors were found (Additional file ).
Implementation
In terms of implementation, the outcome ‘factors affecting implementation ease or difficulty’ was evaluated. Based on the semi-structured interviews, the following factors were identified by professionals: personal factors; factors related to the training programs, e-learning training and tools; factors related to the organization of the training or e-learning training; and factors related to dissemination. Factors concerning the training included the ‘use of actual cases from practice’ (projects 1 and 2) and the ‘need for a periodic reminder or refresher about the topic’ (projects 1 and 3). Specifically for project (1) on strengthening self-control, participants mentioned the necessity of ‘matching the training content with the needs within organizations’ to apply the Participatory Approach, which targets organizations where OPs are involved in policy setting regarding support of workers with a chronic health condition (Additional file ). Therefore, OPs need to be involved more in policy setting within organizations. With regard to the organization of the training and e-learning training, no common feasibility factors were found across all three projects. For project (1) it was mentioned to ‘involve the researcher of the project’ when offering the training program. The ‘use of a desk manual or summary as a handy memory aid’ was mentioned for the use of the tools with regard to projects (2) and (3). In terms of dissemination factors for projects (1) and (3), the suggestion was made to ‘embed the knowledge into guidelines’ (Table ).
In terms of the personal factors, no across-project factors were found, but project-specific factors important to consider included, for example, ‘sufficient time’ during the consultation to apply the acquired knowledge and skills (project 2) (Additional file ): P3: “Well, you always have to take the time yourself as an OP [during consultation]. […] I can take that [time] by doing longer consultation hours, that’s not the problem.” (project 2) For project (1), specific prerequisites for implementation were mentioned, including organizational support, creating a sense of urgency, and creating recognition of importance for the target group. Concerning impeding factors for the implementation of project (1), the size and structure of the organization in which the tool is to be applied are essential, with higher chances of successful implementation in organizations with sufficient resources to invest in workplace improvement. With respect to suitability, it was also mentioned that project (1) can be implemented more easily among self-employed OPs, as they have more freedom to make changes to their way of working (Additional file ).
Practicality
For the outcomes on practicality, concerning the practical uptake of the developed training programs, e-learning training and accompanying tools into practice, the ‘ability of participants to carry out intervention activities’ (e.g. use of the conversation tool and supporting material), ‘positive/negative effects on target participants’ and ‘cost analysis’, based on the outcomes suggested by Bowen et al., were evaluated from the professional perspective. As to the Bowen outcome ‘positive/negative effects on target participants’, feasibility factors were only found for project (3) and included the ‘added value for participants’ of the skills they acquire during the e-learning training (Additional file ). No across-project feasibility factors were found for the outcome ‘ability of participants to carry out intervention activities’. As to project (2), about involving person-related factors, participants mentioned ‘not knowing it [the list of cognitions and perceptions] by heart after the training’ as a restraint on applying the knowledge in practice for the Bowen outcome ‘ability of participants to carry out intervention activities’ (Additional file ): P4: “You know, the moment that you are in a consultation hour, you will no longer have all those sample questions in front of you. So, you have to do it a bit by heart. Apparently, the material has not yet sunk in so that I know it all by heart, so to speak.” (project 2). For project (3), the participants stressed that the theoretical knowledge increased their awareness in practice, as it increased their sense of the importance of involving significant others during the consultation with a worker with a chronic health condition (Additional file ). Regarding the practical application of the tool developed in project (1), the participants mentioned the need for ‘more support for unexperienced OPs’. Also, no common feasibility factor was found for the Bowen outcome ‘cost analysis’. As to project (1), in terms of the ‘cost analysis’, one participant expressed feelings of ‘uncertainty about cost-effectiveness of the tool’: P20: “Well it also didn’t work on a small-scale, but maybe on a large-scale it would have succeeded, because then the time investment, so the total investment is the same, but perhaps much more profitable for an organization.” (project 1).
Integration
Professionals reported on the outcome ‘perceived fit with infrastructure’ for each of the three projects, but no common feasibility factor was found. However, project-specific factors were mentioned (Additional file ). For project (1), a category on integration at an organizational level was identified, which included the following feasibility factors: ‘include in the organization’s annual plan’ and the ‘degree of professional flexibility of the OP’, which may contribute to better integration of the training program. Only for project (1) was the outcome ‘perceived sustainability’ mentioned, in the sense that continuity in applying the knowledge in practice after following the training cannot be guaranteed, even though this is needed for the success of the training.
Integration Professionals reported on the outcome ‘perceived fit with infrastructure’ for each of the three projects, but no common feasibility factor was found. However, project-specific factors were mentioned (Additional file ). For project (1) a category on the integration at an organizational level was identified and included the following feasibility factors: ‘include in the organization’s annual plan’ and the ‘degree of professional flexibility of the OP’ which may contribute to better integration of the training program. Only for project (1) the outcome ‘perceived sustainability’ was mentioned as to no continuity of implementing the knowledge in practice after following the training can be guaranteed which is needed for the success of the training. For the educational perspective, five interviews were held with trainers from educational institutes (Table ). Two females and three males with a mean age of 54.4 years of age participated. All participants had insight into all available material of the training programs and e-learning training and received a description by the researchers (NZ and SvdB-V). For the analysis of the Bowen et al. outcomes, the cross-project analysis yielded several feasibility factors which are presented below (Table ). However, these presented factors are not exhaustive and the detailed results per training program and e-learning training with accompanying tools can be found in Additional file . Implementation Different ‘factors affecting implementation ease or difficulty’ concerning the training and e-learning training, organization of the education, dissemination and personal factors were identified (Table ). With regard to the training programs, to have an online version available was mentioned by the educational experts as a way to support implementation across all three projects. Specific factors related to the e-learning training included a ‘check if the e-learning training was completed’ and ‘the combination of educational forms towards blended learning’ (Additional file ), as one trainer mentioned: P12: “What we ultimately want to achieve is a form of blended learning in which physical and online education and e-learnings are all integrated into a complete package. And the great thing about this is, that they [students] can do a lot on their own, in their own time.” The ‘check if the e-learning training was completed’ was mentioned by some participants as important. As this is already part of the current e-learning training, they felt this should stay in place as is. Moreover, it was mentioned that ‘sufficient interaction’ between participants during the trainings needs to be ensured for successful implementation (project 2) (Additional file ). For the organization of the trainings, educational experts mentioned that good ‘coordination with the educational managers of the educational institutions’ is needed to ensure better implementation into educational structures (as to project 1 and 2). For both face-to-face training programs (project 1 and 2), ‘a train-the-trainer approach’ was indicated as a factor to support better implementation into educational structures, as well as to ‘make arrangements regarding the ownership of the training and e-learning’: P16: “[…] on my practical experience, for example, […] an organization takes ownership and then it [the training] comes behind a pay-roll.” Across the three projects no overarching feasibility factor related to the dissemination was found. 
A specific factor mentioned to enhance better dissemination of the e-learning training was ‘the use of role models or frontrunners’(Additional file ): P14: “Yes, my tip is […], implementations become successful because you have someone who is going to promote the product and actually implements it and just does it. Someone that sells it. That’s what it really comes down to.” With regard to personal factors of educational experts that may hinder implementation no feasibility factor was found across all projects. However, it was specified that it is important to be aware that educational experts may be reluctant when it comes to incorporating new training materials from a third party (i.e., researchers) in the curriculum, which underlines the need to create good support from within the educational institutions (project 3) (Additional file ). Practicality Across the projects no common feasibility factors were found for the outcome ‘positive/negative effects on target participants’ and ‘ability of participants to carry out intervention activities’. For project (1), it was indicated that with respect to the practicality outcome ‘positive/negative effects on target participants’, the added-value of the training for OPs needs to be clearly explained to enhance external and internal motivation to follow the training (Additional file ). As to the ‘ability of participants to carry out educational activities’, some participants from project (2) mentioned the importance to match the educational content with the level of pre-existing knowledge and skills of participants as to offering the trainings to registered OPs and IPs or to resident doctors in training (Additional file ). Furthermore, some participants from project (2) stressed the ‘difficulty to translate knowledge and skills into own practice’ which might hinder the practical uptake of the trainings into practice of the OPs and IPs: P13: “What we notice in the training groups is that at least some of the participants say at the end of the day: ‘it was very useful, but I don’t see myself doing it [applying the knowledge in practice] yet’. And, therefore, they have a difficulty in translating it into practice, into their own practice.” With respect to the costs of offering the trainings and e-learning training, some participants mentioned the following important factors to take into account: ‘costs for use of training facility e.g. rental costs’ (project 1 and 2) and ‘costs for accreditation’ of the training programs and e-learning training (project 1 and 3) (Table ). Integration For the integration of both training programs and the e-learning training, the ‘perceived fit with educational infrastructure’ was evaluated (Table ). For project (1) and (3) in terms of suitability within educational structures, participants mentioned that the training and e-learning training was ‘not suitable for the core curriculum of postgraduate medical training for OPs and IPs’ even though the ‘added-value of the training and e-learning training is evident’. Remarks were made regarding the integration in the current curriculum of the postgraduate medical training for OPs and IPs as to that there is ‘no unlimited place to embed new trainings in the current curriculum’ (project 2 and 3). 
Especially with respect to the training on strengthening self-control of workers with chronic health conditions (project 1), participants stressed that it can best be integrated towards the end of postgraduate medical training for OPs and IPs due the level of required pre-existing knowledge and skills, and they indicated that it predominantly fits the profession of OPs instead of IPs as the training is targeted at OPs (Additional file ). For project (3), in terms of the outcome ‘perceived sustainability’, the ‘continuity after the research project ends’ was mentioned stressing the importance of continuity after an experimental setting of testing training programs. Different ‘factors affecting implementation ease or difficulty’ concerning the training and e-learning training, organization of the education, dissemination and personal factors were identified (Table ). With regard to the training programs, to have an online version available was mentioned by the educational experts as a way to support implementation across all three projects. Specific factors related to the e-learning training included a ‘check if the e-learning training was completed’ and ‘the combination of educational forms towards blended learning’ (Additional file ), as one trainer mentioned: P12: “What we ultimately want to achieve is a form of blended learning in which physical and online education and e-learnings are all integrated into a complete package. And the great thing about this is, that they [students] can do a lot on their own, in their own time.” The ‘check if the e-learning training was completed’ was mentioned by some participants as important. As this is already part of the current e-learning training, they felt this should stay in place as is. Moreover, it was mentioned that ‘sufficient interaction’ between participants during the trainings needs to be ensured for successful implementation (project 2) (Additional file ). For the organization of the trainings, educational experts mentioned that good ‘coordination with the educational managers of the educational institutions’ is needed to ensure better implementation into educational structures (as to project 1 and 2). For both face-to-face training programs (project 1 and 2), ‘a train-the-trainer approach’ was indicated as a factor to support better implementation into educational structures, as well as to ‘make arrangements regarding the ownership of the training and e-learning’: P16: “[…] on my practical experience, for example, […] an organization takes ownership and then it [the training] comes behind a pay-roll.” Across the three projects no overarching feasibility factor related to the dissemination was found. A specific factor mentioned to enhance better dissemination of the e-learning training was ‘the use of role models or frontrunners’(Additional file ): P14: “Yes, my tip is […], implementations become successful because you have someone who is going to promote the product and actually implements it and just does it. Someone that sells it. That’s what it really comes down to.” With regard to personal factors of educational experts that may hinder implementation no feasibility factor was found across all projects. However, it was specified that it is important to be aware that educational experts may be reluctant when it comes to incorporating new training materials from a third party (i.e., researchers) in the curriculum, which underlines the need to create good support from within the educational institutions (project 3) (Additional file ). 
Across the projects no common feasibility factors were found for the outcome ‘positive/negative effects on target participants’ and ‘ability of participants to carry out intervention activities’. For project (1), it was indicated that with respect to the practicality outcome ‘positive/negative effects on target participants’, the added-value of the training for OPs needs to be clearly explained to enhance external and internal motivation to follow the training (Additional file ). As to the ‘ability of participants to carry out educational activities’, some participants from project (2) mentioned the importance to match the educational content with the level of pre-existing knowledge and skills of participants as to offering the trainings to registered OPs and IPs or to resident doctors in training (Additional file ). Furthermore, some participants from project (2) stressed the ‘difficulty to translate knowledge and skills into own practice’ which might hinder the practical uptake of the trainings into practice of the OPs and IPs: P13: “What we notice in the training groups is that at least some of the participants say at the end of the day: ‘it was very useful, but I don’t see myself doing it [applying the knowledge in practice] yet’. And, therefore, they have a difficulty in translating it into practice, into their own practice.” With respect to the costs of offering the trainings and e-learning training, some participants mentioned the following important factors to take into account: ‘costs for use of training facility e.g. rental costs’ (project 1 and 2) and ‘costs for accreditation’ of the training programs and e-learning training (project 1 and 3) (Table ). For the integration of both training programs and the e-learning training, the ‘perceived fit with educational infrastructure’ was evaluated (Table ). For project (1) and (3) in terms of suitability within educational structures, participants mentioned that the training and e-learning training was ‘not suitable for the core curriculum of postgraduate medical training for OPs and IPs’ even though the ‘added-value of the training and e-learning training is evident’. Remarks were made regarding the integration in the current curriculum of the postgraduate medical training for OPs and IPs as to that there is ‘no unlimited place to embed new trainings in the current curriculum’ (project 2 and 3). Especially with respect to the training on strengthening self-control of workers with chronic health conditions (project 1), participants stressed that it can best be integrated towards the end of postgraduate medical training for OPs and IPs due the level of required pre-existing knowledge and skills, and they indicated that it predominantly fits the profession of OPs instead of IPs as the training is targeted at OPs (Additional file ). For project (3), in terms of the outcome ‘perceived sustainability’, the ‘continuity after the research project ends’ was mentioned stressing the importance of continuity after an experimental setting of testing training programs. For the professional perspective, a total of N = 24 semi-structured interviews were conducted. Participants were N = 18 OPs and N = 6 IPs who participated in the previous evaluation studies [ – ] (see Table ). In total N = 13 males participated and N = 11 females. The mean age of participants for project (1) are unknown. The mean age of participants of project (2) was 48.5 years of age and for project (3) 52.3 years of age. 
Years of experience in current work function was unknown for project (1). The professionals gave input on their practical experiences after attending the trainings or following the e-learning training, but also gave input regarding the possibilities for the implementation of the trainings and e-learning training in educational structures from the perspective of a potential receiver of the training and e-learning training. For the analysis across the three projects from a professional perspective, common feasibility factors were only found for the feasibility aspect of implementation which entails the Bowen et al. outcome ‘factors affecting implementation ease or difficulty’ (Table ). For the practicality and integration project-specific feasibility factors were found (Additional file ). Implementation In terms of implementation, the outcome ‘factors affecting implementation ease or difficulty’ was evaluated. Based on the semi-structured interviews, the following factors were identified by professionals: personal factors;factors related to training programs and e-learning training and the tools; factors related to the organization of the training or e-learning training, and factors related to the dissemination were identified by the professionals. Factors concerning the training included the ‘use of actual cases from practice’ (project 1 and 2) and the ‘need for a periodic reminder or refresher about the topic’ (project 1 and 3). Specifically for project (1) on strengthening self-control, participants mentioned the necessity for ‘matching the training content with the needs within organizations’ to apply the Participatory Approach, targets organizations where OPs are involved in policy setting regarding support of workers with a chronic health condition (Additional file ). Therefore, OPs need to be involved more in policy setting within organizations. With regard to the organization of the training and e-learning training no common feasibility factors were found across all three projects. For project (1) it was mentioned to ‘involve the researcher of the project’ when offering the training program. The ‘use of a desk manual or summary as a handy memory aid’ was mentioned for the use of the tools with regarding project (2) and 3). In terms of dissemination factors for project (1) and (3), the suggestion was made to ‘embed the knowledge into guidelines’ (Table ). In terms of the personal factors, no across-project factors were found, but project-specific factors important to consider included, for example, ‘sufficient time’ during the consultation to apply the acquired knowledge and skills (project 2) (Additional file ): P3: “Well, you always have to take the time yourself as an OP [during consultation].[…] I can take that [time] by doing longer consultation hours, that’s not the problem. (project 2) For project (1) specific prerequisites for the implementation were mentioned including: organizational support, creating a sense of urgency, and creating recognition of importance for the target group. Concerning impeding factors for the implementation of project (1), the influence of the size and structure of the organization where the tool shall be applied is essential with higher chances for successful implementation in organizations with sufficient resources to invest in workplace improvement. 
With respect to suitability, it was also mentioned that project (1) can be implemented more easily among self-employed OPs, as they have more freedom to make changes to their way of working (Additional file ).
Practicality
For the outcomes on practicality, concerning the practical uptake of the developed training programs, e-learning training and accompanying tools into practice, the ‘ability of participants to carry out intervention activities’ (e.g. use of the conversation tool and supporting material), ‘positive/negative effects on target participants’ and ‘cost analysis’, based on the outcomes suggested by Bowen et al., were evaluated from the professional perspective. As to the Bowen outcome ‘positive/negative effects on target participants’, feasibility factors were only found for project (3) and included the ‘added value for participants’ of the skills they acquire during the e-learning training (Additional file ). No across-project feasibility factors were found for the outcome ‘ability of participants to carry out intervention activities’. As to project (2) about involving person-related factors, participants mentioned ‘not knowing it [the list of cognitions and perceptions] by heart after the training’ as a restraint on applying the knowledge in practice for the Bowen outcome ‘ability of participants to carry out intervention activities’ (Additional file ): P4: “You know, the moment that you are in a consultation hour, you will no longer have all those sample questions in front of you. So, you have to do it a bit by heart. Apparently, the material has not yet sunk in so that I know it all by heart, so to speak.” (project 2) For project (3), the participants stressed that the theoretical knowledge increased their awareness in practice, as it increased their sense of the importance of involving significant others during the consultation with a worker with a chronic health condition (Additional file ). Regarding the practical application of the tool developed in project (1), the participants mentioned the need for ‘more support for inexperienced OPs’. No common feasibility factor was found for the Bowen outcome ‘cost analysis’ either. As to project (1), in terms of the ‘cost analysis’, one participant expressed feelings of ‘uncertainty about cost-effectiveness of the tool’: P20: “Well it also didn’t work on a small-scale, but maybe on a large-scale it would have succeeded, because then the time investment, so the total investment is the same, but perhaps much more profitable for an organization.” (project 1)
Integration
Professionals reported on the outcome ‘perceived fit with infrastructure’ for each of the three projects, but no common feasibility factor was found. However, project-specific factors were mentioned (Additional file ). For project (1), a category on integration at an organizational level was identified, which included the following feasibility factors: ‘include in the organization’s annual plan’ and the ‘degree of professional flexibility of the OP’, which may contribute to better integration of the training program. Only for project (1) was the outcome ‘perceived sustainability’ mentioned: no continuity of implementing the knowledge in practice after following the training can be guaranteed, although such continuity is needed for the success of the training.
The aim of this study was to investigate the feasibility, in terms of factors concerning the implementation, practicality and integration, of trainings and e-learning training with accompanying tools for improved guidance and support of workers with a chronic health condition. The results show that the training programs, e-learning training and tools were seen as feasible from both an educational and a professional perspective, provided the identified factors are taken into account. To contribute to successful implementation in educational structures and to embed the developed training programs, e-learning training and tools into the practice of occupational health care, participants across the three projects mentioned several factors, including adaptation of the face-to-face trainings to online versions, train-the-trainer approaches to facilitate correct delivery of the face-to-face trainings, costs concerning the implementation of the trainings and e-learning training, the use of actual cases from practice during the trainings and e-learning training, and follow-up trainings in the form of blended learning.
By involving the researchers and the actual users of the developed training programs and accompanying tools at an early stage of implementation, an optimal fit with practice can be safeguarded. The importance of conducting a feasibility study has been recognized earlier [ , – ]. By evaluating three major focus areas for assessing feasibility, our study aimed to provide insight into the factors concerning the implementation, practicality, and integration of person-centered and organization-centered tools. Feasibility studies are essential especially in occupational health care, where no single intervention on its own may have an impact on workers. While previous studies on the training programs and e-learning training with accompanying tools also evaluated aspects of acquired knowledge and skills, and satisfaction [ – ], the current evaluation focused specifically on broader educational and practical aspects of the implementation. Another previous study investigated the determinants of implementing person-centered tools from the users’ perspective, focusing on the suitability of the tools for the target group. That study identified that taking into account the individual needs and wishes of workers would support successful implementation of the tools in practice during consultation with an OP or IP. However, it was conducted during the early stages of the development of the tools to support co-creation with the end-users and did not investigate practical aspects of delivering the training programs and e-learning training. The feasibility of a training in occupational health care has been evaluated previously with the aim of enhancing guideline use. Similar to the current study, that evaluation identified time, organizational constraints and financial aspects as barriers to implementation. To transfer knowledge and skills for guideline use, training has been found to be an effective method to facilitate uptake. Our study also showed that offering training is a suitable approach, particularly when combined with online education (i.e., blended learning). Blended learning is the combination of face-to-face education and technology-mediated instruction. The current study found that technology-mediated instruction alone, such as the e-learning training of project (3), can be helpful for learning the theory, but translating the skills into practice would require accompanying face-to-face education. Nevertheless, e-learning has advantages over face-to-face training, for example, being able to follow the education at the student’s own pace. The argument for adding more actual cases from practice to the trainings and e-learning training, as reported by participants in our study, is also supported by previous studies, as it improves the integration of new skills with professionals’ current knowledge. Moreover, a train-the-trainer model, as mentioned by participants in the current study, could potentially enhance correct delivery of the training programs. A train-the-trainer model for the studied training programs and e-learning training could include training sessions from the involved researchers offered to potential trainers in practice. Train-the-trainer models may have the potential to contribute to more sustainable programs due to the involvement of trainers in early stages of a program. Future research should establish a suitable model for training potential trainers on the training programs and e-learning training.
To integrate the training programs and e-learning training into existing educational structures and to promote the practical use of the acquired knowledge and accompanying tools by OPs and IPs, these factors should be addressed for more successful uptake.
Limitations
For all the respective projects, the goal was to investigate the factors concerning the three focus areas (implementation, practicality, and integration) of Bowen et al. from both the educational and the professional perspective. However, the interviews for project (1) used a different interview approach for the professional perspective for reasons of convenience. In the context of that project, interviews were held with the goal of exploring barriers and facilitators for implementation of the tool and identifying possible points of improvement for the training, but without the specific goal of asking about implementation, practicality, and integration factors as in the other projects. This might have led to project-specific factors going unrecognized, as the questions were not as specific during the interviews. Moreover, in the second step of the analysis, a cross-project analysis was conducted to find overarching feasibility factors across the three projects. The interviews for the educational perspective were all held based on the same interview guide, which might explain why feasibility factors were found for all three focus areas (implementation, practicality, and integration), whereas from the professional perspective only factors on implementation were found. The different structure of the interviews and the diverse character of the training programs and e-learning training may explain why mainly project-specific factors rather than overarching cross-project factors were found. Therefore, the results from the professional perspective need to be considered per training program and e-learning training. However, the goal of the cross-project analysis was not to stress the importance of certain feasibility factors, but to illustrate commonalities and differences between the three projects. Furthermore, the factors concerning the educational structure in the Netherlands may not be generalizable to other contexts, as the Dutch educational structure for postgraduate medical training to become an occupational health care professional is quite unique. Yet the factors identified in this study may not be limited to one particular field of medical education; more general factors, such as costs, may apply in a wider context. Moreover, the selection of participants might have impacted the results: participation was voluntary, so the picture portrayed may be that of the most motivated trainers and occupational health professionals, which may have led to more favorable and positive results.
Recommendations for future research and practice
The current study provides insight into possible factors affecting the implementation, practicality, and integration of the training programs, e-learning training and accompanying tools in educational structures and occupational health care practice. However, due to the explorative qualitative design of this study, specific goal-setting may be challenging and not all factors can feasibly be tackled in future implementation. Therefore, future prioritization of the most important factors would help in formulating tailored, workable implementation strategies.
Furthermore, to evaluate the effect of the training and e-learning training on certain outcome measures, such as improved person-centered care or improved participation, further studies should be conducted on a larger scale. The current feasibility study provides a basis for future larger studies aimed at improving person-centered occupational health care. For practical uptake into existing educational structures, the current trainings and e-learning training need to be adapted taking into account the identified factors. Specifically, adaptations focusing on factors such as offering online versions of the training programs and offering a train-the-trainer course might be promising for broader reach and to support provision of the training programs and e-learning training as intended. To safeguard use of the acquired knowledge in practice, the training programs and e-learning training need to be structurally integrated into existing educational structures. Our study found that it is challenging to fit new trainings into the existing curriculum of OPs and IPs, but given the added value of the acquired skills for the practice of OPs and IPs, embedding these trainings as elective or refresher courses might be a suitable approach for their integration. For sustainable implementation and to attract professionals to follow the training, we recommend using case-based learning with more actual cases from the professionals’ own practice and incorporating periodic reminders, for example by e-mail, to stimulate application of the acquired skills in practice. Currently, follow-up studies with pilot implementations in occupational health practice are being set up.
In this study, the feasibility of the developed trainings, e-learning training and accompanying tools was evaluated, and they were perceived as feasible in terms of implementation, practicality, and integration.
In addition, possible barriers to the implementation and practical use were identified. All three tools were perceived as valuable with the adaptations proposed in the current study. Future larger-scale implementation may be enhanced by addressing the identified factors. Below is the link to the electronic supplementary material. Additional file 1. Example questions from the semi-structured interview guides per perspective and tool. Additional file 2. Results of the interviews from an educational perspective and a professional perspective per developed tool.
Editorial: Postoperative management of Crohn's disease: One size does not fit all
03665bc1-8316-4337-8398-847974a2458e
10083459
Internal Medicine[mh]
The corresponding author confirms on behalf of all authors that there have been no involvements that might raise the question of bias in the work reported or in the conclusions, implications, or opinions stated. Eugeni Domènech has served as a speaker or has received research or educational funding or advisory fees from AbbVie, Adacyte Therapeutics, Biogen, Celltrion, Galapagos, Gilead, Janssen, Kern Pharma, MSD, Pfizer, Roche, Samsung, Takeda, Tillots; Míriam Mañosa has served as a speaker and has received research or educational funding from MSD, AbbVie, Takeda, Janssen, Faes Farma, Ferring and Pfizer; Margalida Calafat has served as a speaker for Takeda, Janssen, Faes Farma, and MSD.
Factors affecting physicians’ attitudes towards patient-centred care: a cross-sectional survey in Beijing
950fbf17-4114-4037-9fa5-755d10f0d5b6
10083761
Patient-Centered Care[mh]
Background As a healthcare approach that prioritises the needs, preferences and experiences of patients in the treatment and management of their health conditions, patient-centred care has been proposed as one of the key factors needed for better quality healthcare services. Patient-centred care aims to empower patients to take an active role in their care and decision-making by promoting effective communication and collaborative decision-making between patients and healthcare providers. Compared with the paternalistic healthcare approach, patient-centred care can respond to individual patient preferences and ensure patients’ involvement in medical decision-making. Patient-centred care has been shown to improve the physician–patient relationship, reduce patient complaints and improve healthcare outcomes. Researchers have also suggested that physicians’ patient-centred communication skills need to be honed if patients are to play more of a role in medical decision-making. To achieve patient-centred care, clinicians need to have specific skills and competencies that enable them to provide care that is patient-centred, including effective communication skills, shared decision-making skills, cultural competence, empathy and compassion, and patient education skill. Developing these skills can help to improve patient outcomes, increase patient satisfaction and enhance the quality of care. The role of patients in medical decision-making is a topic of ongoing debate, particularly in China where doctors face unique challenges. Allowing patients to have more say in their treatment plans can empower them to take greater responsibility for their health and well-being. This can lead to better adherence to treatment plans and improved health outcomes. In addition, when patients are more involved in the decision-making process, they may have a better understanding of the risks and benefits of different treatment options, which can lead to more informed decision-making. Furthermore, giving patients more control over their medical care can help to build trust between patients and healthcare providers, which can lead to better communication and improved health outcomes. However, patients may not have the necessary medical knowledge to fully understand the risks and benefits of different treatment options, which could lead to poor decision-making. Moreover, doctors in China are often under significant time constraints and may not have the time to fully involve patients in the decision-making process. Additionally, in China, patients may have different expectations for their healthcare providers, which could make it difficult to involve them in the decision-making process. While there are both benefits and challenges to involving patients more in medical decision-making, it is ultimately up to individual healthcare providers and patients to decide what approach is most appropriate for their specific circumstances. However, in a system where time constraints and cultural factors may pose significant barriers to patient involvement, it is important to ensure that healthcare providers are properly trained in communication and decision-making skills that can facilitate effective patient involvement. A systematic review and meta-analysis found that Chinese physicians were more likely to use a paternalistic approach to care, in which the physician makes decisions for the patient, rather than a patient-centred approach. 
Another study found that Chinese physicians had a low level of patient-centred communication skills, and that cultural factors and time constraints were major barriers to providing patient-centred care. These findings support the need to assess and describe physician attitudes towards patient-centred care in China. Such a study could help to identify specific areas where training and support may be needed to promote patient-centred care in Chinese healthcare settings. Patient-centred care attitudes of physicians in other countries have also been studied. For example, a study found that physicians in the USA generally had a positive attitude towards patient-centred care and believed that it could improve patient outcomes. Similarly, a study found that physicians in the UK believed that patient-centred care was important and were generally supportive of its principles. However, it is worth noting that the implementation of patient-centred care can vary depending on the specific healthcare system and cultural context. For example, Hammersley et al found that while physicians in Australia generally supported the principles of patient-centred care, they faced barriers in its implementation due to time constraints, resource limitations and the complexity of patients’ health needs. There is strong evidence showing that Chinese physicians do not practice patient-centred care, which highlights the need for further research on physician attitudes towards patient-centred care in China. While physicians in other countries generally have a positive attitude towards patient-centred care, its implementation can vary depending on the specific healthcare system and cultural context. Understanding physician attitudes towards patient-centred care can help to promote the adoption of patient-centred care in healthcare settings, which can lead to improved patient outcomes and increased patient satisfaction. Chinese physicians have long been criticised for focusing on the specifics of disease areas and knowledge provision while neglecting individual patients’ needs and values and failing to engage patients in patient-centred decision-making. This may be because many physicians in China have traditionally believed that patients are not able to make informed decisions about their treatment and that involving patients in decision-making processes might not necessarily lead to better treatment results. This could be one reason for the strained doctor–patient relationships often observed in clinical practice in China, which in turn affect the quality of healthcare services. Physicians’ attitudes towards patient-centred care have been measured in many countries. However, Chinese physicians’ attitudes towards patient-centred care in clinical practice and the factors predicting their attitudes are still underexplored. Objectives and research questions This research aims to measure physicians’ attitudes towards patient-centred care in Chinese healthcare settings and to identify the sociodemographic predictors of their attitudes. The research questions are: What are Chinese physicians’ attitudes towards patient-centred care? What are the sociodemographic predictors of their attitudes?
Study design This research uses an exploratory research design. Unlike explanatory research, which tests specific hypotheses, exploratory research seeks to generate new hypotheses and ideas. Since there is limited research on this specific topic, with little understanding of the factors that influence physicians’ attitudes towards patient-centred care, an exploratory research design is useful as it allows new ideas and hypotheses to be generated. In addition, the relationship between physicians’ attitudes towards patient-centred care and the factors that shape them is complex and multifaceted.
An exploratory research design is particularly useful for investigating complex relationships, as it allows for a more open-minded and flexible approach to the research question. This study used a cross-sectional survey design to investigate Chinese physicians’ attitudes towards patient-centred care. The survey included the Chinese-revised Patient-Practitioner Orientation Scale (CR-PPOS), a previously validated 6-point Likert scale, to measure participants’ attitudes. The CR-PPOS is designed to assess the patient-centredness of healthcare providers from the perspective of the patient. The survey also collected sociodemographic and related information from the participants. Descriptive statistics were used to summarise the distribution of the data, including means, SD, frequencies and percentages. Multivariable logistic regression analyses were performed to identify the sociodemographic predictors of Chinese physicians’ attitudes towards patient-centred care. The logistic regression model allowed the researchers to investigate the relationship between the dependent variable (attitudes towards patient-centred care) and several independent variables (such as age, gender, years of experience and type of practice). The use of a validated survey tool and statistical analysis allowed the researchers to obtain reliable and valid data on Chinese physicians’ attitudes towards patient-centred care and to identify potential predictors of these attitudes. Overall, this study design provided a rigorous approach to investigating the research questions and contributed to the growing body of knowledge on patient-centred care in China. Setting and participants A cross-sectional survey was undertaken from May to June 2022 in nine public and three private hospitals in Beijing. In China, all public hospitals can be classified in a three-tier system that recognises a hospital’s ability to provide healthcare services, clinical training and scientific research. Accordingly, hospitals in China are classified as primary, secondary or tertiary institutions. Primary hospitals are typically community hospitals aiming to provide accessible medical services to the general public. Secondary hospitals provide more comprehensive medical services than primary hospitals including medical students’ clinical training. Tertiary hospitals provide the most comprehensive and specialised healthcare services, medical students’ clinical training as well as being tasked with conducting scientific research. However, not all private hospitals can be classified in this system. Many private hospitals/clinics do not belong to any of these three tiers. The twelve hospitals selected in this research are three primary public hospitals, three secondary public hospitals, three tertiary public hospitals and three private hospitals that do not fit into this classification located in a total of six districts in Beijing. These 12 hospitals mainly practise Western medicine. However, healthcare practitioners in these hospitals may use an integrative approach, combining elements of both traditional Chinese medicine (TCM) and Western medicine to offer patients a range of treatment options. Physicians who practice TCM typically view health as a state of balance and harmony between the body, mind and spirit. They often aim to restore this balance through a holistic approach that includes acupuncture, herbal remedies, dietary adjustments and other therapies. 
TCM physicians may focus on prevention as well as treatment, and may encourage patients to take an active role in their own healthcare. In contrast, physicians who practice Western medicine often rely on scientific evidence and standardised treatments, such as medications, surgery and other interventions. They may view health in terms of the absence of disease or the presence of specific symptoms, and may prioritise the use of technology and specialised expertise to diagnose and treat medical conditions. In recent years, there has been a growing recognition of the value of TCM and other complementary and alternative therapies, and some Western physicians are now incorporating these approaches into their practices. Snowball sampling was used to recruit physicians working full-time in these 12 selected hospitals—we contacted physicians we knew in the hospitals and asked them to help distribute the questionnaires to their colleagues who met the recruitment criteria. The electronic questionnaires were distributed on a popular online survey platform (wenjuanxing.com) to the participants by six trained investigators, who were medical students and medical staff studying or working at the selected hospitals. The lead researcher instructed them to explain the purpose of the study to the participants in groups or individually and to obtain informed consent before distributing the questionnaire. The participants were not given an incentive (monetary or otherwise), but they were asked, before the questionnaires were distributed, whether they were interested in completing a questionnaire. Only those who expressed their interest were given the questionnaire. Instrument The Patient-Practitioner Orientation Scale (PPOS) was developed in 1999 to assess physicians’, medical students’ and patients’ attitudes about the extent to which these parties should have power during healthcare interactions. PPOS has been translated into many languages and validated in various countries. PPOS is a self-administered scale, requiring respondents to indicate their attitudes towards each item using a 6-point Likert scale. It includes 18 items, which are divided into two dimensions: ‘Caring’ and ‘Sharing’. The Caring dimension indicates that respondents believe physicians are oriented to caring about patients’ expectations, needs, preferences and emotions, and are interested in providing holistic healthcare to patients rather than focusing only on treating their diseases. The Sharing dimension indicates that respondents believe physicians are oriented to involve patients in the medical decision-making process. Higher scores on summed items indicate that the respondents are more patient-centred, whereas lower scores indicate that they are more doctor-centred or disease-centred. The first application of PPOS in the Chinese healthcare context was a study by Ting et al, who used it to measure Chinese patients’ attitudes towards patient-centred communication. Ting et al translated the scale into Chinese and made some modifications. However, their research only measured attitudes of patients (not physicians) in a single medical unit located in the southwest of China. In addition, they did not perform reliability analysis on the adapted scale. Wang et al revised the original PPOS and developed the CR-PPOS. They tested the internal consistency and test–retest reliability of CR-PPOS and obtained acceptable results.
The CR-PPOS includes 11 items, where 5 items were retained in the Caring subscale, and the other 6 in the Sharing subscale. The results of exploratory factor analysis indicated that these two subscales were well separated. After developing CR-PPOS, Wang et al used it to measure physicians’ and patients’ attitudes towards patient-centred communication in clinical units in Shanghai, China. The research was conducted with a relatively small sample (116 physicians) using convenience sampling. Liu et al used CR-PPOS to explore Chinese medical students’ attitudes towards patient-centred care and found that gender differences had an impact on attitudes—female participants had more patient-centred attitudes than their male counterparts. However, that study only explored the perspectives of medical students from a province located in the northeast of China; the results cannot be generalised to other parts of China. Moreover, data on physicians’ attitudes were not obtained. Later, Song et al used the CR-PPOS developed by Wang et al to explore the attitudes of physicians working in seven medical institutions located in the same province in China, and revealed low preference of physicians towards patient-centredness. However, studies conducted in the Chinese healthcare context using CR-PPOS have focused so far on investigating physicians working in public hospitals; the attitudes of physicians working in private hospitals in China have not been widely investigated in this regard to date. Nearly two-thirds of hospitals are privately owned in China. However, to date, no studies have explored whether there is a difference on patient-centred attitudes between physicians working in public and private hospitals in China. The survey used in this study consisted of two parts, as can be seen in . The first part elicited data on participants’ demographic and other characteristics (eg, gender, age, years of practice, educational level, overseas education experience, professional title, hospital type, specialty, average working time per workday, workload) and other self-reported variables related to the research problem (eg, physician–patient relationship, communication training, satisfaction with income). The second part of the survey was CR-PPOS, the adapted version of PPOS, which has undergone some validation. There are 11 items in CR-PPOS, scored on a 6-point Likert scale (1 =‘strongly disagree’ to 6=‘strongly agree’). 10.1136/bmjopen-2023-073224.supp1 Supplementary data Data analysis We used SPSS V.26.0 to analyse survey data. After checking that the data were suitable for the use of parametric statistics, t-tests and one-way analyses of variance (ANOVAs) were performed to compare mean group differences in the CR-PPOS scores for each categorical variable, with a p value set at <0.05 to designate statistical significance. However, the large number of statistical tests undertaken means that there is a risk of false positives. Accordingly, we report significance levels for all statistical tests; those with a p value of <0.01 are more likely to be meaningful. Next, we included any statistically significant variables in a multivariable logistic regression analysis. We then calculated median total CR-PPOS scores and scores on the Caring and Sharing dimensions, with the median scores set as the cut point for defining respondents’ attitudes. If respondents’ scores were higher than the median score, they were marked as ‘patient-centred attitudes’. 
Otherwise, they were marked as ‘doctor-centred or disease-centred attitudes’. In the multivariable logistic regression analysis, the dependent variable was coded as ‘1’=‘patient-centred attitudes’ and ‘0’=‘doctor-centred or disease-centred attitudes’. We calculated ORs and their 95% CIs to measure the association between the outcomes and exposures.
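The analysis described above was carried out in SPSS; purely as an illustration of the same pipeline, the sketch below shows how the median split and the multivariable logistic regression with ORs and 95% CIs could be reproduced in Python with pandas and statsmodels. The file name, column names and predictor selection are hypothetical placeholders, not the study’s actual variables or data.

# Illustrative sketch only: median split of CR-PPOS totals followed by a
# multivariable logistic regression reporting ORs with 95% CIs.
# All file and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("physician_survey.csv")  # hypothetical survey export

# Dichotomise attitudes at the median: 1 = patient-centred, 0 = doctor-/disease-centred
median_total = df["cr_ppos_total"].median()
df["patient_centred"] = (df["cr_ppos_total"] > median_total).astype(int)

# Hypothetical predictors retained after the bivariate t-tests/ANOVAs
predictors = ["gender", "communication_training", "satisfaction_with_income",
              "overseas_education", "years_of_practice"]
X = pd.get_dummies(df[predictors], drop_first=True).astype(float)
X = sm.add_constant(X)

model = sm.Logit(df["patient_centred"], X).fit()

# Exponentiate coefficients and confidence limits to obtain ORs and 95% CIs
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(ci[0]),
    "CI_upper": np.exp(ci[1]),
    "p_value": model.pvalues,
})
print(or_table.round(3))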
Higher scores on summed items indicate that the respondents are more patient-centred, whereas lower scores indicate that they are more doctor-centred or disease-centred. The first application of PPOS in the Chinese healthcare context was a study by Ting et al , who used it to measure Chinese patients’ attitudes towards patient-centred communication. Ting et al translated the scale into Chinese and made some modifications. However, their research only measured attitudes of patients (not physicians) in a single medical unit located in the southwest of China. In addition, they did not perform reliability analysis on the adapted scale. Wang et al revised the original PPOS and developed the CR-PPOS. They tested the internal consistency and test–retest reliability of CR-PPOS and obtained acceptable results. The CR-PPOS includes 11 items, where 5 items were retained in the Caring subscale, and the other 6 in the Sharing subscale. The results of exploratory factor analysis indicated that these two subscales were well separated. After developing CR-PPOS, Wang et al used it to measure physicians’ and patients’ attitudes towards patient-centred communication in clinical units in Shanghai, China. The research was conducted with a relatively small sample (116 physicians) using convenience sampling. Liu et al used CR-PPOS to explore Chinese medical students’ attitudes towards patient-centred care and found that gender differences had an impact on attitudes—female participants had more patient-centred attitudes than their male counterparts. However, that study only explored the perspectives of medical students from a province located in the northeast of China; the results cannot be generalised to other parts of China. Moreover, data on physicians’ attitudes were not obtained. Later, Song et al used the CR-PPOS developed by Wang et al to explore the attitudes of physicians working in seven medical institutions located in the same province in China, and revealed low preference of physicians towards patient-centredness. However, studies conducted in the Chinese healthcare context using CR-PPOS have focused so far on investigating physicians working in public hospitals; the attitudes of physicians working in private hospitals in China have not been widely investigated in this regard to date. Nearly two-thirds of hospitals are privately owned in China. However, to date, no studies have explored whether there is a difference on patient-centred attitudes between physicians working in public and private hospitals in China. The survey used in this study consisted of two parts, as can be seen in . The first part elicited data on participants’ demographic and other characteristics (eg, gender, age, years of practice, educational level, overseas education experience, professional title, hospital type, specialty, average working time per workday, workload) and other self-reported variables related to the research problem (eg, physician–patient relationship, communication training, satisfaction with income). The second part of the survey was CR-PPOS, the adapted version of PPOS, which has undergone some validation. There are 11 items in CR-PPOS, scored on a 6-point Likert scale (1 =‘strongly disagree’ to 6=‘strongly agree’). 10.1136/bmjopen-2023-073224.supp1 Supplementary data We used SPSS V.26.0 to analyse survey data. 
Participants’ demographic and other information A total of 1290 physicians were invited to complete the survey, with a target response rate of 90% as indicated by similar studies conducted previously in the Chinese healthcare context; the actual response rate was 84%. Responses with missing values were excluded, leaving 1053 valid responses. Non-responding participants may have had limited time to complete the survey due to work or personal obligations. Demographic and other characteristics of the participants are summarised in . Among the 1053 physicians, 488 were men and 565 were women. Over 70% of them (762) worked in tertiary hospitals. Nearly two-thirds (688) worked in non-surgical departments. Over a quarter (295) worked over 8 hours per workday on average. Roughly the same proportion (287) felt that their workload was too high. Over half (543) did not think they maintained good relationships with their patients. Fully 761 reported that they had not received any physician–patient communication skills training. Only 177 of them were satisfied with their income, with most of these (135) working in tertiary hospitals. Variables related to higher scores of CR-PPOS, Caring subscale and Sharing subscale As shown in , the mean score of the total CR-PPOS was 3.92±0.75, the mean score of the Caring subscale was 4.61±0.95 and the mean score of the Sharing subscale was 3.35±0.85. T-tests and ANOVAs revealed that nine of the thirteen variables were significantly related to higher CR-PPOS total scores. We calculated the effect sizes of each of these nine variables using Cohen’s d for t-test results and η² for ANOVA results, with the following findings: ‘communication training’ (d=0.744), ‘satisfaction with income’ (d=0.742), ‘overseas education experience’ (d=0.740), ‘gender’ (d=0.736), ‘professional title’ (η²=0.079), ‘years of practice’ (η²=0.052), ‘average working time per day’ (η²=0.045), ‘specialty’ (η²=0.022) and ‘hospital type’ (η²=0.012).
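As a point of reference for the values above, and assuming the conventional definitions of these effect size measures were used, the two statistics are

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad
\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}},
\]

where \(\bar{x}_1, \bar{x}_2\), \(s_1, s_2\) and \(n_1, n_2\) are the means, SDs and sizes of the two groups compared in a t-test, and \(SS_{\text{between}}\) and \(SS_{\text{total}}\) are the between-group and total sums of squares from the ANOVA.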
The University of Cambridge MRC Cognition and Brain Sciences Unit cites 0.2, 0.5 and 0.8 as the values for small, medium and large effect sizes as measured by Cohen’s d for t-tests, and 0.01, 0.06 and 0.14 as the corresponding values for effect sizes measured by η² for ANOVA. Female physicians, physicians with less than 5 years of practice, physicians with no overseas education experience, physicians with an intermediate title, physicians working in tertiary hospitals, physicians working in a non-surgical department, physicians working less than 8 hours per workday on average, physicians who have received communication training, and physicians who are satisfied with their income had higher patient-centred scores than those who were men, had worked over 5 years, had overseas education experience, had titles lower or higher than intermediate, worked in primary, secondary or private hospitals, worked in a surgical department, worked over 8 hours per workday, had not received communication training and were unsatisfied with their income. We then performed partial regression analyses and found that the effect of ‘overseas education experience’ on the total scale was not significant after controlling for ‘satisfaction with income’ (p=0.302). T-tests and ANOVAs revealed that eight variables were significantly related to higher Caring subscale scores. The effect size findings were as follows: ‘satisfaction with income’ (d=0.948), ‘overseas education experience’ (d=0.945), ‘gender’ (d=0.938), ‘communication training’ (d=0.953), ‘professional title’ (η²=0.068), ‘hospital type’ (η²=0.051), ‘years of practice’ (η²=0.041) and ‘average working time per day’ (η²=0.030). Female physicians, physicians who had worked from 11 to 20 years, physicians with no overseas education experience, physicians who have intermediate titles, physicians working in tertiary hospitals, physicians who work 8–10 hours per workday, physicians who have received communication training, and those who are satisfied with their income obtained higher Caring subscale scores than those who were men, had worked less than 11 years or over 20 years, had overseas education experience, had titles lower or higher than intermediate titles, worked in primary, secondary or private hospitals, worked less than 8 or more than 10 hours per workday, had not received communication training, and were unsatisfied with their income. We then performed partial regression analyses and found that the effect of ‘overseas education experience’ on the Caring subscale was not significant after controlling for ‘satisfaction with income’ (p=0.336), the effect of ‘average working time per day’ on the Caring subscale was not significant after controlling for ‘years of practice’ (p=0.718), and the effect of ‘title’ on the Caring subscale was not significant after controlling for ‘years of practice’ (p=0.560). T-tests and ANOVAs revealed that 10 variables were significantly related to higher Sharing subscale scores. We again calculated the effect sizes of these 10 variables with the following findings: ‘gender’ (d=0.938), ‘communication training’ (d=0.845), ‘satisfaction with income’ (d=0.845), ‘overseas education experience’ (d=0.843), ‘workload’ (d=0.842), ‘physician–patient relationship’ (d=0.842), ‘specialty’ (d=0.825), ‘professional title’ (η²=0.078), ‘average working time per day’ (η²=0.047) and ‘years of practice’ (η²=0.045).
Female physicians, physicians who have worked between 11 and 20 years, who have no overseas educational experience, have no title, work in a non-surgical department, work less than 8 hours per workday on average, did not feel they had a high workload, have received communication training, did not feel they maintained good physician–patient relationships, and who are satisfied with their income obtained higher Sharing subscale scores than those who were men, had worked less than 11 or more than 20 years, had overseas educational experience, had titles, were working in a surgical department, worked 8 hours or more per workday, felt that they had a high workload, had not received communication training, felt they had maintained good physician–patient relationships, and were not satisfied with their income. Again, we performed partial regression analyses and found that the effect of ‘overseas education experience’ on the Sharing subscale was not significant after controlling for ‘title’ (p=0.264) and ‘satisfaction with income’ (p=0.066), and the effect of ‘physician–patient relationship’ on the Sharing subscale was not significant after controlling for ‘workload’ (p=0.572). presents the results of all 13 t-tests and ANOVAs (arranged according to the order of their appearance in the survey). Multivariable logistic regression analysis The median scores of the total CR-PPOS, Caring subscale and Sharing subscale were 44, 24 and 20, respectively, which were used as the cut-off points for ‘patient-centred attitude’ and ‘doctor-centred or disease-centred attitude’. For example, we considered CR-PPOS scores larger than 44 as ‘patient-centred’ and scores less than or equal to 44 as ‘doctor-centred or disease-centred’. Multivariable logistic regression analysis revealed that, on the Caring subscale, female physicians, physicians who had worked between 11 and 20 years, physicians who had intermediate titles or deputy chief titles, and those working in tertiary hospitals tended to be more patient-centred (OR=2.201, 95% CI 1.685 to 2.876; OR=0.284, 95% CI 0.103 to 0.787; OR=2.849, 95% CI 1.157 to 7.018; OR=2.595, 95% CI 1.746 to 3.850, respectively). On the Sharing subscale, physicians who had worked between 5 and 10 years, who had intermediate titles, and those working in non-surgical departments tended to be more patient-centred (OR=0.510, 95% CI 0.305 to 0.853; OR=1.806, 95% CI 1.051 to 3.105; OR=1.388, 95% CI 1.030 to 1.871, respectively). On the total CR-PPOS, female physicians, physicians with intermediate titles and those who were working in tertiary hospitals tended to be more patient-centred (OR=1.532, 95% CI 1.160 to 2.022; OR=2.089, 95% CI 1.206 to 3.618; OR=2.198, 95% CI 1.465 to 3.297, respectively). displays these results. Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
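For readers less familiar with logistic regression output, and assuming the standard large-sample (Wald) construction was used, each reported odds ratio and its confidence interval are derived from the fitted coefficient \(\beta_j\) and its standard error as

\[
\mathrm{OR}_j = e^{\beta_j}, \qquad 95\%\ \mathrm{CI} = e^{\beta_j \pm 1.96\,\mathrm{SE}(\beta_j)}.
\]

For example, the OR of 2.201 reported for female physicians on the Caring subscale means that, holding the other covariates in the model constant, the odds of scoring above the median cut-off (ie, being classed as patient-centred) were roughly 2.2 times higher for female than for male physicians.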
This research analysed factors related to physicians’ attitudes towards patient-centred care in the Chinese healthcare context. The results show that the average total CR-PPOS score was 3.92, which is higher than has been found elsewhere in China (in Heilongjiang (3.24) and Shanghai (3.66)), but lower than in Australia (4.46) and the USA (4.26). This indicates that physicians in Beijing, at least in this study, tend to be more patient-centred than physicians in Heilongjiang and Shanghai, but less patient-centred than physicians in Australia and the USA. Physicians in this study obtained a higher mean score on the Caring subscale (4.61) than on the Sharing subscale (3.35), indicating that they hoped to treat patients supportively, paying attention to their psychosocial background information. This is consistent with other studies conducted in China, while different from studies conducted in Australia and Portugal, where physicians had higher mean scores on the Sharing subscale than on the Caring subscale. The reason physicians in China had higher scores on the Caring subscale than on the Sharing subscale may be that Asian cultures tend to presume that medical decisions should be made by doctors, or even families, instead of engaging patients in medical decision-making processes. Indeed, Chinese patients tend to expect their physicians to lead in medical consultations and to make decisions. Studies have also found that many patients see themselves as having insufficient knowledge to deal with medical issues. Female physicians obtained higher mean scores than male physicians on the total CR-PPOS, the Sharing subscale and the Caring subscale, indicating that female physicians tend to be more patient-centred than male physicians, which is consistent with findings of research conducted in countries outside China. A study conducted with Chinese medical students also found that female students were more patient-centred than men. The findings of Liu et al reveal similar gender differences in attitudes towards patient-centred care. In addition, in clinical communication, female medical staff are reported to have better communication strategies with patients than their male counterparts, which may explain, at least to some extent, why female physicians tend to be more patient-centred than male physicians.
Physicians working in tertiary hospitals obtained higher scores on the CR-PPOS, Caring subscale and Sharing subscale, indicating that they tend to be more patient-centred than physicians working in other hospitals. Tertiary hospitals in China have the most demanding entry requirements of all hospital types for physicians in terms of educational background. There is also evidence that patients in China prefer tertiary hospitals to others when seeking medical services. This implies that tertiary hospitals may be influential in the development and implementation of patient-centred care in the Chinese healthcare system. Future research could further explore why the patient-centred care concept is more developed in tertiary hospitals than in other hospitals, and what this implies for medical system reform. This research also found that physicians who work in non-surgical departments tend to be more patient-centred than those working in surgical departments. Surgeons are reported to experience particular stress through training and clinical practice. In China, physicians working in surgical departments are generally more emotionally stressed, with higher burnout and work pressures than those working in non-surgical departments; this may help explain the difference between physicians working in surgical and non-surgical departments in terms of their attitudes towards patient-centred care. Physicians who had received training in physician–patient communication tended to be more patient-centred than those who had not. It has been found that effective communication with patients during clinical practice is important in providing a high-quality medical service. Improving physicians’ communication with patients could contribute to better physician–patient relationships and reduce medical complaints. Another interesting finding is that physicians who are satisfied with their income tended to be more patient-centred than those who were unsatisfied. Physicians’ satisfaction with income has been found to be associated with their job satisfaction and the way they deal with patients in clinical settings. However, 83.2% of the surveyed physicians reported being unsatisfied with their income. To address this issue, actions to reform the whole Chinese healthcare system may be required. Furthermore, this research found that physicians who do not feel they have high workloads have higher patient-centred scores on the Sharing subscale and the total CR-PPOS. This may be because physicians with higher workloads are less capable of providing emotional support to their patients. Overall, 28% of the surveyed physicians reported that they worked over 8 hours per workday on average, with 27.3% of physicians reporting that they have a high workload. We would argue that if patient-centred care is to be provided, it may be necessary to ensure that physicians have reasonable workloads and work hours. The findings of this study provide a foundation for future research to further investigate the complex relationship between physicians’ attitudes towards patient-centred care and the factors that shape them. This study has implications for medical education, policy and practice. Medical educators and policymakers could take into account the factors identified in this study when designing training and policy interventions aimed at improving patient-centred care.
Specifically, interventions aimed at improving physician–patient communication, reducing physicians’ workload, and promoting patient-centred care attitudes and skills should be considered. In addition, the study highlights the need for increased attention to the training of physicians working in surgical departments and the involvement of tertiary hospitals in the development and implementation of patient-centred care. As this was an exploratory study, the findings highlight several issues that warrant further investigation. First, the reasons for the observed gender differences in patient-centred care attitudes should be explored further. Second, future research could investigate the impact of job satisfaction and income on patient-centred care attitudes. Third, it would be valuable to conduct longitudinal studies to assess changes in patient-centred care attitudes over time, as well as the impact of interventions aimed at promoting patient-centred care. The study provides insights into the factors that influence Chinese physicians’ attitudes towards patient-centred care, and highlights differences in patient-centred care attitudes between physicians in China and other countries. The study’s use of a well-validated tool to measure patient-centred care attitudes and its large sample size also contribute to its strengths. The study also has several limitations that should be considered when interpreting its results. First, the study was conducted in a single city in China and may not be generalisable to other regions or countries. Second, the study relied on self-reported data from physicians, which may be subject to social desirability bias. Third, the sampling strategy used in this study (snowball sampling) may be susceptible to sampling bias. Participants who are recruited through referrals may share similar characteristics or opinions. This can lead to a biased sample that does not accurately represent the target population. Additionally, snowball sampling may miss certain subgroups that are not well connected to the existing participants, further exacerbating the sampling bias. Finally, the study did not examine the impact of patient-centred care attitudes on patient outcomes, so further research is needed to investigate this relationship. This research identified sociodemographic predictors of Chinese physicians’ attitudes towards patient-centred care. It found that gender, professional title and hospital type influence Chinese physicians’ attitudes towards patient-centred care. Female physicians, physicians with an intermediate title and those who work in tertiary hospitals tend to have more patient-centred attitudes than male physicians, physicians with other titles and those who work in primary, secondary or private hospitals. It was also found that physicians working in non-surgical departments, those who have received training in physician–patient communication, and those who are satisfied with their income had higher patient-centred scores on the CR-PPOS and the two subscales. These findings imply that more attention could be paid to these factors in medical education. The findings of this study can inform medical educators and policymakers on how to improve physician–patient relationships and provide high-quality healthcare services to patients in China. The study has several implications for future policy, practice and research. First, policies aimed at improving physician–patient communication and reducing physicians’ workload should be considered to enhance patient-centred care.
Second, medical education and training programmes should focus on the development of patient-centred care attitudes and skills, particularly for physicians working in surgical departments. Third, the development and implementation of patient-centred care in China may require the involvement of tertiary hospitals. Finally, future research should explore reasons for the gender differences in patient-centred care attitudes and investigate the impact of job satisfaction and income on patient-centred care attitudes.
Aetiology of ear infection and antimicrobial susceptibility pattern among patients attending otorhinolaryngology clinic at a tertiary hospital in Dar es Salaam, Tanzania: a hospital-based cross-sectional study
e7fe5e43-8814-44d4-97db-379664b5c3a7
10083798
Otolaryngology[mh]
An ear infection is among the leading causes of deafness in many low/middle-income countries. Unfortunately, most patients with ear infections in resource-limited settings delay seeking medical attention and hence usually present with complications. Bacteria are the leading pathogens of ear infection, whereby Staphylococcus aureus, Pseudomonas aeruginosa, Proteus mirabilis and Klebsiella species are the dominant bacteria causing ear infection globally. In addition, Candida spp and Aspergillus spp are the predominant fungal isolates responsible for ear infections. However, due to limited diagnostic opportunities, fungal ear infections are often undiagnosed, especially in resource-limited countries, including Tanzania. Most practitioners in our settings tend to treat ear infections empirically or adhere to the standard treatment guideline (STG) without considering laboratory investigation and antimicrobial susceptibility testing (AST) results. This has created a gap in managing most ear infections, which raises the risk of acquiring multidrug-resistant bacteria. When first-line antibiotics cannot treat diseases, more costly antibiotics must be used. This consequently affects patients’ treatment options, resulting in prolonged hospital stays and increased healthcare costs, which increases families’ financial burden and reduces quality of life. Furthermore, there are limited data on the effectiveness of empirical treatment in managing ear infections in Tanzania. However, based on the clinic’s patient return rate after initial treatment for ear infections, it appears that a considerable number of patients return to the clinic with the same problem. This suggests that relying solely on empirical treatment methods may not be effective in treating ear infections. Hence, this warrants further research to investigate the antimicrobial susceptibility patterns of bacteria isolated in ear infections to improve the outcome of ear infections following appropriate empirical treatment. Aetiological studies of ear infections are essential to guide the choice of an effective antibiotic and monitor bacterial patterns and their varying antimicrobial susceptibilities. This is crucial for risk analysis, mitigation measures and logistical plans. Therefore, this study aimed to determine the aetiological pathogens and antimicrobial susceptibility patterns of bacteria causing ear infections. The data obtained, if used, will strengthen prevention and control measures and update the management and treatment options for ear infections. Also, the information will serve as a baseline for countrywide surveillance of antibiotic resistance. Study design and settings We conducted a hospital-based cross-sectional study from March to July 2021 in the otorhinolaryngology clinic at Muhimbili National Hospital (MNH), Dar es Salaam, Tanzania. MNH is the leading national referral hospital, research centre and university teaching hospital. It is the largest tertiary healthcare facility in Tanzania. The hospital has a capacity of 1500 beds, attending 1000 to 1200 outpatients per week and admitting 1000 to 1200 inpatients per week. The otorhinolaryngology department has inpatient and outpatient units; about 20–30 patients attend the outpatient clinic per day.
Study participants The study included patients attending the otorhinolaryngology clinic with signs and symptoms of ear infection, such as accumulation of fluid in the middle ear, bulging of the eardrum, ear pain, ear itching, perforation of the eardrum and ear discharge (otorrhoea). We excluded patients with other hearing disorders unrelated to infection (congenital malformations, physical head injury) and those attending regular check-ups. Sample size and sampling procedure The study sample size was estimated using the Leslie Kish (1965) formula for a cross-sectional study, considering the prevalence of 62.1% reported previously by Mushi et al in a study conducted in a tertiary hospital in Mwanza city, Tanzania. The minimum sample size was 241 participants; allowing for a 5% non-response rate, we obtained a sample size of 255 participants. Data collection Data collection was conducted by two trained research assistants (RAs) and an ear, nose and throat (ENT) surgeon; briefly, a structured questionnaire was administered to the participants by the two RAs. The RAs used the questionnaire to collect demographic data (age, sex, marital status, occupation and education) and behavioural risk characteristics (swimming, frequent use of earphones, cotton buds, sharp objects and cigarette smoking). In addition, the participants’ clinical information, including the type of ear infection, use of antibiotics, nasal congestion or blockage, recurrent upper respiratory tract infection (URTI), and cerumen impaction, was also collected from the patients’ medical records and during a physical examination by the ENT surgeon. In this study, chronic suppurative otitis media (CSOM) was diagnosed when there was persistent otorrhoea from the ear for at least 3–12 weeks despite appropriate medical treatment or when there was a persistent eardrum perforation with otorrhoea for more than 3 months. This chronicity of otorrhoea distinguishes CSOM from acute otitis media, a short-term middle ear infection with acute onset and rapid resolution. Specimen collection The ENT surgeon collected specimens with precautions to prevent contamination. A sterile swab was used to clear the oozing pus from the patient’s ear; another sterile swab was then used to collect fresh pus. The collected specimens were kept at room temperature in Stuart’s transport media before processing at the central pathology laboratory. Isolation and identification On arrival in the laboratory, specimens were processed for culture and identification. Each specimen was inoculated on selective and non-selective media: chocolate agar (CA), sheep-blood agar (BA), MacConkey agar (MCA) and Sabouraud dextrose agar (SDA). We used CA to isolate fastidious bacteria, such as Haemophilus influenzae and Streptococcus pneumoniae, the frequent aetiological agents of ear infection. MCA was used as a selective and differential medium for Gram-negative bacteria, and BA was used as a general-purpose medium. SDA was used for the isolation of fungal species. We incubated MCA in an aerobic environment and BA and CA in a 5% CO2 environment at 37°C for 18–24 hours. Bacterial isolates were identified by interpreting colonial morphologies, microscopic examination (Gram stain) and biochemical tests. The catalase and coagulase tests were performed for Gram-positive bacteria, while Kligler iron agar, sulfur indole motility, citrate and urease tests were used for Gram-negative bacteria.
Further, phenotypical identification and confirmation of Gram-negative bacterial isolates were performed with Analytical Profile Index tests, API 20E and API 20NE. For fungal isolates, growth on the SDA plate was used preliminarily to classify isolates as moulds or yeasts based on the colonial morphology and colour. A germ tube test was used to identify Candida albicans. In addition, lactophenol cotton blue was used for moulds to identify the conidial spores of Aspergillus spp. Antimicrobial susceptibility testing AST for bacterial isolates was performed using the Kirby-Bauer disc diffusion method on Mueller-Hinton agar (MHA), and MHA supplemented with 5% blood for S. pneumoniae, following the 2021 Clinical and Laboratory Standards Institute (CLSI) guidelines. Zones of inhibition were measured in millimetres using a ruler and interpreted as susceptible, intermediate or resistant according to the 2021 CLSI guideline. The antibiotic discs used were as follows: ciprofloxacin (5 µg), trimethoprim/sulfamethoxazole (1.25/23.75 µg), gentamicin (10 µg), clindamycin (2 µg) and erythromycin (15 µg) for Gram-positive bacteria; ciprofloxacin (5 µg), trimethoprim/sulfamethoxazole (1.25/23.75 µg), gentamicin (10 µg), meropenem (10 µg), amoxicillin/clavulanic acid (20 µg), ceftriaxone (30 µg) and ceftazidime (30 µg) for Enterobacterales and Acinetobacter spp; and ciprofloxacin (5 µg), gentamicin (10 µg), meropenem (10 µg) and ceftazidime (30 µg) for Pseudomonas spp. Standard methods were used to identify methicillin-resistant S. aureus (MRSA) using a cefoxitin (30 µg) disc, whereby resistant isolates were considered MRSA positive. In addition, screening for extended-spectrum beta-lactamase-producing Enterobacterales (ESBL-PE) was done using ceftazidime (30 µg) and cefotaxime (30 µg) antibiotic discs and, if resistant, ESBL-PE confirmation was done by the double-disc synergy method. Quality control The reference organisms and reagents were clearly and uniquely labelled, dated and stored at optimal conditions. The room, incubator and refrigerator temperatures were monitored daily. The culture media were prepared following the manufacturer’s guidelines and internal standard operating procedures and tested for performance and sterility. Data analysis The data were analysed using SPSS V.23 software. Continuous variables were summarised as the median and IQR, whereas percentages and proportions were used to describe categorical variables. The resistance rate was obtained by dividing the number of isolates resistant to a specific drug by the total number of isolates of the bacterial species tested. AST intermediate results were regarded as resistant. Reporting guideline This study adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for cross-sectional studies, which provide a checklist for reporting observational studies. The checklist includes crucial elements that should be included in the report, such as the study design, participant selection, data collection and statistical analysis. The authors have carefully reviewed the checklist to ensure that they incorporated each relevant item into the study design and analysis. The authors used a standardised data collection tool to collect information on all study participants and employed appropriate statistical methods to analyse the data and draw conclusions. Patient and public involvement Patients and the public were not involved in this research’s design, conduct, reporting or dissemination plans.
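As an illustration of the resistance rate calculation described in the data analysis above, the following is a minimal sketch in Python/pandas, assuming a long-format table with one row per isolate–antibiotic test and an interpretation column coded S/I/R; the file and column names are illustrative assumptions, and the actual analysis was performed in SPSS V.23.

import pandas as pd

# One row per isolate-antibiotic result, e.g. columns:
# organism ('S. aureus', 'P. aeruginosa', ...), antibiotic ('ciprofloxacin', ...),
# interpretation ('S', 'I' or 'R') -- illustrative column names, not the study dataset
ast = pd.read_csv('ast_results.csv')

# Intermediate (I) results are regarded as resistant, as in the analysis described above
ast['resistant'] = ast['interpretation'].isin(['I', 'R'])

# Resistance rate per organism-antibiotic combination:
# resistant isolates / total isolates of that species tested against the drug
rates = (
    ast.groupby(['organism', 'antibiotic'])['resistant']
       .agg(resistant='sum', tested='count')
)
rates['resistance_rate_%'] = 100 * rates['resistant'] / rates['tested']
print(rates.round(1))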
Participants’ demographic, clinical and risk behaviour characteristics Two hundred and fifty-five participants were recruited; 52.5% (134/255) were males. The median age was 31 years (IQR: 15–49). Most participants (30.2%) were students, 32.9% had a college education and 15.7% were from outside the Dar es Salaam region ( ). The median duration of ear infections was 210 days (IQR: 21–1095). Otitis externa (OE) was the most common type of ear infection, accounting for 45.1% (115/255), followed by CSOM (41.2%) ( ). Around 49% of the participants with ear infections had a history of antibiotic use, whereby ciprofloxacin eardrops were the most prescribed topical antibiotic. In addition, 33.3% of the study participants had nasal congestion/blockage/discharge, and 28.2% had recurrent URTI ( ). Distribution of bacterial and fungal isolates causing ear infections In this study, 136 out of 255 (53.3%) participants had a positive aerobic culture for either a bacterial or a fungal pathogen, whereby 10.3% (14/136) of participants had a polymicrobial infection (mixed growth of either two different bacteria or a bacterial and a fungal pathogen). A total of 150 isolates (bacteria and fungi) were identified, of which 87.3% (131/150) were bacteria. Gram-negative bacteria predominated among the bacterial isolates (71.0%, 93/131). The most common bacterial isolates were S. aureus (27.5%, 36/131), followed by P. aeruginosa (24.4%, 32/131) ( ). On the other hand, Candida spp accounted for 63.2% (12/19) of the isolated fungi (data not shown). Moreover, 41% of isolates were obtained from CSOM patients. Further stratification of isolated pathogens by type of ear infection showed that S. aureus (16/131, 12.2%) was the most prevalent bacterium in OE patients, whereas P. aeruginosa (22/131, 16.8%) predominated in CSOM patients ( ). In this study, 34.4% (21/61) of the Enterobacterales, excluding P. aeruginosa, were ESBL-PE, and Klebsiella spp was predominant, accounting for 33.3% (7/21) of the ESBL-PE isolates ( ). On the other hand, 44.4% (16/36) of the S. aureus isolates were MRSA (data not shown). Antimicrobial susceptibility pattern of bacterial isolates Almost all (93%) isolated Enterobacterales were resistant to amoxicillin/clavulanic acid, with Escherichia coli and Acinetobacter spp being 100% resistant. Also, 73% of isolated bacteria were resistant to ceftazidime (data not shown), whereby P. aeruginosa had the highest resistance rate of 75%. In addition, 43% of isolated bacteria were resistant to trimethoprim-sulfamethoxazole (data not shown), with E. coli leading at a 75% resistance rate. Sulfamethoxazole-trimethoprim resistance rates ranged from 57% to 100% among ESBL producers, higher than the 29%–100% observed among non-ESBL producers. Moreover, 14.6% (6/41) of the non-ESBL-PE bacteria were resistant to all the third-generation cephalosporins, and all non-ESBL-PE isolates were sensitive to meropenem. S. aureus had an 89% resistance rate to erythromycin. However, MRSA isolates were more resistant to sulfamethoxazole-trimethoprim (81%) and gentamicin (50%) than non-MRSA isolates (35% and 25% for sulfamethoxazole-trimethoprim and gentamicin, respectively).
In this study, we report that resistance to ciprofloxacin, a primary topical antibiotic used to manage ear infections, was 22%. Most isolated bacteria had a low resistance rate against meropenem (4%) ( ). Understanding the aetiology of ear infections and resistance patterns is crucial in planning interventions and managing ear infections.
The results indicate that a substantial proportion of ear infections were culture positive, with bacteria as the primary aetiological agents. Most isolated bacteria were resistant to third-generation cephalosporins, sulfamethoxazole-trimethoprim and amoxicillin/clavulanic acid. Gram-positive bacteria were highly resistant to erythromycin. The two most effective antibiotics were ciprofloxacin and meropenem. The results imply the need to review ear infection management and the selection of an effective antibiotic. The study found that many ear infections are of bacterial aetiology. This finding is similar to studies done in Tanzania by Kennedy et al in Morogoro, Zephania et al in Dar es Salaam, Martha et al in Mwanza and other studies in Kenya and India. Furthermore, we observed that S. aureus and P. aeruginosa are the leading bacterial aetiological agents of ear infection, similar to previous studies in Tanzania, Nigeria, Angola, Kenya and India. In addition, this study found Candida spp and Aspergillus spp to be the fungal species causing ear infections, consistent with previous findings in Tanzania and elsewhere (Nigeria, Iran, Ethiopia, Egypt and India). Indeed, the contribution of fungal aetiology to ear infections in this study was expected because many individuals had risk behaviours for fungal ear infections, including excessive use of eardrops containing antibiotics, regular cleaning of ears and swimming. Antibiotic overuse promotes the growth of fungi, and the regular ear cleaning habit removes cerumen and exposes ears to fungal colonisation and, subsequently, infection. The current study revealed a high proportion of MRSA (44.4%) and ESBL-PE (34.4%). In addition, our study showed Klebsiella spp (33.3%) as the dominant ESBL-PE. The high proportion of MRSA and ESBL-PE coincides with studies done in Tanzania by Martha et al among patients with CSOM infection and another study in India. The greater inclination for self-prescribing and empirically prescribing antibiotics without considering laboratory culture and sensitivity may explain the higher proportion of ESBL and MRSA. Furthermore, an increased tendency for people to visit hospital facilities due to chronic ear infections can also explain the high incidence of ESBL and MRSA, as it raises the danger of exposure to multidrug-resistant (MDR) bacteria. In addition, the increased proportion of ESBL and MRSA may be partly attributed to the tendency to use inanimate objects to remove earwax, as these inanimate objects are often found in environments that may be contaminated with ESBL-producing bacteria and MRSA. Almost all isolated bacteria (93%) were resistant to amoxicillin/clavulanic acid. Nearly three-quarters of Gram-negative bacteria were resistant to ceftazidime, and about half were resistant to trimethoprim-sulfamethoxazole. On the other hand, 89% of Gram-positive isolates were resistant to erythromycin. ESBL-PE and MRSA isolates were more resistant to the most commonly used antimicrobial agents than non-ESBL-PE and non-MRSA isolates. The resistance patterns found in the current study are similar to those reported in other studies in Tanzania, Kenya, Ethiopia, India, Egypt and Romania. The frequent use of these antibiotics to treat various bacterial infections in our setting and the likelihood that most bacterial species have developed resistance to antimicrobial drugs over time may contribute to the observed resistance pattern. In this study, most isolated bacteria were sensitive to meropenem and ciprofloxacin.
Ciprofloxacin is a drug of choice for ear infections as per the standard treatment guidelines (STGs) in our setting. The fact that meropenem is infrequently used to treat ear infections may explain its high sensitivity rate. Surprisingly, we observed that ciprofloxacin is still effective despite being prescribed often in our setting for treating ear infections. We do not have a clear clinical explanation for why quinolones remain effective in treating ear infections. However, these results provide reassurance that quinolones are still beneficial as first-line topical antibiotics for ear infections. This study has some limitations. We were not able to identify the fungal isolates to species level owing to insufficient funding and limited resources. To mitigate this, all fungal isolates were stored appropriately for future testing. In addition, due to financial constraints and lack of equipment, it was impossible to isolate anaerobic bacteria from the collected pus specimens. The results of this study indicate that bacteria are the most common cause of ear infections in our context. Furthermore, we report that many multidrug-resistant bacteria (ESBL-PE and MRSA) are implicated in causing ear infections. Therefore, antimicrobial susceptibility testing is crucial to guide clinicians on appropriately managing ear infections in our setting.
Dental treatment of patients with prune belly syndrome
20c1bd3c-c37b-4c0a-8b38-4cb88fb7ce3b
10083899
Dental[mh]
BACKGROUND Prune belly syndrome (PBS), also known as Eagle-Barrett syndrome (EGBRS), is a rare congenital disease affecting around 1 in 30 000–40 000 live births, with a marked male predominance. It is characterized by a triad of features: deficiency or absence of abdominal wall muscles, urological abnormalities (megaureter, hydroureter, hydronephrosis, vesicoureteral reflux, megacystis), and bilateral cryptorchidism. Autosomal recessive inheritance of a mutated variant of the CHRM3 gene on chromosome 1q43 and a sex-influenced autosomal recessive mode of inheritance have been suggested. A genetic contribution to etiology is also supported by the occurrence of PBS associated with chromosomal defects, specifically trisomy 21 and large deletions in the long arm of chromosome 6. PBS also often occurs as a sporadic condition. The primary pathogenetic cause of PBS is a functional obstruction of the urethra early in the development of the urogenital system. This leads to vesicoureteral reflux and increased hydrostatic pressure that damages renal tissue. Increased intraabdominal pressure may contribute to impaired development of the abdominal wall muscles. Essentially, PBS is considered to be a mesodermal developmental defect. PBS profoundly affects a child's physical, emotional, social, and academic functioning. Perinatal mortality rates for PBS have been reported between 10% and 25%. The purpose of this article is to educate dental practitioners about PBS. We will describe disturbances of various organs and dental abnormalities, and we will propose modifications of dental treatments that could contribute to better patient care. METHODS We reviewed research articles published from 1965 to 2021 using the PubMed, Scopus, and Google Scholar databases. Individually or in combinations, we used the keywords prune belly, prune belly syndrome, PBS, Eagle-Barrett, dental manifestation, clinical manifestation, and psychological aspects. The search was run with no language restrictions. We obtained 522 articles on PBS and 11 articles on EGBRS. Interest in PBS started to increase in the late eighties and has continued to the present. The majority of articles dealt with diagnosis and treatment. We focused on information related to oral health and dental care. RESULTS 3.1 Diagnosis Most patients affected with PBS are diagnosed as newborns presenting with the characteristic appearance of the abdominal wall. Findings shown by maternal ultrasound at 11–12 weeks of pregnancy can suggest PBS. The classical triad of features confirming the diagnosis of PBS is usually identified later in pregnancy. PBS can be classified based on antenatal and postnatal features into three categories. Category 1 PBS includes patients with severe renal dysfunction and pulmonary hypoplasia. It has an almost 100% mortality rate. Category 2 PBS patients display the classic triad of features with varying degrees of renal dysplasia. There is large variation in the severity of PBS; some patients may need early dialysis. Category 3 patients have normal renal function and mild phenotypic features of PBS. Of all the medical issues of PBS, renal impairment has the largest effect on the patient's health status. As kidney function declines, renal osteodystrophy may develop. Decline in kidney function causes a deficient activation of vitamin D and decreased renal reabsorption of calcium, leading to hypocalcemia.
Hypocalcemia promotes increased secretion of parathyroid hormone (secondary hyperparathyroidism), which is diagnosed with routine blood tests measuring the levels of parathyroid hormone, calcium, and other minerals. In addition, decreased production of erythropoietin by the kidney causes anemia. The majority (90%) of patients with chronic renal disease suffer from oral symptoms. These include but are not limited to gingival bleeding, gingival hyperplasia, pulp obliteration, delay or alteration of tooth eruption, osteoporosis, infections (most frequently candidiasis) and xerostomia. Renal osteodystrophy in the jaw can cause tooth mobility, malocclusion, and even weakening of the jaw, which could result in fracture (spontaneous or after dental procedures) and abnormal bone healing after extractions. 3.2 Renal impairment and cardiac side effects Category 2 PBS patients with moderate to severe renal dysplasia often need dialysis. These patients require special consideration in relation to planned dental treatment, not only because of the multiple oral manifestations of PBS but also due to side effects of the medications they receive. Consultation with the patient's nephrologist will inform the dentist about the current stage of renal disease, the types of treatment the patient receives, possible modifications of medications and the best timing of the dental treatment. An anticoagulant (heparin) is administered during dialysis, which may increase the risk of bleeding. Therefore, it is important to plan dental treatment on non-dialysis days. Regarding antibiotic premedication protecting a dialysis patient from bacterial endocarditis, the American Heart Association's recommendations should be considered and discussed with the patient's physician. Hypertension is another feature of chronic renal disease. If untreated, a process started by deposition of calcium into the endothelium of the renal arteries narrows their lumen and the volume of blood supplied to the kidneys diminishes. The kidneys react by secretion of renin, which activates the renin-angiotensin system. Together with increased secretion of aldosterone, water and sodium are reabsorbed, the volume of blood supplied to the kidneys is increased, but blood pressure is also increased. The increased blood pressure further aggravates the processes narrowing the renal arteries. This vicious cycle repeats and may, eventually, lead to renal failure. In total, 30% of PBS patients will become candidates for kidney transplantation during their lifetime. Patients placed on antihypertensive medications often demonstrate a variety of dental problems that are primarily related to xerostomic side effects. Xerostomia increases caries risk. Other oral problems include taste changes, ulcerations, gingival enlargement, increased risk of gingival bleeding, and lichenoid reactions. A conversation with the physician prescribing medication for a patient may be helpful to find out if a different medication with fewer oral side effects could be substituted while maintaining the desired antihypertensive results. 3.3 Pulmonary impairment Pulmonary hypoplasia is associated with more severe cases of PBS. A good medical history of a patient with PBS includes asking the patient (or caregiver) if they have had any respiratory problems. If so, consultation with the patient's primary care physician or specialist needs to take place prior to a planned dental treatment of the patient.
Abnormal respiratory function in PBS patients may be caused by pulmonary hypoplasia, rib cage abnormalities (such as pectus excavatum), thoracic abnormalities (such as scoliosis), and abdominal muscle weakness. In patients with chronic renal disease, pulmonary function may be impaired as a consequence of uremia. Patients with chronic respiratory conditions may be at risk of poor oral health due to systemic inflammation. The medications used to treat various pulmonary diseases may lead to xerostomia which, consequently, increases the risk of caries and oropharyngeal candidiasis. Discussion with the patient's physician needs to include the level of observed xerostomia. If the patient's medication cannot be changed or discontinued, all methods to improve oral hygiene need to be explored and topical fluorides provided. Treatment of oropharyngeal candidiasis can be accomplished by the dentist, or the patient can be referred back to the primary care physician for treatment. 3.4 Oral findings As PBS is a rare medical condition that presents with a wide phenotypic spectrum, many oral findings have been reported in individual patients. In a 15-year-old patient with PBS, all teeth showed characteristics of hypoplasia and generalized hypocalcification, yellowish color, and marked depressions on the tooth surfaces. There is also an increased predisposition to caries and malocclusion resulting from enamel hypoplasia. The patient reported a sensation of dry mouth and sensitivity of all teeth. Calculus was found in a toddler with end-stage renal disease due to PBS. One article discusses a PBS patient with geminated primary teeth and hypodontia. Oral presentations may include cleft lip and gingival fibromatosis. In a 4-year-old patient, all maxillary teeth were covered by excessive gingival tissue except the deciduous central incisors. As gingival hyperplasia can be a side effect of antihypertensive medications, antirejection medications such as cyclosporin, and calcium channel blockers, all of the patient's medications should be checked. 3.5 Dental treatment Preparing for dental treatment of a patient with PBS requires a consultation with the patient's physician to obtain information about the patient's health and about the degree of impairment of various organs, especially the kidney. Although a thorough health history can start familiarizing the dentist with the patient's overall medical status, it cannot be assumed that the patient or the caregiver completely understands the full scope and ramifications of PBS health issues. Health literacy includes a set of skills needed to make appropriate health decisions and the ability to relay this information. National data indicate that more than one-third of U.S. adults have limited health literacy. Questions to be included in the conversation with the physician are: (1) what category of PBS does the patient fall into, (2) what organs have been affected and to what degree, (3) what medications is the patient currently taking, (4) does the physician recommend premedication with antibiotics, (5) should the dosage of administered drugs be adjusted for decreased kidney function, and (6) are there any other modifications to dental care that the physician can recommend. After the initial exam, if oral pathology is noted that could be a side effect of the medications given by the physician, conversation with the prescribing physician could result in their substitution by equally effective medications with fewer oral pathological sequelae.
As PBS has multiple presentations which range from high mortality within the first few years of life to minimal or no noted organ impairment, treatment must be personally tailored to the individual patient. For patients with relatively minor or no significant organ impairment secondary to their PBS, no alterations to routine dental care are indicated. However, a patient who has significant organ impairment and medical issues associated with PBS may need multiple changes to routine dental care. These need to be discussed with the patient's primary care physician or specialist and explained to the patient or caregiver. A list of possible presentations of medical co-morbidities associated with PBS is shown in the Table, together with suggestions on possible limitations/changes to routine dental care. 3.6 Teledentistry As with any patient who has impaired organ function and may be immunocompromised either due to presented medical issues or due to prescribed medications, limiting time in the dental clinic is advisable, especially during events such as the COVID-19 pandemic. Initial contact with the patient or caregiver can safely and effectively be conducted via teledentistry. The dentist can meet the patient without a facial covering, which is less frightening for young children, and can spend time with the patient or the treating physician to obtain information regarding medical care and review any necessary adjuncts, such as antibiotic premedication for patients with significant leukopenia, prior to the patient's planned dental treatment. All post-operative and triage appointments should be considered for teledentistry. Although 11% of PBS patients have some degree of hearing loss, this method of patient care should be used whenever possible. Care for patients with PBS may pose additional challenges not only for the dental provider but also for the patient and caregiver. One study evaluated the health-related quality of life for children with PBS and their caregivers and found lower overall health-related quality of life (HRQoL) scores. PBS patients had HRQoL scores comparable to those of children with cerebral palsy. Arlen and colleagues found that 84% of children with PBS scored at least half a standard deviation below healthy children. The article concluded that "PBS profoundly affected the HRQoL in children and negatively impacted physical, emotional, social, and school functioning". Caregivers of PBS patients also report an overall lower quality of life, highlighting the challenges that families with chronically ill children frequently face.
CONCLUSIONS AND RECOMMENDATIONS PBS is a rare congenital disease with multiple clinical presentations ranging from minor deficiencies of the abdominal wall musculature to severe cases with high mortality within the first few years of life. Many patients with PBS need special dental care because compromised renal and respiratory functions negatively affect their oral and dental health. One obstacle with regard to dental care for these patients is the difficulty of finding a dentist who fully understands the variation in features and medical history of patients with PBS and who is willing to provide compassionate and safe care to prune belly patients. We believe that providing information to dental practitioners will improve their understanding of PBS, will help them to better treat these patients, and will encourage them to welcome patients with PBS into their practice. The authors declare no conflict of interest.
Birth outcomes by type of attendance at antenatal education: An observational study
08a118b2-dd83-46a5-9cc3-4a5d2614eab9
10083900
Patient Education as Topic[mh]
Antenatal education aims to provide expectant parents with information about pregnancy, childbirth, breastfeeding and parenthood. Women may attend antenatal education, in addition to antenatal care, to be informed, obtain advice, have their questions answered, reduce anxiety, meet other parents, have a better labour and/or reduce birth intervention, as well as gain parenting advice. Antenatal education is undertaken by a range of healthcare providers, including physiotherapists and midwives, as well as specifically trained childbirth and parenting educators. Education varies in content, focusing on childbirth fear and pain, pain relief techniques, mode of birth, parenting, breastfeeding, relaxation training as a life skill and/or specifically for labour and birth, and may include breathing and relaxation methods for pain relief, termed psychoprophylaxis. Education may involve couples or women alone, be run in small groups or large classes, and have different formats, including lectures, role play, leaflets, telephone and/or online delivery, and may have different numbers of sessions. The content may also vary from class to class within one program type, and may change over time with different educators. Over the last few decades, there has been a rise in obstetric interventions during labour and birth in most developed countries. This has led to interest in antenatal education as a strategy to reduce birth interventions, particularly caesarean section. In 2018, the World Health Organization released recommendations about reducing unnecessary caesarean sections which stated that 'health education for women is an essential component of antenatal care'. In addition, they noted that educational interventions and support programs, including childbirth training workshops, nurse-led applied relaxation training, psychosocial couple-based prevention programs and psychoeducation for women with fear of childbirth, are recommended to reduce caesarean births. Systematic reviews and meta-analyses have found that childbirth training workshops for mothers and couples, as well as nurse-led applied relaxation training and psychoprophylaxis couple-based programs, were associated with a reduction in caesarean section and may increase spontaneous vaginal birth rates (relative risk (RR) 2.25, 95% CI 1.16–4.36). However, studies included in the meta-analysis were small, included different educational interventions in various maternity settings, and a number had the potential for bias. Given the rise in caesarean section rates and recent evidence suggesting antenatal education may reduce caesarean section rates, we aimed to determine the type of antenatal education classes attended by nulliparous women and evaluate the impact of this on mode of birth and other birth outcomes. Study population This prospective, cross-sectional study included nulliparous pregnant women with a singleton pregnancy ≥28 weeks gestation planning to have their baby at two hospitals in Sydney, Australia from July 2017 to December 2018. The Royal Hospital For Women (RHW) is a tertiary maternity hospital with approximately 4000 births per annum. St George Hospital (STG) is a maternity hospital within the same area health service, able to care for women giving birth after 32 weeks gestation, with approximately 2500 births per annum, serving a socio-demographically diverse population. Study design and recruitment The study combined patient data from three sources: a pregnancy survey, a postnatal survey and hospital pregnancy outcome data.
The self-administered surveys collected information on socio-demographic characteristics, attendance at antenatal classes, type of class, satisfaction with education and birth outcomes (Appendix ). Pregnant women were identified via hospital antenatal classes, clinics, and wards as well as other hospital education sessions (eg free breastfeeding information sessions or hospital tours). Women enrolled in antenatal classes who agreed to be approached about research were identified at the time of online registration. They were sent a participant information sheet and a secure REDCap weblink to the pregnancy survey at around 28 weeks gestation. Women recruited face-to-face were given either a paper survey or sent a secure weblink. Electronic surveys were in English and paper surveys were available in English, Mandarin and Arabic. Women completing the first survey opted in or out to receive a follow-up postnatal survey and/or allow researchers to access their hospital birth records. Women (known to have had a live baby) who consented to receive a postpartum survey were sent a survey weblink six weeks after the expected due date, with a single reminder. Birth outcome data were obtained from the hospitals' maternity database (e-Maternity, Meridian Health Informatics, Sydney, Australia). Antenatal class attendance The study's exposure of interest was attendance at antenatal classes and type of classes attended. Multiple classes were available at the two hospitals, and in the community by a range of non-hospital providers (Table ). Women chose which, if any, class(es) to attend. Antenatal classes provided at each hospital usually incur a fee. Classes were held during the day, evening and/or weekend. We aimed to recruit 100 women who attended no classes, as well as women attending each type of hospital antenatal class. As such, recruitment ceased at different times for different types of classes. However, recruitment ceased before the anticipated number of women attending no antenatal classes was reached, due to limited resources. Education attendance was self-reported on both the pregnancy and postpartum survey; if women completed both surveys, only information from the postpartum survey was used. For women who attended two or more types of education, education was categorised into a hierarchy ('She births' > 'Calmbirth' > 'Having a baby (STG or RHW)' > 'Birth intensive' > midwife > other), and women were placed into the highest ranked class category reported. For analysis, education was then classified into four groups: psychoprophylaxis ('She births', 'Calmbirth'), birth and parenting ('Having a Baby at the RHW', 'Having Your Baby' at STG), other ('Birth intensive', midwife classes, 'Active Birth' at STG, other classes) or none. Study outcomes The main outcomes of the study were mode of birth, defined as vaginal birth versus caesarean section, use of regional analgesia (epidural, spinal, or combined epidural/spinal) among women having a vaginal birth during labour and birth, and maternal postnatal length of stay (≤3 or >4 days). Secondary outcomes included type of labour, perineal trauma, gestational age, birthweight, and infant feeding at discharge. These outcomes were defined using hospital maternity data. For women who did not consent to provide their hospital data, mode of birth and analgesia use were ascertained from self-reported responses on the postpartum survey.
In addition, self-reported satisfaction with birth (five-point Likert scale), measured on the postpartum survey, was also examined, with responses dichotomised as satisfied/very satisfied versus neither satisfied nor unsatisfied/unsatisfied/very unsatisfied. Statistical methods Descriptive statistics were used to explore socio-demographic characteristics of women by uptake and classified group of antenatal birth education attended. Pearson's χ2 tests or Fisher's exact tests and Kruskal-Wallis tests were used to compare differences between socio-demographic characteristics and type of antenatal class group for categorical and continuous variables, respectively. Multivariable logistic regression was used to examine the association between antenatal birth class attendance and mode of birth, use of regional analgesia among women having a vaginal birth during labour and birth, and maternal postnatal length of stay. Birth class attendance was examined by the four class groups as well as by any class attendance. Models were adjusted for potential confounding variables selected a priori, including maternal age, body mass index, gestational age, model of antenatal care, and birth hospital. For regional analgesia, we also assessed findings for women without labour and those without emergency or elective caesarean section, respectively. Due to differences between the birth populations at the two hospitals, an additional sensitivity analysis was conducted excluding women delivering at hospital B. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA) and P-values <0.05 were considered statistically significant. Ethics approval was obtained from the South Eastern Sydney Local Health District Human Research Ethics Committee (Ref no: 17/090 (LNR/17/POWH/198)) with site-specific approval for both sites.
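As a concrete reading of the class-hierarchy rule described in the Methods above, the following Python sketch (illustrative only, not the study's actual code) assigns a woman who reported more than one type of education to her highest-ranked class and then to one of the four analysis groups. The class names and group mapping follow the text; community classes outside the hierarchy (eg 'Active Birth' at STG) are not modelled here.

```python
# Minimal sketch of the categorisation rule: highest-ranked class first,
# then mapped to one of the four analysis groups used in the paper.
HIERARCHY = ["She births", "Calmbirth", "Having a baby (STG or RHW)",
             "Birth intensive", "Midwife", "Other"]  # highest priority first

GROUP = {
    "She births": "psychoprophylaxis",
    "Calmbirth": "psychoprophylaxis",
    "Having a baby (STG or RHW)": "birth and parenting",
    "Birth intensive": "other",
    "Midwife": "other",
    "Other": "other",
}

def classify(classes_attended: list[str]) -> str:
    """Return the analysis group for a woman's self-reported classes."""
    if not classes_attended:
        return "none"
    # pick the attended class that appears earliest in the hierarchy
    top = min(classes_attended, key=HIERARCHY.index)
    return GROUP[top]

print(classify(["Midwife", "Calmbirth"]))  # -> psychoprophylaxis
print(classify([]))                        # -> none
```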
Overall, 723 eligible women completed the antenatal survey, and 505 women (69.9%) with birth data were included in this study (Fig. ). Demographic characteristics are presented in Table . Three-quarters (78%) of all women were tertiary educated. Most women surveyed (89%) attended antenatal education, with 23% attending psychoprophylaxis, 39% birth and parenting and 26% other education. The socio-demographic characteristics of women differed by type of antenatal education attended (all P < 0.02), with women not attending classes less likely to be born in Australia/New Zealand or to have care in a midwifery group practice, and having lower income and education levels than those attending classes. The median gestation at birth was 39 weeks with a median birthweight of 3385 g (Table ). Seventy percent of women had a vaginal birth (42% unassisted, 28% instrumental birth) and 30% had a caesarean section, with just over half (56%) having regional analgesia. There was a difference in the type of labour, mode of birth, perineal trauma, gestational age and infant birthweight by type of antenatal education attended, but no difference in infant feeding at discharge (Table ). Specifically, a higher proportion of women who attended psychoprophylaxis education had a vaginal birth (79%) compared with women who attended birth and parenting, other or no education (69%, 67% and 60%, respectively; P = 0.045). Compared with women who did not attend antenatal education, women who attended psychoprophylaxis education were more than twice as likely to have a vaginal birth (odds ratio (OR) 2.54, 95% CI 1.28–5.07). However, after adjusting for maternal characteristics, birth and hospital factors, the association was attenuated (adjusted OR (aOR) 2.03; 95% CI 0.93–4.43) (Table ). There was no association between mode of birth and attendance at any antenatal class (aOR 1.42; 95% CI 0.74–2.71) and no difference in results when restricting the analysis to women from hospital A (Table ). When comparing use of regional analgesia by antenatal education, women who attended psychoprophylaxis antenatal education were twice as likely not to have epidural or spinal analgesia for birth as women who did not attend education (OR 2.04, 95% CI 1.01–4.11). However, this association was attenuated after adjusting for confounders (aOR 1.93; 95% CI 0.87–4.29) (Table ). There was no difference in length of maternal stay by attendance at any antenatal education or by type of education attended (Table ). There was no overall difference in satisfaction with birth by type of antenatal class ( P = 0.082) (Figure ).
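To make the adjusted estimates above concrete, here is a minimal, purely illustrative sketch of the type of model used: a multivariable logistic regression for mode of birth adjusted for the a priori confounders named in the Methods, with coefficients exponentiated into adjusted odds ratios. The study itself used SAS 9.4; the Python/statsmodels code and the synthetic data frame below are assumptions for illustration only, not the authors' analysis.

```python
# Illustrative sketch: adjusted odds ratios from a multivariable logistic
# regression. Variable names mirror the text; all values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "vaginal_birth": rng.integers(0, 2, n),
    "class_group": rng.choice(["none", "psychoprophylaxis", "birth_parenting", "other"], n),
    "maternal_age": rng.normal(31, 4, n),
    "bmi": rng.normal(24, 4, n),
    "gestational_age": rng.normal(39, 1.2, n),
    "model_of_care": rng.choice(["standard", "midwifery_group"], n),
    "hospital": rng.choice(["A", "B"], n),
})

model = smf.logit(
    "vaginal_birth ~ C(class_group, Treatment(reference='none')) "
    "+ maternal_age + bmi + gestational_age + C(model_of_care) + C(hospital)",
    data=df,
).fit(disp=False)

# exponentiate coefficients and confidence limits to get aORs with 95% CIs
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["aOR", "2.5%", "97.5%"]
print(or_table.round(2))
```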
Women who attended psychoprophylaxis education were more likely to have a vaginal birth than a caesarean section compared to women who did not attend education. They were also less likely to have regional analgesia for birth. However, after adjusting for confounders, these associations were attenuated, likely due to small numbers. This reduction in caesarean section and regional analgesia rates in women who attended psychoprophylaxis education is similar to findings reported in a recent randomised trial. In that study, women were randomised to either a two-day program (which included acupressure, relaxation, visualisation, breathing, massage and yoga techniques, with partner support, in addition to standard care) or standard care alone. They found a reduction in caesarean section from 32.5% to 18% (RR 0.52, 95% CI 0.31–0.87) and a reduction in the rate of epidural use (from 69% to 24%, RR 0.35, 95% CI 0.23–0.52). Psychoprophylaxis antenatal education aims to prepare women and their partners for childbirth through education on physiological/hormonal birth; to build women's confidence in their ability to labour and give birth through psychological preparation for normal labour (a positive mindset); and to support their ability to give birth without pain relief using evidence-based tools for birth preparation. A recently published Cochrane review found acupuncture may increase satisfaction with pain relief compared to sham acupuncture, and probably reduces the use of pharmacological analgesia; however, there was no difference in caesarean section. Acupressure compared to a sham control was associated with a reduction in caesarean section rate (RR 0.44, 95% CI 0.27–0.71, four trials, 313 women). However, many studies included in the meta-analysis were at high or unclear risk of bias. A further Cochrane review, which compared relaxation methods, yoga, music and mindfulness for pain management in labour, found that the use of some relaxation therapies, yoga, or music may possibly be helpful in reducing the intensity of pain and in helping women feel more in control and satisfied with their labours; however, findings were limited by the low quality of the studies. A protocol for an individual participant meta-analysis on types of birth classes has recently been published, and studies are planned to evaluate the effectiveness of a new low-cost psychoprophylaxis childbirth education program on caesarean section rates. Studies have found that not only do the content and format of antenatal classes differ widely, but also there is a lack of evidence-based guidelines about what and how education should be offered. One study found no difference in epidural analgesia rates or obstetric interventions with small group antenatal education compared to standard auditorium lectures. Paz-Pascual used Delphi methodology to survey health professionals and non-health professionals to identify topics which should be included in antenatal education programs. They found there was consensus on content items, including: care during the initiation and establishment of breastfeeding; information for shared decision-making with regard to childbirth; identification of problems in the postpartum period and coping tools; advice about healthy lifestyle; and information on options for pain management during labour and birth.
Given the increasing rates of obstetric intervention, including caesarean section, in Australia and other countries, and the high costs of caesarean section to the health service and government, implementation of universal free antenatal education for nulliparous women could contribute to a reduction in associated costs. An economic analysis of the study by Levett et al found that an effective antenatal education program could lead to cost savings of $659 per woman in Australia. Further research needs to determine not only whether antenatal education leads to differences in obstetric interventions, but also the cost-effectiveness of antenatal education interventions in a range of populations. In addition, during COVID-19 lockdowns, antenatal education moved online; however, it has now moved back to face-to-face or a hybrid model. It is not known whether outcomes differ between face-to-face education and online models. The strengths of this study include its prospective design and that it was conducted in nulliparous women at two hospitals serving a multicultural population. Nulliparous women were included as the mode of first birth strongly influences subsequent births, and if education leads to improved outcomes, then nulliparous women have the most to gain. We included women who attended a range of different types of antenatal education or no education and used both survey data and routinely collected data to obtain obstetric outcomes. Limitations of the study included the low response rate to the postnatal survey, although we obtained birth outcome data from both survey and hospital data. In addition, we do not know whether women who chose certain types of antenatal education were more motivated to have a vaginal birth than women who went to other types of education or women who did not attend education. Unanticipated staffing problems led to closure of the study without the targeted sample size being obtained. In addition, the women in the study were highly educated compared to the general population. In conclusion, we found that nulliparous women who attended psychoprophylaxis couple-based antenatal education programs had a trend toward higher rates of vaginal birth and lower rates of epidural use. Given the high and rising rates of caesarean section and their impact on costs and maternal health outcomes, antenatal education may provide an effective strategy to reduce these. Future high-quality randomised trials in a broader range of populations comparing different types of antenatal education without economic barriers to attendance are required to determine whether psychoprophylaxis education can improve obstetric outcomes. Figure S1. Maternal satisfaction with birth by type of antenatal education. Appendix S1. Antenatal classes pregnancy survey.
First‐trimester screening for pre‐eclampsia and small for gestational age: A comparison of the gaussian and Fetal Medicine Foundation algorithms
48902bf3-e875-4d1c-8057-994044cf27bf
10083925
Pediatrics[mh]
INTRODUCTION Pre-eclampsia (PE) and small for gestational age (SGA) are the main complications of placental disease. First-trimester PE screening using algorithms that include a combination of maternal characteristics, biophysical markers (mean arterial blood pressure [MAP] and mean uterine artery pulsatility index [UtAPI]), and biochemical markers (placental growth factor [PlGF] and pregnancy-associated plasma protein A [PAPP-A]) can predict PE and SGA. The Fetal Medicine Foundation (FMF) and Gaussian algorithms can identify 80%–90% of pregnant women who will develop PE with delivery <32/<34 weeks of gestation and 60%–70% of women who will develop PE with delivery <37 weeks, at a 10% false-positive rate (FPR). These algorithms can also predict 50%–60% of SGA with delivery <32 weeks and 30%–40% of SGA with delivery <37 weeks. Both algorithms use a similar methodology to assess the risk for PE: they combine the a priori risk (based on maternal characteristics and obstetric and medical history) with the results of various biochemical and biophysical markers to estimate the individual a posteriori risk for PE, which is used to classify a pregnant person as at high or low risk for PE. In both algorithms, the risk for PE can be obtained based on maternal factors alone and in combination with any of the biochemical and/or biophysical markers. Although the FMF algorithm is the most widely used and validated worldwide, the Gaussian algorithm has some features that confer advantages in the clinical setting, which is why it has been used for routine first-trimester PE screening in most maternities in Spain since 2018. First, blood samples for measurement of the biochemical markers (PAPP-A and PlGF) can be drawn between 8 + 0 and 13 + 6 weeks, as with routine aneuploidy screening, while in the FMF algorithm biomarkers should be assessed only between 11 + 0 and 13 + 6 weeks. Second, UtAPI assessment can be done both transabdominally and transvaginally, rendering the algorithm more versatile across different clinical settings, whereas the UtAPI for the FMF algorithm can be assessed only transabdominally. Third, likelihood ratios for the a priori risk calculation were not derived from the study population in which the algorithm was investigated but from a larger meta-analysis that included >25 million pregnancies. This may render the Gaussian algorithm less overfitted to a given population and, therefore, more adaptable for populations with different characteristics. The FMF algorithm has been developed and prospectively validated in large populations, showing comparable predictive performance to the original study. By contrast, the Gaussian algorithm has been investigated only in a single cohort of participants. In the past few years, routine PE screening has been implemented in most hospitals, leaving virtually no untreated (aspirin-free) women at high risk for PE in whom the external validity of the Gaussian algorithm could be prospectively assessed. Therefore, an indirect approach to test the performance of the Gaussian algorithm is to compare it with the most externally validated combined screening tool for PE worldwide: the FMF algorithm. The aim of this study was to compare the predictive accuracy for PE and SGA of the Gaussian and FMF algorithms. MATERIALS AND METHODS This is a secondary analysis of previously published data, which was used to test the Gaussian algorithm for early-onset PE prediction.
That study was approved by the local ethics committee (CEIC-VHIR PR[AMI]265/2018) and conducted in a prospective fashion at Vall d'Hebron University Hospital (Barcelona) from October 2015 to September 2017. A total of 3777 unselected singleton pregnant women attending their routine first-trimester scan (from 11 + 0 to 13 + 6 weeks) were invited to participate, and 2946 women agreed and provided their written informed consent. Of those, 305 participants (10.4%) had to be excluded for the following reasons: missing outcome data (n = 86), major fetal defects or chromosomopathies (n = 13), miscarriage or fetal death <24 weeks (n = 15), and insufficient remaining blood sample to measure PlGF (n = 191). Before the implementation of first-trimester combined screening for PE in 2018, no PE screening was performed at the Vall d'Hebron University Hospital; thus, none of the remaining 2641 participants received aspirin at any time during their pregnancy. Neonatal birthweight was not available for 158 participants; therefore, predictive accuracies for SGA were calculated with 2483 participants and their newborns. Gestational age was confirmed by fetal crown-rump length measurement during the first-trimester scan. Maternal characteristics and medical and obstetric history were recorded at the first-trimester ultrasound scan via a patient questionnaire. The following maternal characteristics were recorded: age (years); height (centimeters); weight (kilograms); ethnicity (white European, South American, black, Asian, South-East Asian, and others); smoking during pregnancy (yes/no); and conception method (spontaneous/assisted reproductive technology/ovulation drugs). Medical history variables included the presence of chronic hypertension (yes/no); diabetes (type 1/type 2/no); renal disease (yes/no); systemic lupus erythematosus (yes/no); and antiphospholipid syndrome (yes/no). Obstetric history variables included parity (nulliparous/multiparous); gestational age at birth (weeks) in the last pregnancy; interval between the last delivery and the beginning of the current one (years); and personal or family history of PE (yes/no). Biochemical markers, including serum PAPP-A and PlGF, were measured at the first-trimester routine blood test for aneuploidy screening (from 8 + 0 to 13 + 6 weeks) by the fully automated Elecsys assays for PAPP-A and PlGF on an immunoassay platform (cobas e analyzers, Roche Diagnostics). Biophysical markers, including MAP and UtAPI, were assessed at the first-trimester scan. Blood pressure was measured automatically using a calibrated device according to a standard procedure: a single measurement in one arm (right or left) while women were seated and after a 5-min rest. MAP was calculated as: diastolic blood pressure + (systolic blood pressure − diastolic blood pressure)/3. UtAPI was measured following the recommendations of the FMF. All examiners were certified by the FMF for PE risk assessment and Doppler ultrasound assessment. SGA newborns were defined as having a birthweight below the 10th centile according to customized local charts. Indication for elective delivery was based on Doppler ultrasound findings and conventional cardiotocogram interpretation, according to the current protocol. Newborns were classified as early SGA if delivery occurred before 32 weeks and as preterm SGA if delivery occurred before 37 weeks.
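For clarity, the MAP formula quoted above reduces to a one-line function. The sketch below is a worked example only; the blood-pressure readings are arbitrary illustrations, not study data.

```python
# Worked example of the MAP formula used in the study:
# MAP = diastolic BP + (systolic BP - diastolic BP) / 3
def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    return diastolic + (systolic - diastolic) / 3

print(round(mean_arterial_pressure(120, 80), 1))  # 93.3 mm Hg
print(round(mean_arterial_pressure(140, 90), 1))  # 106.7 mm Hg
```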
PE was defined according to the guidelines of the International Society for the Study of Hypertension in Pregnancy: systolic blood pressure ≥ 140 mm Hg and/or diastolic blood pressure ≥ 90 mm Hg, confirmed by repeated measurements over a few hours, developing after 20 weeks in previously normotensive women, accompanied by proteinuria ≥300 mg in 24 h, spot urine protein/creatinine ratio ≥0.3 mg/mg, or dipstick urinalysis ≥1+ when a quantitative method was not available. Early-onset and preterm PE were defined as PE requiring delivery before 34 and 37 weeks, respectively. For the Gaussian algorithm, multiples of the median (MoMs) for each marker were calculated according to the methodology described in a previous study. For the FMF algorithm, MoMs were obtained using the batch calculation tool provided on the FMF website. We then coded the variables required for the prediction formulas according to the description provided in the corresponding published articles. For the Gaussian algorithm, the prenatal screening software SsdwLab 6 (SBP Soft 2007 S.L) was used to calculate early-onset PE probability scores. For the FMF algorithm, the risk calculation tool provided on the FMF website was used. Besides the a priori risks, the four markers (PAPP-A, PlGF, MAP, and UtAPI) can be incorporated alone or in combinations of two, three, or four for risk calculation, depending on the markers available in clinical practice. Therefore, there are 15 possible marker combinations. Nevertheless, only the seven most clinically relevant have been investigated in this study (MAP alone, MAP + PlGF, MAP + UtAPI, MAP + PAPP-A, MAP + UtAPI + PAPP-A, MAP + UtAPI + PlGF, and MAP + UtAPI + PlGF + PAPP-A). 2.1 Statistical Analysis The statistical software RStudio Team (version 1.2.5033 [2019], RStudio: Integrated Development for R. RStudio, Inc.) was used for statistical analysis. Categorical data were reported as frequency and percentage, and comparisons between groups were performed by chi-square or Fisher tests, as appropriate. Continuous variables were reported as the median and interquartile range, and the Mann–Whitney U test was used to assess differences between groups. Receiver operating characteristic (ROC) curves were generated and detection rates (DRs) at fixed 5%, 10%, 15%, 20%, 25%, and 30% FPRs were calculated for both algorithms. The predictive accuracies of both algorithms were compared for a fixed FPR of 10% as well as for the resulting areas under the curve (AUC), which were compared by the DeLong test. Bonferroni correction was used in all tests when multiple comparisons were assessed. Statistical significance was set at P < 0.05.
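The detection-rate metric used throughout this comparison is simply the sensitivity read off the ROC curve at a fixed false-positive rate. The minimal Python sketch below illustrates the idea only: the original analysis was done in R, the simulated risk scores and outcome prevalence are hypothetical, and the DeLong comparison of AUCs is omitted.

```python
# Illustrative sketch (not the study's R code): AUC and detection rate
# (sensitivity) at a fixed 10% false-positive rate from simulated risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
outcome = rng.binomial(1, 0.01, 5000)          # ~1% prevalence of preterm PE
risk = rng.normal(0, 1, 5000) + 1.5 * outcome  # higher scores in affected pregnancies

auc = roc_auc_score(outcome, risk)
fpr, tpr, _ = roc_curve(outcome, risk)

def detection_rate_at_fpr(fpr, tpr, target=0.10):
    """Sensitivity of the screening test at (or just below) the target FPR."""
    return tpr[fpr <= target].max()

print(f"AUC = {auc:.3f}")
print(f"DR at 10% FPR = {detection_rate_at_fpr(fpr, tpr):.1%}")
```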
RESULTS Among the 2641 participants, 30 (1.14%) women developed preterm PE, including 11 (0.42%) with early‐onset PE. Among the 2483 newborns, 44 (1.77%) were preterm SGA, including 8 (0.32%) with early‐onset SGA. Characteristics of the study population are summarized in Table and Table . For prediction of early‐onset and preterm PE, and early‐onset and preterm SGA, the Gaussian and FMF algorithms showed a similar predictive performance with all marker combinations, except for early‐onset PE prediction with MAP and PAPP‐A (Gaussian AUC = 0.833 [95% CI, 0.727–0.939] vs FMF AUC = 0.771 [95% CI, 0.631–0.911]; P = 0.002), MAP and PlGF (Gaussian AUC = 0.905 [95% CI, 0.844–0.965] vs FMF AUC = 0.858 [95% CI, 0.768–0.947]; P = 0.01), and MAP alone (Gaussian AUC = 0.795 [95% CI, 0.679–0.912] vs FMF AUC = 0.758 [95% CI, 0.621–0.895]; P = 0.02), where the FMF algorithm showed a significantly lower AUC Tables , , , . For early‐onset PE prediction, the Gaussian algorithm showed the greatest AUC when combining maternal history, MAP, UtAPI and PlGF (0.951; 95% CI, 0.919–0.983), followed by the combination of all markers (0.945; 95% CI, 0.912–0.979). The FMF algorithm showed the greatest AUC when combining all markers (0.945; 95% CI, 0.908–0.982). For preterm PE prediction, the Gaussian algorithm showed the greatest AUC when combining maternal history, MAP and PlGF (0.802; 95% CI, 0.722–0.881), followed by the combination of all markers without PAPP‐A (0.798; 95% CI, 0.704–0.893). The FMF algorithm showed the greatest AUC when combining all markers (0.818; 95% CI, 0.728–0.907). For early‐onset SGA prediction, the Gaussian algorithm showed the greatest AUC when combining maternal history, MAP and PlGF (0.840; 95% CI, 0.710–0.970), followed by the combination of all markers without PAPP‐A (0.811; 95% CI, 0.641–0.982). The FMF algorithm showed the greatest AUC when combining all markers (0.906; 95% CI, 0.834–0.978). For preterm SGA prediction, the Gaussian algorithm showed the greatest AUC when combining maternal history, MAP, UtAPI, and PlGF (0.697; 95% CI, 0.612–0.782), followed by the combination of all markers (0.684; 95% CI, 0.598–0.769). The FMF algorithm showed the greatest AUC when combining all markers (0.727; 95% CI, 0.645–0.809). DISCUSSION This study shows that the Gaussian and FMF algorithms have similar predictive accuracies for PE and SGA, except for early‐onset PE, where the FMF algorithm showed a significantly lower AUC with the combinations of MAP and PAPP‐A, MAP and PlGF, and MAP alone. These significant differences could be partly attributed to the different methodology required for MAP assessment in both algorithms. In this study, MAP was measured once in only one arm and after a 5‐min rest, while the FMF algorithm was designed with an average of two MAP measurements performed at 1‐min intervals in both arms simultaneously after a 5‐min rest. This different methodology for MAP measurements may have affected the accuracy of all combinations including MAP in the FMF algorithm, but especially MAP alone or those combinations that included MAP with one other factor. The FMF algorithm has been externally validated by several studies in various populations, showing comparable performance to that of the original study.
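The AUC comparisons reported above were performed with the DeLong test; as a hedged stand-in, the sketch below approximates such a comparison with a paired bootstrap over subjects, using synthetic correlated scores rather than the study data.

```python
# Paired bootstrap comparison of two correlated AUCs on synthetic data.
# This is an approximation for illustration; the study itself used the DeLong test.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2641
y = rng.binomial(1, 0.011, size=n)
score_a = rng.normal(0.0, 1.0, n) + 1.5 * y              # stronger synthetic score
score_b = 0.8 * score_a + rng.normal(0.0, 0.6, n)        # correlated, weaker score

observed = roc_auc_score(y, score_a) - roc_auc_score(y, score_b)
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                          # resample subjects with replacement
    if y[idx].min() == y[idx].max():                     # skip resamples with one class only
        continue
    diffs.append(roc_auc_score(y[idx], score_a[idx]) - roc_auc_score(y[idx], score_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference {observed:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```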
Nevertheless, one study showed that some algorithms could underperform when applied to populations different from the population in which they were developed. In this study, we show that the performance of the FMF algorithm in a Spanish population was similar to the performance obtained in the original study, further supporting the external validity of the FMF algorithm. By contrast, the predictive ability of the Gaussian algorithm has not been evaluated in other studies, aside from the original study where it was first validated. It must be noted that the Gaussian algorithm was not developed in our population, but only validated, since this algorithm was constructed using previously published data from a large meta‐analysis. This might make this algorithm less likely to be overfitted to our population and, therefore, less likely to underperform when applied to a different population. Since first‐trimester PE screening and aspirin prescription have been implemented in most countries across Europe, prospective external validation of the Gaussian algorithm in untreated populations seems unlikely. Therefore, a reasonable indirect approach to assess the predictive performance of the Gaussian algorithm is to compare it with the FMF algorithm, which has been extensively validated in various large populations. Although our results cannot be considered an external validation of the Gaussian algorithm, the similar accuracies of both algorithms suggest that the FMF algorithm is unlikely to outperform the Gaussian algorithm in our population, where the latter has been routinely used in most maternity units since 2018. For this reason, we believe that the Gaussian algorithm might be a reasonable alternative to the FMF algorithm for settings where the FMF algorithm cannot be applied, either because ultrasonographers perform UtAPI both transabdominally and transvaginally or because biomarkers for the aneuploidy and PE screenings are measured before 11 weeks. The results of this study are relevant since the Gaussian algorithm is already being implemented in other countries aside from Spain. Additionally, as seen in previous studies, we confirm that PAPP‐A does not increase the predictive accuracy of any of the algorithms when PlGF is used; however, when PlGF is not available, PAPP‐A could increase DR by 5% with some marker combinations. Finally, we observed that a single measurement of MAP could decrease the predictive accuracy of the FMF algorithm; therefore, the appropriate methodology (the average of two measurements in both arms simultaneously) should be followed when using this algorithm.

One of the main strengths of this study is the prospective enrollment of patients. Furthermore, this study was performed within the context of routine clinical practice and patients were seen by their usual physicians, making the results more reliable and applicable in routine care settings. Moreover, this is the first study assessing the performance of the FMF algorithm exclusively in a Spanish cohort and in a clinical setting where MAP was measured once and only in one arm, showing similar results to those reported in the original study for most combinations of markers. Despite a previous study showing that prediction of PE is similar when biomarkers are measured before or after 11 weeks, the FMF algorithm was designed with biomarkers assessed between 11+0 and 13+6 weeks. In this study, biomarkers were measured before 11+0 weeks in 1675 (63.4%) women.
Therefore, another notable strength of our work is that it provides evidence of the applicability of the FMF and Gaussian algorithms before and after 11 weeks for predicting PE and SGA. The main limitation of our study is the low number of cases with early‐onset SGA and early‐onset PE and the relatively low number of cases with preterm SGA and preterm PE. Additionally, indication for elective delivery of SGA fetuses based on Doppler and cardiotocogram findings may be different when using other fetal growth restriction protocols. However, Doppler and cardiotocogram classification is uniform in Spain, where the Gaussian algorithm is widely used. Another limitation to be noted is that the technique for MAP measurements may potentially reduce the FMF algorithm's performance and could explain its lower AUC versus the Gaussian algorithm for some marker combinations. CONCLUSIONS This study shows that the first‐trimester Gaussian and FMF algorithms have similar predictive performances for PE and SGA in a Spanish population within a routine care setting. The accuracy of the FMF algorithm in our study was similar to that reported in previous studies, adding evidence to its external validity. Berta Serrano, MD; Erika Bonacina, MD; Pablo Garcia‐Manau, MD; Manel Mendoza, MD, PhD; and Elena Carreras, MD, PhD, had full access to all of the data in the study and take full responsibility for the integrity of the data and accuracy of the data analysis. Berta Serrano, MD; Erika Bonacina, MD; Pablo Garcia‐Manau, MD; Manel Mendoza, MD, PhD; and Elena Carreras, MD, PhD, conceived and designed the study. Berta Serrano, MD; Erika Bonacina, MD; Carlota Rodo, MD, PhD; Pablo Garcia‐Manau, MD; María Ángeles Sanchez‐Duran, MD, PhD; María Pancorbo, MD; Cristina Forcada, MD; María Teresa Murcia, MD; Ana Perestelo, MD; and Mireia Armengol‐Alsina, MD, contributed to literature research. Berta Serrano, MD; Erika Bonacina, MD; Carlota Rodo, MD, PhD; Pablo Garcia‐Manau, MD; María Ángeles Sanchez‐Duran, MD, PhD; María Pancorbo, MD; Cristina Forcada, MD; María Teresa Murcia, MD; Ana Perestelo, MD; and Mireia Armengol‐Alsina, MD, contributed to data collection and confirmation. Berta Serrano, MD; Erika Bonacina, MD; Pablo Garcia‐Manau, MD; and Manel Mendoza, MD, PhD, contributed to data analysis. Berta Serrano, MD; Erika Bonacina, MD; Pablo Garcia‐Manau, MD; Manel Mendoza, MD, PhD; and Elena Carreras, MD, PhD, contributed to data interpretation. Berta Serrano, MD, Erika Bonacina, MD; and Manel Mendoza, MD, PhD, were in charge of writing the article draft. All authors made substantial revisions to the article. All authors read and approved the final article. Manel Mendoza, MD, PhD, received lecture fees from Roche Diagnostics. The other authors report no conflicts of interest.
Asia‐Inclusive Clinical Research and Development Enabled by Translational Science and Quantitative Clinical Pharmacology: Toward a Culture That Challenges the Status Quo
b2f8bc62-c690-4638-918d-454376cb65ff
10083990
Pharmacology[mh]
When a global drug development program is initiated in the Western Hemisphere, the need for, timing of, and extent/design of Asian phase I ethno‐bridging evaluations should be based on solid scientific rationale. Not all molecules/therapeutics should be treated the same. A science‐driven, case‐by‐case approach that is informed by ICH E5 scientific principles is crucial. For a complete list of ICH E5 factors associated with greater or lower risks for ethnic sensitivity, refer to Appendix D of the guideline document. Later sections of this tutorial will provide an overview of opportunities for translational research and MIDD enablers for the assessment of ethnic sensitivity. The following are some examples of useful considerations to guide the need for and design of standalone Asian phase I investigations ( Figure ): (1) known or expected sources of interethnic variation based on absorption, distribution, metabolism, and elimination (ADME) or PK; (2) evidence for interethnic variation in the pharmacologic target/mechanism of action; (3) a safety profile (e.g., target organs) in the Western population suggestive of an increased risk in Asian populations; (4) a narrow therapeutic index; and (5) clinically meaningful ethnic sensitivity in safety or efficacy in the drug class under consideration. In a recent survey of MRCTs supporting the approval of drugs in Japan between 2007 and 2017, the approaches used to evaluate PK differences in Japanese vs. non‐Japanese populations were examined. Approximately 25% of evaluated Japan‐inclusive MRCTs embedded PK characterization for ethnic sensitivity assessments in the MRCT without standalone phase I PK ethno‐bridging studies. Of note, the cases where dedicated evaluation of PK in the Japanese population was not conducted ahead of initiating a global MRCT were largely those where ethnic sensitivity was expected to be low (e.g., topical or intravenously administered drugs without first‐pass metabolism) and a minority of cases where the indication was a rare disease. Balanced pragmatism and scientific rigor, in the context of proactive regulatory communications, must guide decisions regarding the content and timing of Asian phase I clinical studies ( Figure ). Timing of Asian phase I ethno‐bridging evaluation If the timing of inclusion of Asian populations in global MRCT(s) is intended to be at or following proof‐of‐concept (POC), it should suffice to initiate Asian phase I investigation in parallel after a suitable inflection point is reached (e.g., after completion of multiple‐dose safety and tolerability, pharmacodynamic (PD), and PK/PD characterization in support of the likely phase II dose range). This is depicted in scenario A of Figure . However, if an Asia‐inclusive geographic footprint of phase II is desired because Asia is a region of focus or due to an accelerated global development strategy, an Asian phase I evaluation would need to be conducted earlier. This is depicted in scenario B (for a pivotal phase III) and scenario C (for a pivotal phase II) of Figure . Ethno‐bridging data to support inclusion of Asian populations in an MRCT can be generated in a standalone Asian phase I study or through incorporation of Asian cohorts in the Western first‐in‐human (FIH) study. , For accelerated development programs (e.g., oncology), it may even be possible to consider generating such data in a safety/PK lead‐in phase within the first Asia‐inclusive MRCT. The lead‐in phase would specify a minimum number of patients for intensive PK sampling and close monitoring of safety.
Review of emerging data from the safety lead‐in phase would inform the decision to trigger full expansion of enrollment of patients in the East Asian region at a common dose in the global pivotal study. Design considerations for Asian phase I ethno‐bridging assessments in healthy volunteers When a standalone Asian phase I ethno‐bridging study is part of the strategy and the clinical pharmacology of the molecule can be characterized in healthy volunteers, one important design consideration is whether to perform this study in healthy volunteers enrolled in Asia or to enroll Asian populations outside of Asia as part of ongoing Western clinical development. Traditionally, standalone Japanese phase I studies or Japanese cohorts within Western FIH studies have provided representative East Asian phase I data. Selection of the Japanese population as a representative one is based on typical regulatory expectations by the Pharmaceuticals and Medical Devices Agency (PMDA) for clinical ethno‐bridging data ahead of enrollment in Japan in larger phase II/III trials. Incorporation of representative Asian population(s) as cohort(s) in a healthy volunteer FIH study conducted outside of Asia (e.g., United States) could be more efficient, as this could be achieved without additional clinical trial application filings or supply chain considerations. In contrast, generation of Asian phase I data in an Asian country or countries provides beneficial early experience, which can potentially benefit longer‐term success of Asia‐inclusive development and fast‐to‐registration strategies and provide some degree of trust for larger trials in the region. In addition, generation of phase I data in Asia accounts for the unlikely but potential impact of extrinsic factors. These ethno‐bridging evaluations in Asian populations can also be designed as pan‐Asian studies open to subjects of any major East Asian population through conduct at phase I site(s) with access to such volunteer populations. The value of conducting Pan‐Asian phase I MRCTs engaging a research network of East Asian investigators with expertise in clinical pharmacology and ethno‐bridging science (e.g., Asia Clinical Pharmacology Network) has been discussed. A pan‐Asian ethno‐bridging evaluation (as opposed to only evaluating a single representative Asian population, such as Japanese) has the added advantage of supporting data‐driven positions regarding consistency in PK/PD, safety, and selected dose for the East Asian region at large, thereby more confidently enabling phase II/III MRCT designs, including prospective definition of a pooled East Asian region based on ICH E17 principles. With increasing opportunities for Asia‐inclusive phase I MRCTs and efficiencies in regulatory processes in China, the option of including sites in the region in a global FIH study after completion of dose escalation should be considered. An additional design consideration for Asian phase I studies is the need for multiple‐ or repeat‐dose data. As a base case, it is recommended that single‐dose PK data in healthy subjects over a clinically relevant range of doses (e.g., 3 dose levels informed by dose linearity in the Western population) should suffice for assessment of ethnic sensitivity. Exceptions include cases where overt time‐dependent nonlinearities in PK are noted in the Western multiple ascending dose study and interethnic differences are noted in single‐dose Asian PK data. 
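One way to reason about how small a single-dose Asian PK cohort can be is to look at the precision of the geometric mean exposure ratio it supports. The sketch below is a back-of-the-envelope calculation under an assumed 35% between-subject CV and a normal-quantile approximation; neither value is taken from any specific program.

```python
# Back-of-the-envelope precision of the geometric mean exposure ratio (GMR)
# supported by a small Asian PK cohort against an existing Western reference.
# The 35% between-subject CV and the normal-quantile shortcut are illustrative
# assumptions, not values from any specific program.
import math

def gmr_ci_factor(cv: float, n_asian: int, n_western: int, z: float = 1.645) -> float:
    """Fold-factor f such that the 90% CI is roughly GMR / f to GMR * f."""
    log_var = math.log(1.0 + cv ** 2)                    # log-scale variance implied by the CV
    se = math.sqrt(log_var * (1.0 / n_asian + 1.0 / n_western))
    return math.exp(z * se)

for n in (6, 9, 12, 24):
    f = gmr_ci_factor(cv=0.35, n_asian=n, n_western=24)
    print(f"n_asian = {n:2d}: 90% CI spans roughly GMR x/÷ {f:.2f}")
```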
Additionally, if the expected ethnic sensitivity is in the safety profile following multiple‐dose administration (e.g., when there is potential for ethnic sensitivity in adverse events that may manifest only following multiple dose administration), a multiple‐dose design involving repeat‐dose administration may be needed to adequately characterize repeat‐dose safety and tolerability profile in the Asian population for dose confirmation. Design considerations for ethno‐bridging evaluations in oncology drug development In oncology drug development, it is customary to conduct multidose/multicycle phase I studies in patients with advanced malignancies. Pan‐Asian phase I ethno‐bridging study designs have been executed in patients with hematologic and non‐hematologic malignancies. , , , , A design for this type of assessment within oncology phase I could specify collection of serial PK data in at least one Japanese and one Chinese subject per cohort during escalation and a minimum number of Japanese and Chinese patients (e.g., 6–12) during expansion, as implemented for the antibody‐drug conjugate TAK‐264. Phase I studies in patients with cancer represent one option for generating relevant ethno‐bridging data and may need to be considered for molecules that either cannot be dosed in healthy volunteers at clinically relevant doses or in cases where interpopulation differences in long‐term safety profile following multidose/ multicycle administration are of specific concern. However, it is important to consider the value of healthy volunteer, single‐dose phase I Asian PK assessments for molecules that can be safely administered to healthy volunteers where the questions of focus for the ethno‐bridging assessment are less related to multidose/multicycle safety and tolerability and more related to establishing consistency in PK/PD properties to support common dosage in an Asia‐inclusive MRCT. Single‐dose healthy volunteer studies can be completed far more efficiently than a typical oncology Asian phase I trial. As an example, timely conduct of a single‐dose ethno‐bridging PK study in Japanese healthy volunteers for the anaplastic lymphoma kinase inhibitor brigatinib enabled enrollment of Asian populations in the pivotal phase II ALTA trial and obviated the prior need for a dedicated Japanese phase I study. Although the full benefits of the ethno‐bridging data were not leveraged in this particular case, as neither Japan nor China was included in the ALTA trial, the inclusion of other Asian countries in ALTA (e.g., South Korea, Singapore, and Hong Kong) provided valuable clinical experience, enabling efficient regulatory and global development strategies for Asia. In another example of the development of mobocertinib for non‐small cell lung cancer harboring epidermal growth factor receptor exon 20 insertion mutations, there was a meaningful representation (~15%) of Asian patients in the Western FIH study conducted in the United States, likely explained in part by the higher incidence of epidermal growth factor receptor mutations in non‐small cell lung cancer in Asian populations. PK data on mobocertinib and total pharmacologically active species (molar sum of parent drug and 2 active metabolites with similar potency and plasma free fraction) could thus be evaluated across the dose escalation and expansion phases in the Asian subset in comparison to White subjects, supporting lack of ethnic sensitivity ( Figure ). 
This enabled Asia‐inclusive globalization of the pivotal study of mobocertinib, including mainland China, with informative sparse PK sampling for population PK modeling. ADME characterization and pharmacogenomic studies Qualitative and quantitative understanding of the mechanisms and molecular determinants of ADME of drug candidates is crucial to forecasting risk for interethnic variation in drug exposures. , , This requires comprehensive nonclinical data on expected human clearance mechanisms and relative contributions of specific drug‐metabolizing enzymes and transporters, which expands with emerging human data and includes timely conduct of the human mass balance study. If a potentially important role for ADME‐related proteins that display ethnic variation in pharmacogenetics, expression, or activity is identified (e.g., CYP2C19 (Cytochrome P450 Family 2 Subfamily C Member 19), BCRP (Breast Cancer Resistance Protein), and OATP1B1 (Organic Anion Transporting Polypeptide 1B1)), , the totality of evidence must be considered in the overall assessment of risk for ethnic sensitivity ( Figure ). In some cases, it may be necessary to ensure that the assays for pharmacogenomic variation provide coverage for identification of allelic variants primarily relevant to Asian populations (e.g., UDP‐glucuronosyltransferase ( UGT ) 1A1*6 , CYP2C19*2 , SLCO1B1*15 , ABCG2 c.421C>A ). , , , , For example, one patient of Chinese descent from a Western phase I study experienced an increase in systemic exposures of tivantinib that was associated with grade 4 febrile neutropenia and grade 3 mucositis, which was subsequently linked to the subject being a poor CYP2C19 metabolizer ( CYP2C19*2/*2 ). This finding led to the design of the Japanese phase I study for the same drug to be stratified by CYP2C19 genotype to limit these toxicities, given the more frequent loss of function allele in Asian populations. , Disease biology/target and mechanism of action considerations Knowledge of the global molecular epidemiology of the disease is key to inform risk assessment for interethnic variation.
This includes knowledge of differences between Asian and Western populations in target expression, genetic/epigenetic and functional variation in the target, and associated pathway genes/proteins. For biotherapeutics (e.g., monoclonal antibody‐based therapies, including bispecific constructs and antibody drug conjugates) with potential for target‐mediated drug disposition (TMDD), target expression can be a crucial determinant of clearance. As such, early understanding of the potential population differences in target expression under typical disease conditions can aid in forecasting of the risk for interethnic variation in PK and PD using mechanism‐based quantitative translational frameworks. Data demonstrating lack of apparent differences can be strong justification to forego dedicated Asian phase I evaluation. Planning and initiation of this research are not reliant on molecule‐specific data/information and, as such, can be timed well in advance of candidate selection, as it relies mainly on the knowledge of the target, mechanism of action, and therapeutic hypothesis. Biology and health data are important data sources that can inform disease understanding. In the last decade, Asian governments have significantly expanded resources to establish disease‐focused biobanks and emphasize quality management of biological resources. The national database of hospital‐based cancer registries in Japan is an example of infrastructure created to support evidence‐based cancer care and control. Collaboration with external partners (hospitals, academic institutions, or governments) on biobank generation and database quality can address population variability in critical disease factors (i.e., disease phenotype and factors that contribute to disease incidence, progression, and severity) and predictors of drug response. With the emerging importance of diversity in the gut microbiome, characterizing the impact of dietary factors and regional variations in the microbiome should be considered. , , , , Drug development teams should engage in a robust inquiry around the need for, value of, and strategic objectives of such Asian‐related research as early as possible, given the likely need for organizational investments in external collaborations and development of appropriate analytical methodology and quantitative models to forecast the impact of ethnic variation on underlying biology. In a recent example, during the development of pevonedistat, examination of the mutational landscape in higher‐risk myelodysplastic syndromes/low‐blast acute myeloid leukemia supported conservation in the molecular pathology of the target diseases between Asian and Western populations and across Asian populations (e.g., Korean and Japanese). This information, together with similarity in pevonedistat PK and safety across these populations, contributed to the rationalization of a global Asia‐inclusive phase III trial and consideration of a pooled East Asian region for assessment of consistency in benefit‐risk, applying principles of ICH E17 for MRCTs. Global pharmacoepidemiologic knowledge management Asian and Western populations can vary in disease etiology, severity, prognostic factors, and pathophysiology. Extrinsic factors may also differ between countries (e.g., Japan vs. China). Furthermore, patient treatment plans may be different due to variations in diagnostic methods (clinical and molecular) and the current local standards of care. 
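A simple quantitative screen for the kind of regional variation in baseline disease factors noted above is a two-proportion comparison between the planned East Asian subset and the rest of the trial population; the proportions and sample sizes below are hypothetical and are used only to illustrate the check.

```python
# Quick screen for regional imbalance in a baseline prognostic factor, using
# hypothetical proportions and sample sizes (not data from any cited trial).
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for a difference in proportions with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Hypothetical: 60% advanced/refractory disease in a planned East Asian subset
# (n = 120) vs. 40% in the rest-of-world subset (n = 600).
z = two_proportion_z(0.60, 120, 0.40, 600)
print(f"z = {z:.2f}; a large |z| would argue for stratification by region")
```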
All relevant factors demand inquiry and integration as part of the decision to include Asian populations in global MRCTs. For example, in relapsed/refractory multiple myeloma, significant differences in disease severity have been described in China, with patients presenting with more advanced/refractory disease and differences in prior treatments vs. Western populations. Although the treatment effect of ixazomib, when added to lenalidomide and dexamethasone, was statistically and clinically significant in both populations ( Figure ), the absolute progression‐free survival in the Chinese population (6.7 months with ixazomib/lenalidomide/dexamethasone and 4 months with lenalidomide/dexamethasone) was substantially shorter than that in the global population (20.6 months with ixazomib/lenalidomide/dexamethasone and 14.7 months with lenalidomide/dexamethasone) in the randomized phase III TOURMALINE‐MM1 trial. , Of note, this trial integrated China in a continuation study under a country‐specific protocol extension of the randomized global phase III trial, which allowed robust assessment of the efficacy of ixazomib across both global and Chinese populations without the confounding effects of these differences. However, such imbalances in prognostic factors and clinical outcomes, if not appropriately controlled for (e.g., via stratification by region) or if encountered in single‐arm phase II studies, can compromise interpretation of trial outcomes across regions. This example illustrates the critical importance of epidemiologic knowledge management during the design of MRCTs, considering not only drug‐related but, importantly, also disease‐related intrinsic and extrinsic factors across populations, per ICH E17. Assessments of global epidemiologic conservation and diversity should be conducted early in development and well in advance of MRCT planning, as these considerations can impact POC strategy if access in Asian populations is a key strategic imperative. For example, combination drug development can pose specific challenges if the combination partner (e.g., standard‐of‐care therapy selected for addition to the investigational agent) is not approved or clinically used for the intended indication in all countries. Of course, selection of combination partners should be driven by mechanism of action and the underlying therapeutic hypothesis, but if the available options do not appear feasible for clinical trial conduct and registrational strategies in Asia, these risks will require early acknowledgment and assessment of alternative strategies for clinical development in Asia. Regional variation in standards of non‐pharmacologic components of patient management (e.g., behavioral modification or surgery) and the use of traditional Asian medicines, if not controlled for and considered in the analysis, can introduce imbalance/bias, inflate placebo response, and compromise interpretability and MRCT success. Comparator selection for randomized controlled trials additionally requires careful consideration as the reference treatment may not be conserved in Asia at large or in certain Asian countries. These considerations further emphasize the need for early cross‐functional knowledge management of the global clinical and regulatory landscape.
Population pharmacology models Population pharmacology models are key to quantifying the potential impact of ethnic variation on drug response. Four categories of population pharmacology models, including QSP, PBPK, population PK, and exposure‐response models, are part of the broader MIDD toolbox. QSP models can provide an understanding of the relative contributions of the drug candidate or its metabolites to efficacy and safety at a pathway level, including insight into how receptor variability, signaling heterogeneity, and genetic variability in xenobiotic metabolism and transport can impact outcomes. QSP modeling can provide predictive value in dose selection, the need for alternative dosing, and the probability of demonstrating an outcome under different intrinsic and extrinsic factor conditions that differentiate Asian and Western populations. , , , Additionally, advances in the ability to create mathematical models of tumor immunology and the cancer‐immune cycle have resulted in the development of QSP frameworks that can incorporate multidimensional sources of variation. With emerging knowledge of diversity in the microbiome and implications for response to cancer immunotherapy, , , it is envisioned that QSP models will play an important role in evaluating the impact of regional diversity in the microbiome and immunophenotype on the benefit‐risk profile of immuno‐oncology drug candidates. QSP modeling can also bring value to reverse translation decision making by validating mechanistic hypotheses that require human outcome data. Although QSP modeling is an emerging science, advances in molecular biology, the ability to measure cellular and functional events, and emerging interest by regulatory authorities will likely result in QSP becoming a common tool in Asian study‐related decision making. PBPK models aid in forecasting PK in Asian vs.
Western populations and can be deployed well in advance of clinical data availability to get an early read on the level of risk for ethnic sensitivity in PK. , , , , , These models integrate molecule‐specific information on human clearance mechanisms and population‐specific information, including demographic (e.g., body size), physiologic (e.g., liver weight/blood flow), biochemical (e.g., hepatic abundance of drug‐metabolizing enzymes), and genetic (e.g., frequencies of relevant polymorphisms in ADME genes) characteristics to enable quantitative PK predictions. Population system parameters for Chinese and Japanese populations have been published and integrated into the Simcyp population PBPK simulator, , as has a population model for the Korean population. As data on human PK and clearance mechanisms emerge during clinical development in the Western population, the initial PBPK model can be recalibrated to update PK predictions in Asian populations. Predictions of PK in Asian populations using a PBPK model that has been verified to predict a drug’s PK in non‐Asian populations can be valuable in determining the need for an Asian phase I study and/or in guiding the design and starting dose for the first Asian phase I study. Advances in the ability to quantify drug‐metabolizing enzymes and transporters, leverage “liquid biopsy” approaches to assess ADME variation, and assess the contribution of the gut microbiome to variability in human drug metabolism and disposition should enable continuous improvement in the fidelity of PBPK frameworks for predicting ethnic sensitivity in PK. , , Population PK modeling of data collected in phase I and II studies provides vital knowledge on the sources of variability in clinical PK. The impact of a lower distribution of body weights in Asian populations on drug exposure relative to the Western population can be simulated using allometric principles and can complement predictions from PBPK models. For example, population PK modeling of brigatinib, utilizing data from Japanese healthy volunteers residing in the United States, showed a lack of both ethnic sensitivity and clinically relevant body weight effects, allowing inclusion of some Asian countries into the pivotal trial without a standalone bridging study. In the case of monoclonal antibody‐based therapeutics, when TMDD is evident, knowledge of target expression in the Asian vs. Western populations can provide valuable input for model‐informed assessment of risk for clinically meaningful ethnic sensitivity in PK and dose‐response relationships. If the PK are linear without evidence of TMDD and the allometrically predicted exposure distribution in Asian populations (considering their body weight distribution) does not suggest clinically meaningful differences vs. the Western population, initiation of Asia‐inclusive global clinical development without an Asian phase I trial can be defended with health authorities (e.g., the PMDA). Although body weight in the global adult population is typically not a clinically meaningful source of variability in antibody PK and dosage requirements, observed variability due to differences in body weight between populations could, in some cases, necessitate a dose adjustment. One example is omalizumab, where the dose is adjusted according to immunoglobulin E level and body weight. Characterization of exposure‐safety, exposure‐PD, and exposure‐efficacy (where available) relationships in the Western population informs the investigational agent’s therapeutic window/index.
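The allometric reasoning described above can be sketched with a small simulation: flat dosing, clearance scaled to body weight with a 0.75 exponent against a 70‑kg reference, and two illustrative weight distributions. The distributions, reference clearance, and dose are assumptions for illustration, not parameters of any particular drug.

```python
# Allometric what-if: how much would a lower body-weight distribution alone shift
# exposure under flat dosing? The weight distributions, reference clearance, dose,
# and the 0.75 exponent against a 70-kg reference are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
dose_mg = 100.0
cl_ref = 10.0                                            # L/h for a 70-kg reference subject

def simulate_auc(weights_kg: np.ndarray) -> np.ndarray:
    clearance = cl_ref * (weights_kg / 70.0) ** 0.75     # allometric clearance scaling
    return dose_mg / clearance                           # AUC = dose / CL for a linear drug

western = simulate_auc(rng.normal(80.0, 15.0, 5000).clip(45, 140))
east_asian = simulate_auc(rng.normal(62.0, 10.0, 5000).clip(40, 110))

gmr = np.exp(np.log(east_asian).mean() - np.log(western).mean())
print(f"Geometric mean AUC ratio (East Asian / Western) ~ {gmr:.2f}")
```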
Data on PK differences between Asian and Western populations cannot be used in isolation to infer the level of risk for ethnic sensitivity or to guide dosing decisions. A common global dose may be appropriate, even with modest PK differences if supported by quantitative understanding of the therapeutic exposure window. However, if data‐driven scientific considerations from population pharmacology characterization (i.e., population PK and exposure‐response analyses) indicate that Asian populations require a different dose to maximize benefit‐risk, one should not infer that an Asia‐inclusive pivotal trial should be deferred. Although the regulatory hurdles will be higher, appropriate positioning of an exposure‐matched dosing strategy should be considered, as provided for in the ICH E17 guideline. In the case of simeprevir, development was initiated in Asia due to the high prevalence of hepatitis C virus (HCV). At the time of submission to the FDA, exposure in Asians was 3.4‐fold higher than the overall population in the phase III trials, and was associated with an increased risk of rash and pruritus. , The initial FDA review concluded that patients of East Asian descent may need a reduced simeprevir dose, and the label stated that a dose recommendation could not be made for patients of East Asian ancestry, with a postmarketing requirement to define the appropriate dose for this patient population. , A later phase III trial in China and South Korea showed that mean simeprevir plasma exposure in East Asian subjects with HCV was 2.1‐fold higher vs. non‐Asian subjects with HCV, albeit with a similar safety profile. As a result, the same dose was approved in Asian and Western populations. In another case, an exposure‐matched dosing strategy was discussed during development for the investigational Aurora A kinase inhibitor alisertib, where a lower regional dose for Asian populations was determined to be necessary to preserve the benefit‐risk profile due to a clinically relevant difference in PK between Asian and Western populations. , These two examples illustrate the importance of evaluating ethnic/regional variation in drug exposures to inform recommended dosage and benefit‐risk profile. When differences in exposures are observed, timely regulatory consultations supported by strong, science‐driven positions regarding dosage recommendations for the Asian population are crucial to a successful Asia‐inclusive MRCT design. Disease models Platform models of disease progression dynamics, including model‐based meta‐analyses, can be extremely valuable in quantifying the impact of regional variations in intrinsic or extrinsic factors on disease progression and outcomes (independent of PK or PD differences). For example, longitudinal models of clinical and/or outcome end points that quantify the impact of patient‐specific clinical and demographic factors (e.g., disease stage and age), as well as factors related to medical practice ecosystems (e.g., diagnostic modalities, prior therapies, and standards of care), can forecast the overall risk for ethnic variation in drug response, associated implications for MRCT performance, and the probability of technical and regulatory success. With incorporation of molecular disease‐defining covariates, these models can be valuable in guiding global development of precision medicines. 
The impact of regional variations in the underlying molecular portraits of patients and their relationships to regional variation in available prior therapy and patterns of resistance are seldom quantitatively characterized, although such understanding is germane to the design of MRCTs. However, these problems provide opportunities for iterative forward and reverse translational research, bolstered by the power of population disease modeling and machine learning. Clinical trial simulations from disease models conditioned on the distribution of relevant covariates in Asian populations can provide in silico probabilities of country‐specific drug‐effect differences on primary and key secondary efficacy end points. Simulations from these models can be used to inform decisions regarding global footprints of confirmatory trials and stratification factors. The risks of Asia‐inclusive globalization under various design scenarios can be quantified via simulation to assess the impact of patient heterogeneity on trial success when Asian patients may represent 5% to 30% of the global trial population. This can minimize redundant regional clinical investigation while mitigating the risk of excessive heterogeneity or imbalance. Importantly, the absence of meaningful differences in expected outcomes for the Asian population and between the constituent East Asian populations can provide support to extrapolate Western data to Asia and use data from across East Asian populations for regulatory review and decision making, leveraging ICH E17 principles. Development of longitudinal disease progression models ideally requires access to well‐annotated, large‐scale, patient‐level datasets (e.g., contemporaneous real‐world data and/or individual patient data from clinical trials). In one example, a previously developed equation characterizing the risk of progression of chronic kidney disease to kidney failure was independently validated in the Korean population. With data sharing and trial transparency on the rise in clinical research, there are now multiple complementary avenues to access patient‐level data. For example, control‐arm data in certain oncology indications are accessible via Project Data Sphere and in many chronic diseases via the TransCelerate Historical Trial Data Sharing initiative. Data from competitor trials, where applicable, can be requested under the provisions of the European Medicines Agency’s Policy 70. Nevertheless, given that access to patient‐level datasets with adequate annotation is not trivial, planning for development of disease models requires foresight. Model‐based meta‐analytic approaches can also be applied to trial‐level data from systematically curated literature and other public sources to quantify the effects of race/ethnicity or region of enrollment among other covariates. It is worth noting that, given their established value in enhancing efficiency of POC trial designs and decision making, the drivers for investment in development of platform disease models go beyond their applicability in informing Asia‐inclusive drug development strategies.
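In the spirit of the clinical trial simulations described above, the toy example below simulates an MRCT with roughly 15% East Asian enrollment, a common true treatment effect, and one simple consistency look at how often the regional estimate retains at least half of the overall estimate; all response rates, sample sizes, and the 50% retention criterion are invented for illustration.

```python
# Toy MRCT simulation: a common true treatment effect, ~15% East Asian enrollment,
# and a simple look at regional consistency of the estimated effect. All response
# rates, sample sizes, and the 50% retention criterion are invented for illustration.
import numpy as np

rng = np.random.default_rng(2024)
n_total, frac_asia = 600, 0.15
p_control, p_treated = 0.30, 0.45                        # common true response rates

def simulate_once():
    region_asia = rng.random(n_total) < frac_asia
    treated = rng.random(n_total) < 0.5                  # 1:1 randomization
    p = np.where(treated, p_treated, p_control)
    response = rng.random(n_total) < p

    def effect(mask):
        return response[mask & treated].mean() - response[mask & ~treated].mean()

    return effect(np.ones(n_total, dtype=bool)), effect(region_asia)

overall, asia = map(np.array, zip(*(simulate_once() for _ in range(2000))))
print(f"Pr(East Asian estimate >= 50% of overall estimate) ~ {np.mean(asia >= 0.5 * overall):.2f}")
```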
QSP models can provide an understanding of the relative contributions of the drug candidate or its metabolites to efficacy and safety at a pathway level, including insight into how receptor variability, signaling heterogeneity, and genetic variability in xenobiotic metabolism and transport can impact outcomes. QSP modeling can provide predictive value in dose selection, the need for alternative dosing, and the probability of demonstrating an outcome under different intrinsic and extrinsic factor conditions that differentiate Asian and Western populations. Additionally, advances in the ability to create mathematical models of tumor immunology and the cancer-immune cycle have resulted in the development of QSP frameworks that can incorporate multidimensional sources of variation. With emerging knowledge of diversity in the microbiome and its implications for response to cancer immunotherapy, it is envisioned that QSP models will play an important role in evaluating the impact of regional diversity in the microbiome and immunophenotype on the benefit-risk profile of immuno-oncology drug candidates. QSP modeling can also bring value to reverse translation decision making by validating mechanistic hypotheses that require human outcome data. Although QSP modeling is an emerging science, advances in molecular biology, the ability to measure cellular and functional events, and emerging interest by regulatory authorities will likely result in QSP becoming a common tool in Asian study-related decision making.

PBPK models aid in forecasting PK in Asian vs. Western populations and can be deployed well in advance of clinical data availability to get an early read on the level of risk for ethnic sensitivity in PK. These models integrate molecule-specific information on human clearance mechanisms with population-specific information, including demographic (e.g., body size), physiologic (e.g., liver weight and blood flow), biochemical (e.g., hepatic abundance of drug-metabolizing enzymes), and genetic (e.g., frequencies of relevant polymorphisms in ADME genes) characteristics, to enable quantitative PK predictions. Population system parameters for Chinese and Japanese populations have been published and integrated into the Simcyp population PBPK simulator, as has a population model for the Korean population. As data on human PK and clearance mechanisms emerge during clinical development in the Western population, the initial PBPK model can be recalibrated to update PK predictions in Asian populations. Predictions of PK in Asian populations using a PBPK model that has been verified to predict a drug's PK in non-Asian populations can be valuable in determining the need for an Asian phase I study and/or guiding the design and starting dose for the first Asian phase I study. Advances in the ability to quantify drug-metabolizing enzymes and transporters, leverage "liquid biopsy" approaches to assess ADME variation, and assess the contribution of the gut microbiome to variability in human drug metabolism and disposition should enable continuous improvement in the fidelity of PBPK frameworks for predicting ethnic sensitivity in PK.

Population PK modeling of data collected in phase I and II studies provides vital knowledge on the sources of variability in clinical PK. The impact of a lower distribution of body weights in Asian populations on drug exposure relative to the Western population can be simulated using allometric principles and can complement predictions from PBPK models.
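As an illustration of the allometric reasoning just described, the following minimal sketch simulates how a flat dose translates into exposure (AUC) distributions in two populations that differ in body weight, scaling clearance by (weight/70)^0.75. The dose, typical clearance, body-weight distributions, and variability are hypothetical placeholders rather than values from any specific program, and the one-compartment, linear-PK framing is an assumption made here for illustration.

```python
# Minimal sketch: allometric scaling of clearance to compare exposure (AUC)
# distributions between two populations that differ in body weight.
# All numerical inputs are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)

DOSE_MG = 100.0          # fixed (flat) oral dose
CL_TYPICAL_L_H = 10.0    # typical apparent clearance at 70 kg
ALLOMETRIC_EXP = 0.75    # conventional allometric exponent for clearance

def simulate_auc(weights_kg, n=10_000, bsv_sigma=0.3):
    """AUC = dose / CL, with CL scaled allometrically by body weight and
    approximate log-normal between-subject variability (sigma on log scale)."""
    wt = rng.choice(weights_kg, size=n, replace=True)
    eta = rng.lognormal(mean=0.0, sigma=bsv_sigma, size=n)
    cl = CL_TYPICAL_L_H * (wt / 70.0) ** ALLOMETRIC_EXP * eta
    return DOSE_MG / cl  # mg*h/L

# Hypothetical body-weight samples (kg), for illustration only.
western_wt = rng.normal(80, 15, size=5_000).clip(45, 140)
east_asian_wt = rng.normal(62, 10, size=5_000).clip(40, 110)

auc_west = simulate_auc(western_wt)
auc_east_asian = simulate_auc(east_asian_wt)

gmr = np.exp(np.log(auc_east_asian).mean() - np.log(auc_west).mean())
print(f"Predicted geometric mean AUC ratio (East Asian / Western): {gmr:.2f}")
```

Under these assumed inputs the lighter population has modestly lower clearance and therefore modestly higher exposure at a flat dose; whether such a shift matters depends on the therapeutic exposure window, as discussed below.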
In one clinical example, population PK modeling of brigatinib, utilizing data from Japanese healthy volunteers residing in the United States, showed a lack of both ethnic sensitivity and clinically relevant body weight effects, allowing inclusion of some Asian countries into the pivotal trial without a standalone bridging study. In the case of monoclonal antibody-based therapeutics, when TMDD is evident, knowledge of target expression in the Asian vs. Western populations can provide valuable input for model-informed assessment of risk for clinically meaningful ethnic sensitivity in PK and dose-response relationships. If the PK are linear without evidence of TMDD and the allometrically predicted exposure distribution in Asian populations (considering their body weight distribution) does not suggest clinically meaningful differences vs. the Western population, initiation of Asia-inclusive global clinical development without an Asian phase I trial can be defended with health authorities (e.g., the PMDA). Although body weight in the global adult population is typically not a clinically meaningful source of variability in antibody PK and dosage requirements, observed variability due to differences in body weight between populations could, in some cases, necessitate a dose adjustment. One example is omalizumab, where dose is adjusted by immunoglobulin E level and body weight.

Characterization of exposure-safety, exposure-PD, and exposure-efficacy (where available) relationships in the Western population informs the investigational agent's therapeutic window/index. Data on PK differences between Asian and Western populations cannot be used in isolation to infer the level of risk for ethnic sensitivity or to guide dosing decisions. A common global dose may be appropriate, even with modest PK differences, if supported by quantitative understanding of the therapeutic exposure window. However, if data-driven scientific considerations from population pharmacology characterization (i.e., population PK and exposure-response analyses) indicate that Asian populations require a different dose to maximize benefit-risk, one should not infer that an Asia-inclusive pivotal trial should be deferred. Although the regulatory hurdles will be higher, appropriate positioning of an exposure-matched dosing strategy should be considered, as provided for in the ICH E17 guideline.

In the case of simeprevir, development was initiated in Asia due to the high prevalence of hepatitis C virus (HCV). At the time of submission to the FDA, exposure in Asians was 3.4-fold higher than in the overall population in the phase III trials, and was associated with an increased risk of rash and pruritus. The initial FDA review concluded that patients of East Asian descent may need a reduced simeprevir dose, and the label stated that a dose recommendation could not be made for patients of East Asian ancestry, with a postmarketing requirement to define the appropriate dose for this patient population. A later phase III trial in China and South Korea showed that mean simeprevir plasma exposure in East Asian subjects with HCV was 2.1-fold higher than in non-Asian subjects with HCV, albeit with a similar safety profile. As a result, the same dose was approved in Asian and Western populations.
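The 3.4-fold and 2.1-fold figures cited for simeprevir are, in essence, ratios of geometric mean exposures between populations. The sketch below shows one way such a geometric mean ratio and its 90% confidence interval can be computed on the log scale; the AUC values are simulated placeholders rather than trial data, and the Welch-type interval is simply one reasonable choice, not the method used in the cited analyses.

```python
# Minimal sketch: geometric mean ratio (GMR) of an exposure metric (e.g., AUC)
# between East Asian and non-Asian participants, with a 90% CI on the log scale.
# AUC values below are simulated placeholders, not data from any actual trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

auc_non_asian = rng.lognormal(mean=np.log(1000), sigma=0.45, size=120)
auc_east_asian = rng.lognormal(mean=np.log(2100), sigma=0.45, size=40)

log_diff = np.log(auc_east_asian).mean() - np.log(auc_non_asian).mean()
v_ea = np.log(auc_east_asian).var(ddof=1) / auc_east_asian.size
v_na = np.log(auc_non_asian).var(ddof=1) / auc_non_asian.size
se = np.sqrt(v_ea + v_na)

# Welch-type degrees of freedom for the two-sample comparison on the log scale.
df = (v_ea + v_na) ** 2 / (v_ea**2 / (auc_east_asian.size - 1)
                           + v_na**2 / (auc_non_asian.size - 1))
t_crit = stats.t.ppf(0.95, df)

gmr = np.exp(log_diff)
ci = np.exp([log_diff - t_crit * se, log_diff + t_crit * se])
print(f"GMR (East Asian / non-Asian): {gmr:.2f}, 90% CI {ci[0]:.2f}-{ci[1]:.2f}")
```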
In another case, an exposure-matched dosing strategy was discussed during development for the investigational Aurora A kinase inhibitor alisertib, where a lower regional dose for Asian populations was determined to be necessary to preserve the benefit-risk profile due to a clinically relevant difference in PK between Asian and Western populations. These two examples illustrate the importance of evaluating ethnic/regional variation in drug exposures to inform recommended dosage and benefit-risk profile. When differences in exposures are observed, timely regulatory consultations supported by strong, science-driven positions regarding dosage recommendations for the Asian population are crucial to a successful Asia-inclusive MRCT design.

Disease models

Platform models of disease progression dynamics, including model-based meta-analyses, can be extremely valuable in quantifying the impact of regional variations in intrinsic or extrinsic factors on disease progression and outcomes (independent of PK or PD differences). For example, longitudinal models of clinical and/or outcome end points that quantify the impact of patient-specific clinical and demographic factors (e.g., disease stage and age), as well as factors related to medical practice ecosystems (e.g., diagnostic modalities, prior therapies, and standards of care), can forecast the overall risk for ethnic variation in drug response, associated implications for MRCT performance, and the probability of technical and regulatory success. With incorporation of molecular disease-defining covariates, these models can be valuable in guiding global development of precision medicines.

The impact of regional variations in the underlying molecular portraits of patients and their relationships to regional variation in available prior therapy and patterns of resistance is seldom quantitatively characterized, although such understanding is germane to the design of MRCTs. However, these problems provide opportunities for iterative forward and reverse translational research, bolstered by the power of population disease modeling and machine learning. Clinical trial simulations from disease models conditioned on the distribution of relevant covariates in Asian populations can provide in silico probabilities of country-specific drug-effect differences on primary and key secondary efficacy end points. Simulations from these models can be used to inform decisions regarding global footprints of confirmatory trials and stratification factors. The risks of Asia-inclusive globalization under various design scenarios can be quantified via simulation to assess the impact of patient heterogeneity on trial success when Asian patients may represent 5% to 30% of the global trial population. This can minimize redundant regional clinical investigation while mitigating the risk of excessive heterogeneity or imbalance. Importantly, the absence of meaningful differences in expected outcomes for the Asian population and between the constituent East Asian populations can provide support to extrapolate Western data to Asia and use data from across East Asian populations for regulatory review and decision making, leveraging ICH E17 principles.
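To make the preceding point concrete, the schematic below (not the authors' disease model) simulates a continuous-endpoint MRCT and reports, for several East Asian enrollment fractions in the 5% to 30% range, the overall power and the probability that the observed East Asian effect retains at least half of the overall observed effect, one commonly used consistency-type criterion. The effect sizes, variance, and total sample size are hypothetical.

```python
# Schematic trial simulation: impact of the East Asian enrollment fraction on
# overall power and on a 50%-retention consistency criterion. All inputs are
# hypothetical; this is not the model used in the tutorial.
import numpy as np

rng = np.random.default_rng(2024)

def simulate(n_total=600, frac_ea=0.15, delta_row=0.30, delta_ea=0.30,
             sd=1.0, n_sim=5_000):
    n_ea = int(round(n_total * frac_ea))
    n_row = n_total - n_ea  # rest-of-world patients
    power_hits, consistency_hits = 0, 0
    for _ in range(n_sim):
        def arm_effect(n, delta):
            # 1:1 randomization within the region; difference of arm means.
            trt = rng.normal(delta, sd, n // 2).mean()
            ctl = rng.normal(0.0, sd, n // 2).mean()
            return trt - ctl
        eff_ea = arm_effect(n_ea, delta_ea)
        eff_row = arm_effect(n_row, delta_row)
        eff_all = (n_ea * eff_ea + n_row * eff_row) / n_total
        se_all = sd * np.sqrt(4.0 / n_total)   # approximate SE of overall effect
        if eff_all / se_all > 1.96:            # one-sided ~2.5% significance
            power_hits += 1
            if eff_ea >= 0.5 * eff_all:        # East Asian effect retains >= 50%
                consistency_hits += 1
    return power_hits / n_sim, consistency_hits / max(power_hits, 1)

for frac in (0.05, 0.15, 0.30):
    power, p_consist = simulate(frac_ea=frac)
    print(f"EA fraction {frac:.0%}: overall power {power:.2f}, "
          f"P(EA effect >= 50% of overall | success) {p_consist:.2f}")
```

The same scaffold can be conditioned on region-specific covariate distributions or a shifted regional effect to explore the heterogeneity and imbalance risks described above.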
Development of longitudinal disease progression models ideally requires access to well-annotated, large-scale, patient-level datasets (e.g., contemporaneous real-world data and/or individual patient data from clinical trials). In one example, a previously developed equation characterizing the risk of progression of chronic kidney disease to kidney failure was independently validated in the Korean population. With data sharing and trial transparency on the rise in clinical research, there are now multiple complementary avenues to access patient-level data. For example, control-arm data in certain oncology indications are accessible via Project Data Sphere and in many chronic diseases via the TransCelerate Historical Trial Data Sharing initiative. Data from competitor trials, where applicable, can be requested under the provisions of the European Medicines Agency's Policy 70. Nevertheless, given that access to patient-level datasets with adequate annotation is not trivial, planning for development of disease models requires foresight. Model-based meta-analytic approaches can also be applied to trial-level data from systematically curated literature and other public sources to quantify the effects of race/ethnicity or region of enrollment among other covariates. It is worth noting that, given their established value in enhancing efficiency of POC trial designs and decision making, the drivers for investment in development of platform disease models go beyond their applicability in informing Asia-inclusive drug development strategies.

Holistic approaches to Asia-inclusive development are facilitated by the ICH E17 guideline for MRCT design. This recently finalized regulatory guideline and the evolution in the Asian regulatory landscape (e.g., China regulatory reform) are promoting simultaneous global drug development and near-simultaneous global drug registration. The core principles of the E17 guideline are as follows:

1. Conduct well-designed MRCTs to increase drug development efficiency and support regulatory decision making across regions.
2. Understand relevant intrinsic and extrinsic factor effects early in MRCT design.
3. Allocate sample size by region to verify consistency in treatment effect while allowing feasibility in recruitment and timely trial conduct.
4. Pool prespecified regions based upon similarities in drug- and disease-related intrinsic and extrinsic factors.
5. Use a single primary analysis supported by structured exploration of consistency.
6. Ensure high-quality trial design and conduct.
7. Encourage efficient communication between sponsors and regulatory authorities during MRCT design.

A specific opportunity of direct relevance to clinical pharmacologists is the pooled region concept under ICH E17. Prospectively defining a pooled East Asian region based on scientific rationale will confer advantages in efficiency over a country-specific strategy. In principle, the MRCT would be designed to demonstrate consistency in treatment effect in an adequately sized pooled East Asian region vs. the global clinical trial population. The pooled region would still be designed with a reasonable representation of the constituent populations (e.g., specific countries) as opposed to requiring obligate minimum sample sizes per country to individually demonstrate consistency at the country level. Pooling justification should be based on a prospective and systematic evaluation of similarity in drug- and disease-related intrinsic and extrinsic factors. Figure offers a framework to facilitate cross-functional and cross-regional discussions in drug development teams for synthesizing the required body of scientific evidence to design MRCTs with a pooled East Asian region.
The framework comprises five anchors with the following associated questions. The respective clinical pharmacology enablers for each question are also indicated:

1. Does ethnic sensitivity assessment based on ICH E5 principles support an expected lack of meaningful differences in PK/PD and safety among the subpopulations comprising the pooled East Asian region to support a common dosage?
Enablers: ADME; PBPK models; QSP models; phase I ethno-bridging; population PK models; PK/PD, exposure-safety, and exposure-efficacy relationships to inform the therapeutic index.

2. For the target indication and patient population being investigated in the MRCT, are the subpopulations comprising the pooled East Asian region generally similar with respect to disease epidemiology (e.g., age-adjusted incidence and prognostic factors) and current standards of care, considering currently approved therapies and local compendial guidelines?
Enablers: Systematic literature reviews; model-based meta-analyses; simulations from disease progression models with covariates.

3. For approved and/or investigational treatments (especially those with related mechanisms of action) in the target indication under study, are treatment responses and effective doses generally conserved across the subpopulations comprising the pooled East Asian region? If not, can the observed differences be explained by differences in key prognostic factors?
Enablers: Model-based meta-analyses of completed MRCTs in the target indication designed to evaluate cross-population/cross-region consistency in treatment effects; simulations from disease progression models with covariates.

4. Is the disease pathophysiology conserved across the subpopulations comprising the East Asian region at the clinical phenotype and molecular levels? This assessment should consider factors such as population frequencies of genetic and/or other (e.g., transcriptomic and metabolomic) signatures of disease activity, prognosis, and treatment response (especially to drugs with related mechanisms of action). It should also consider evaluation of similarity in prior treatments that may have implications for treatment response to subsequent lines of therapy (e.g., through disease evolution and resistance mechanisms).
Enablers: Simulations from QSP models to assess the impact of population variability in disease biology/molecular pathology; simulations from disease progression models with molecular pathology covariates.

5. What sample size of the pooled East Asian region would provide an adequate probability of demonstrating consistency in efficacy relative to the global population under the expected range of treatment effects? This should be prospectively defined at the point of trial design (an illustrative calculation is sketched below).
Enablers: Stochastic clinical trial simulations from population exposure-response and/or disease progression models with relevant covariate effects.

Questions 1 to 4, as written, specifically address conservation of the noted features among subpopulations comprising the pooled East Asian region. Equally important for the design of an Asia-inclusive MRCT is the general assessment of conservation in relation to the global clinical trial population. As discussed earlier, the framework shown in Figure was applied to design an Asia-inclusive MRCT for the investigational anticancer agent pevonedistat in patients with higher-risk myelodysplastic syndrome, higher-risk chronic myelomonocytic leukemia, or low-blast acute myeloid leukemia.
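For question 5, one crude starting point before full clinical trial simulation is a normal-approximation calculation of the pooled East Asian sample size needed for an adequate probability of meeting a consistency-type criterion. The sketch below treats the overall estimate as approximately equal to the true effect and ignores the correlation between the regional and overall estimates; the effect size, standard deviation, and retention fraction pi are hypothetical choices for illustration only.

```python
# Back-of-the-envelope sizing of a pooled East Asian region (illustrative only):
# approximate region size so that, with probability >= 80%, the observed East
# Asian effect retains at least a fraction pi of the true effect. Continuous
# endpoint, 1:1 randomization, known common standard deviation assumed.
import math

def n_east_asian(delta=0.30, sigma=1.0, pi=0.5, z_assurance=0.8416):
    """Total East Asian patients; 0.8416 is the standard normal 80% quantile."""
    se_needed = (1.0 - pi) * delta / z_assurance  # largest admissible SE of regional effect
    n = (2.0 * sigma / se_needed) ** 2            # SE = 2*sigma/sqrt(n) under 1:1 allocation
    return math.ceil(n)

for pi in (0.5, 0.7):
    print(f"pi = {pi}: about {n_east_asian(pi=pi)} East Asian patients")
```

Stochastic simulation from exposure-response or disease progression models, as the framework recommends, refines such a first-pass estimate by propagating covariate effects and uncertainty in the treatment effect.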
Such deep interrogation of disease biology, epidemiology, and drug‐related PK/PD and safety properties coupled with a prospectively defined sample size allocation and statistical analysis plan for consistency in treatment effects requires highly integrated inter‐disciplinary effort. It requires quantitative knowledge management of disease‐related and clinical trial data in the target indication. Recent analyses of completed MRCTs across multiple disease areas and drug treatments (schizophrenia, diabetes, chronic obstructive pulmonary disease, bipolar disorders, attention deficit hyperactivity disorder, and benign prostatic hyperplasia) are yielding valuable insights, generally supporting similarity of treatment outcomes between East Asian populations. , , We encourage the conduct and publication of such retrospective analyses of MRCTs, including model‐based analyses applying methods in pharmacometrics to build understanding of population variability in treatment outcomes across diseases and drug classes/mechanisms to inform design of future MRCTs. Recent progressive changes in the global regulatory landscape catalyzed by advances at the intersection of translational, clinical, and regulatory science have enabled a holistic and efficient approach to Asia‐inclusive global drug development. The ICH E17 guideline brings a principled approach, reinforcing opportunities, where appropriate, for a holistic approach to the East Asian region in MRCTs, thereby minimizing the need for redundant clinical investigation at the country and/or regional levels. The principles of clinical pharmacology and translational science are foundational to enabling an Asia‐inclusive development plan informed by risk assessment for ethnic sensitivity. This tutorial outlined the principles for consideration to help guide this cross‐functional process. It is recommended that opportunities for Asia‐inclusive development and a holistic approach to the region at large be considered early in the development life cycle. Many considerations related to the overall assessment of risk for ethnic sensitivity and the tactical plan for characterization of these risks are mechanism‐ and disease‐related. As such, it is possible for the requisite body of translational and epidemiologic research regarding intrinsic and extrinsic factors of relevance to the mechanism and disease area to be initiated even before a development candidate is selected. This can be supplemented with drug‐specific considerations related to ADME mechanisms following candidate selection. A rational, science‐guided approach that considers the region at large, not solely driven by historically precedented, country‐level regulatory expectations, is foundational for success. When the risk for ethnic sensitivity is low and a common global dose is supported, one should consider the opportunity for inclusion of East Asian countries in an MRCT without standalone phase I dose‐finding trials. The success of this approach relies on proactive regulatory communication of underlying scientific rationale with prospective integration of population PK and exposure‐response analyses in the pivotal trial(s) to support the global dose for Asian patients. Forward and reverse translational research bolstered by application of an MIDD toolkit of population, systems pharmacology, and disease progression models to quantify regional/ethnic variation will inform strategies to mitigate risk in Asia‐inclusive MRCTs. 
We trust that systematic integration of these opportunities with a Totality of Evidence mindset and cross‐functional/cross‐regional partnerships will enable more inclusive and efficient global drug development resulting in decreased access lag for Asian populations. This work was funded by Takeda Pharmaceutical Company, Ltd. K.V. and M.R. are former employees of Takeda Development Center Americas, Inc. K.V. is a current employee of EMD Serono Research & Development Institute, Inc. M.R. is employed with the University of Florida as a research professor. All authors who are current or former employees of Takeda (except K.V.) own stock in Takeda Pharmaceutical Company, Ltd. P.S. and T.L. are employees of Certara, Inc. As an Associate Editor of Clinical Pharmacology & Therapeutics , Karthik Venkatakrishnan was not involved in the review or decision process for this paper.
Dyad pedagogy in practical anatomy: A description of the implementation and student perceptions of an adaptive approach to cadaveric teaching
532e280e-e188-41ba-9afb-61f3ab5cb7c2
10084083
Anatomy[mh]
Cadaveric‐based teaching remains a key pedagogue in health science education due to its promotion of professionalism, ethical consciousness, and enhancement of communication skills (Flack & Nicholson, ). Notwithstanding, increasing student numbers, congested medical curricula, trends showing increases in transactional distances, and the strains of the recent Covid‐19 pandemic have resulted in a documented reduction in cadaveric contact time (Drake et al., ; Carmichael, ; Singh et al., ; Stone & Barry, ; Rockarts et al., ). The Covid‐19 pandemic has accelerated a preexisting enthusiasm for anatomy educators to evaluate and challenge traditional approaches to pedagogy as measured by the increasing number of published articles related to anatomy education (Smith & Pawlina, ). Changes to anatomy curricula during the pandemic were emergency driven and many were reactive rather than proactive. Emergency responses saw anatomists adapt their conventional teaching approaches by delivering lecture content online and by adopting new synchronous and asynchronous online strategies to make‐up for lost contact time (Longhurst et al., ). Pather et al.  indicated that such changes resulted in a loss of integrated “hands‐on” experiences that impacted academic workload, student roles, as well as anatomists' personal educational philosophies. Nonetheless, some evaluations of these pandemic‐driven pedagogical changes have been positive such as more time for self‐directed study and an increase in available blended learning resources (Srinivasan, ; Yoo et al., ). To accommodate for social distancing guidelines and to ensure cadaveric participation in the anatomy laboratory was maintained, a transition to dyad pedagogy was implemented at Trinity College Dublin. Dyad pedagogy is a goal‐directed teaching method that arranges students in pairs and can be seen as both interactive and reciprocal in nature helping to accommodate the strengths and weaknesses of each member (Sherman & Márquez, ). Dyad pedagogy has been used in the area of simulation‐based procedural skills training for medical students and surgical residents, and as measured by procedural performance, dyad practice has been shown to be as effective as individual practice (Shanks et al., ; Räder et al., ; Tolsgaard et al., ; Kowalewski et al., ). Working in pairs in this setting has been shown to permit more efficient use of simulators, is more cost‐effective than individual practice, and in the case of laparoscopic cholecystectomy, reduces surgical operating time (Kowalewski et al., ). Notably, dyad training for procedural skills has also been shown to significantly reduce stress and anxiety among students while learning (Abbott et al., ). The dyad approach can be seen as one that promotes collaborative learning and certainly, collaborative practices in health science education are becoming increasingly common, varied and generally well accepted (Pluta et al., ). In clinical settings, students are frequently encouraged to adopt and develop collaborative skills by participating in peer cooperation such as dividing learning tasks among peers, peer monitoring such as observing, and peer tutoring such as researching relevant topics and teaching them to each other (Sevenhuysen et al., ). As a collaborative approach, dyad pedagogy has been explored widely in nursing programs as a method of improving the quality and efficiency of clinical instruction and for creating supportive learning environments (Ruth‐Sahd, ; Austria et al., ; Ott & Succheralli, ). 
In a study by Ott and Succheralli , where student nursing dyads were expected to provide complete care to their assigned patient by functioning as a team, students reported that the dyad system had had a positive impact on their experiences of teamwork and clinical confidence. Other studies showed that working in dyads reduced student anxiety, increased confidence and task efficiency, improved patient outcomes, and helped to instill very early in the education process the importance of teamwork (Ruth‐Sahd, ; Austria et al., ). More recent studies have expanded upon this design by formulating interprofessional learning dyads, that is medical student–nursing student pairs. Preliminary research in the area has shown that working in interprofessional dyads helps medical students gain an awareness of their profession's strengths and weaknesses and can lead them to a more holistic understandings of treatment (Hansen et al., ). But what of the cognitive aspects of dyad pedagogy? Cognitive load theory may help to explain the advantage of using dyads as a process that unites memory and collaborative information processing (Kirschner et al., ; Räder et al., ). Complex tasks, such as memorizing a great amount of anatomical detail during a practical session, risks overloading the learner's working memory. By collaborating with a partner, however, this load is shared and therefore lessened for each individual. The exercise of collaboration itself, however, may increase cognitive load. The product of the two is a cognitive load equilibrium, but this balance may be greatly swayed by other elements such as gender, personality congruence, relationships between dyads, and previous experience (Xue et al., ; Wang et al., ). One way of objectively measuring collaborative behavior has been via interpersonal brain synchronization studies. Sun et al.  for example studied the effect of member experience on dyad cooperation using functional near‐infrared spectroscopy‐based hyperscanning. Student–student dyads and teacher–student dyads were examined. The results revealed that members with differing experiences (teacher‐student dyads) performed better on a joint‐drawing task than those with similar experiences (student–student dyads), and interpersonal brain synchronization of the left frontopolar region was found in teacher–student dyads only. Another study by Xue et al.  which compared interpersonal brain synchronization between highly creative (high) and less creative (low) individuals when solving realistic presented problems, found that dyads consisting of two low‐creativity members could perform just as well as dyads of two high‐creativity members. Moreover, stronger interpersonal brain synchronization between group members was evoked in low–low dyads, suggesting that better cooperation results in enhanced performance. These studies provide valuable insights for real‐world dynamics where people must collaborate effectively. For anatomy, the concept of dyad pedagogy has been examined at Downstate Health Sciences University, New York (Sherman & Márquez, ; Márquez & Sherman, ; Noronha et al., ; Blumenberg et al., ; Blumenberg & Márquez, ). One area in particular has looked at the integration of dyad pedagogy and technology to bolster anatomy learning by having student dyads create video projects in the anatomy laboratory. 
With access to a myriad of learning modalities including textbooks, lecture slides, and the internet, students define muscles and innervations of a particular region of the body and explain issues associated with injury to these structures. The presentation is then video‐recorded using high‐definition recording devices and thereafter posted to the Intranet (a private online network) for their classmates to use during review and study (Noronaha et al., ). The result is an online atlas of anatomy videos with clinical insights that augment the learning of gross anatomy (Blumenbery & Márquez, ). As measured by online activity logs, their value is evidenced by the frequency in which these videos are viewed, specifically when approaching examination periods. About 25% of medical students were found to view the videos five times or more and usage increased substantially in the days before an exam, implying active utilization of the videos as a study tool. In their approaches, dyad pedagogy has been considered a powerful method for acquiring and integrating anatomical knowledge that students can take beyond the classroom and into the workplace. These include problem‐solving skills, research, oral and written presentation, decision‐making, judgment, working collaboratively, and an ability to self‐learn (Sherman, ). With evidence‐based pedagogy now at the forefront of anatomy education (Evans et al., ; Smith & Pawlina, ), evaluating student preference should be considered a constructive tool for designing anatomy curricula (Davis et al., ; Phillips et al., ). Dyad pedagogical approaches may be particularly applicable for anatomy practical sessions that relate to the axial skeleton, due to the unilateral and/or central nature of these regions, that is, one trachea, one heart, one liver, one bladder. By comparison, larger groups may be more appropriate in practical sessions that explore musculoskeletal regions. Dyad pedagogy may however limit experiences of variant anatomy and restrict interactions among peers, both of which are important practices for the future doctor (Sprunger, ; Cullinane & Barry, ). Although dyad pedagogy has been explored in procedural skills, clinical settings, and as a method of optimizing resources in a high‐technology enabled anatomy laboratory, it remains unknown whether this pedagogical method is a suitable substitute to small group learning and whether such an approach is beneficial when supplemented with online blended learning resources. Moreover, the benefits of small group anatomy practical sessions from the student's perspective are unclear. This study describes the process of pivoting to a blended thorax, abdomen, and pelvis anatomy practical session curriculum in response to social distancing guidelines while maintaining adequate cadaveric contact time in the anatomy laboratory via the dyad pedagogical approach. The study sought to quantify medical student opinion of pair learning for cadaveric thorax, abdomen, and pelvis anatomy practical sessions at Trinity College Dublin. The study design relates to level 1 of the Kirkpatrick model for evaluating training programs and therefore assesses the degree to which participants find anatomy practical sessions engaging and relevant and aims to measure students' initial reactions to dyad pedagogy (Kirkpatrick & Kirkpatrick, ). First year medical students voluntarily responded to an anonymized, self‐administered online questionnaire via Qualtrics Survey Tool (Qualtrics, Provo, UT). 
Participants were informed that no personally identifiable information would be associated with their responses and that they may withdraw at any time by closing the web browser. The School of Medicine Research Ethics Committee Trinity College Dublin granted approval for the use of survey data in this study. Approval number 20210206.

Course structure

At Trinity College Dublin, the anatomy curriculum for medical students is delivered over four modules and takes a regional-based approach. Two modules are taken during the first year: musculoskeletal anatomy (September–December), and thorax, abdomen, and pelvis anatomy (January–April). In the second year, a further two modules are taken; anatomy of the head and neck in the fall, followed by neuroanatomy in the spring. Over the two years, approximately 200 hours of anatomy content is delivered with 25% delivered as didactic lectures and 75% delivered as practical sessions in the anatomy laboratory. The anatomy practical sessions are primarily dissection based in which small groups of students dissect a cadaver under the supervision of an anatomy demonstrator. Students are alphabetically assigned to groups by the senior executive officer to the discipline and students remain in these groups for one year. Other teaching resources such as models, osteological specimen, radiological images, and digital learning platforms are also used. In general, 180–200 students are enrolled in the course each year and 182 students entered the medical program in 2020.

Practical teaching and assessment of thorax, abdomen, and pelvis anatomy

Prior to the Covid-19 pandemic

The anatomy laboratory at Trinity College Dublin is comprised of 12 stations with each station occupying a cadaveric dissection table, a dry table for variable learning using models, osteological specimen, and anatomy atlases, one 42-inch display screen, and one 23-inch interactive display screen. Each station is separated using station dividers and the layout is depicted in Figure . Prior to the Covid-19 pandemic, the thorax, abdomen, and pelvis anatomy curriculum at Trinity College Dublin was delivered as a 12-week course comprised of 11 three-hour practical sessions and 20 one-hour didactic lectures. Practical sessions were comprised of eight to ten students per station and involved small subgroup rotations between cadaveric dissection, digital learning, and dry-table learning activities (Figure ). The use of station-based rotations in the anatomy laboratory has previously been reported in the literature and is noted for maximizing student engagement and for providing students with multiple means of representation (Drake, ; Goldina & Barattini, ; Balta et al., ). Cadaveric dissection involved a subgroup of approximately three students participating in dissection of a donor body for one hour using a designated dissection manual that was uploaded to the virtual learning platform and displayed on the 42-inch monitor during the practical session. Assistance and supervision was provided by an anatomy demonstrator. Digital learning involved another subgroup of students engaging in an interactive PowerPoint presentation (Microsoft Corp., Redmond, WA) on a 23-inch display screen. The PowerPoint presentation involved cadaveric and radiological images and asked students to work as a team to identify anatomical structures and relate their knowledge to clinically relevant scenarios (O'Keeffe et al., ).
The final dry-table rotation at the center of the anatomy laboratory involved the use of anatomical models, osteological specimen and atlases to review anatomical content. During this rotation, students have the liberty to use these resources as they so wish. Each of the rotations was weighted at one hour and faculty cover of the anatomy laboratory was one anatomy demonstrator to four stations (approximately 36 students). Furthermore, students had the opportunity to engage in additional self-directed study in the anatomy laboratory outside designated class time.

Student performance of practical anatomy was measured using traditional 'anatomy spot examinations' housed in the anatomy laboratory followed by an end-of-module multiple choice questionnaire. Students completed three in-house 'anatomy spot examinations'. The first two were continuous assessments and completed during weeks four and nine of the module with each assessment accounting for 10% of the final grade. Each of these continuous assessments was comprised of five questions with four parts; parts 1 and 2 asked students to identify anatomical structures tagged on cadaveric material; part 3 asked students to provide information regarding arterial supply, venous drainage, innervation, function, embryological origin, and/or anatomical relations; and part 4 assessed students' ability to apply clinical knowledge. Each part was weighted with one mark summing to a total potential mark of 20. Students were allocated four minutes per question. The third and final anatomy spot examination was held during week 12 and was comprised of ten questions with five parts; parts 1 to 4 followed the same format as the continuous assessments and part 5 ranged from basic identification of anatomical structures to clinically applied anatomy. Each part was once again weighted with one mark summing to a total potential mark of 50 and accounting for 40% of the total grade. Five minutes were allocated per question. Lastly, students completed an end-of-module 50-item multiple choice questionnaire that accounted for 40% of the final grade and was 90 minutes in duration.

During the Covid-19 pandemic

The thorax, abdomen, and pelvis anatomy module 2021 was adjusted in response to social distancing and was divided into three units, thorax (3 weeks), abdomen (4 weeks), and pelvis (1 week), followed by 1 week of self-directed revision. Students were assigned to dyads and one hour per week, for eight consecutive weeks, was spent in the anatomy laboratory (two students per station). For anatomy laboratory layout with social distancing guidelines see Figure . Cadavers were "semi-prosected" by anatomy faculty prior to attendance by students, that is, cadavers were dissected so that all major anatomical landmarks were visible and students were given the opportunity to participate in more detailed dissection to expose smaller structures. After the 1-h practical, a new set of student pairs entered the anatomy laboratory and continued the dissection that had been completed by students prior. Students were able to revisit this regional dissection for the first 5–10 minutes of the following week. Faculty cover of the anatomy laboratory was one anatomy demonstrator to four stations (eight students) with the total number of students in the anatomy laboratory per practical session summating to 24 students.
The lecture delivery, which covers basic anatomy, clinical relations, and embryological development, was presented in the same pre-Covid-19 structure albeit online using a combination of prerecorded and live lectures delivered via the Panopto video hosting platform (Panopto Inc., Seattle, WA) and Collaborate Ultra (Blackboard Inc., New York, NY). To compensate for the reduction of time spent in the anatomy laboratory, pre- and post-practical session learning activities were uploaded to the virtual learning platform Blackboard. The pre-practical session activity included links to "Acland's Video Atlas of Human Anatomy" (Acland, ) and a detailed pre-practical guide that listed the aims and objectives of the practical session as well as a comprehensive list of anatomical structures to be identified during the 1-h practical session. The post-practical session activity substituted the digital learning element that was provided during practical sessions pre-Covid-19 and included a self-test PowerPoint presentation (Microsoft Corp., Redmond, WA) which asked students to label diagrams, review radiological images, and relate their anatomical knowledge to clinically relevant scenarios.

Student performance of practical and theoretical anatomy followed the same pre-Covid-19 format; however, these assessments were transferred to the virtual learning environment. The traditional in-house "spot anatomy examinations" were substituted with cadaveric images and students were assessed on their ability to identify, relate, and clinically evaluate anatomical content. The end-of-module multiple choice questionnaire was also transferred online to the virtual learning environment. The same time allowances were allocated and all online assessments were remotely proctored using Proctorio, a remote proctoring service (Proctorio Inc., Scottsdale, AZ).

Search strategy to identify available instrument

A systematic literature search of PubMed (United States National Library of Medicine, National Institutes of Health, Bethesda, MD) (1988–2022) and Embase® (Elsevier, Inc., New York, NY) (1970–2022) attempted to identify articles relevant to dyad pedagogy and student satisfaction in anatomy. Key words used in the PubMed search were re-executed in Embase®. None of the articles matching the search terminology provided a survey instrument that addressed the specific evaluation needs. A valid and reliable instrument to measure student perceptions of dyad pedagogy in practical anatomy was therefore developed.

The questionnaire

The first part of this questionnaire gathered demographic data including gender, age, previous anatomy and dissection experience, future career interest, and the number of anatomy practical sessions attended by the participant (six items). Using a five-point Likert scale, the second part asked participants to what extent they agreed with a series of statements regarding students' preparedness for practical sessions, feelings of connectedness to faculty and peers, usefulness of accompanying online learning resources, and the extent to which they agreed with the mode of assessment (1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree; 10 items). The third part asked students whether they had seen or missed anatomical structures during the practical sessions as a measure of the amount of detailed dissection achieved (two items; Selçuk et al., ).
Lastly, the final section asked students whether they enjoyed the pair-based system (one item), and a space was provided for participants to express supplementary thoughts and opinions. The questionnaire was modeled on items previously published in anatomy student perception studies (Vasan et al., ; Jeyakumar et al., ). The Cronbach's alpha coefficients for Vasan et al. and Jeyakumar et al. were 0.908 and 0.810, respectively, and the original "Pair-Learning in Practical Anatomy Survey" is available in the File.

Data analysis

Reliability of the questionnaire was assessed using Cronbach's alpha coefficient; values greater than 0.7 were considered acceptable (Peterson, ; Santos, ). Cronbach's alpha is a statistical measure of internal consistency that ranges from 0 to 1. As the statistic approaches 1, a greater degree of internal consistency between items in the Likert scale is indicated, signifying reliability of the instrument. The Kendall's tau-b coefficient was used to assess the validity of the questionnaire. Kendall's tau-b is a nonparametric rank correlation coefficient that measures the strength of the association between sets of paired data. Significant tau-b coefficients indicate construct validity of the questionnaire (Sokal & Rohlf, ).

Descriptive summary statistics (frequencies and percentages) were calculated for basic demographic data. A principal components analysis with varimax rotation was conducted to identify sub-measures within the pair learning in practical anatomy questionnaire. Varimax rotations maximize the sum of the variances within a model and help to clarify relationships among factors; varimax is the most frequently reported rotational method used in published studies (Thompson, ). The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity were used to determine whether the analysis should proceed with exploratory factor analysis. The KMO index ranges from 0 to 1, with KMO > 0.50 considered necessary for factor analysis. Similarly, Bartlett's test of sphericity should be statistically significant (P < 0.05; Williams et al., ). Factors that yielded eigenvalues greater than 1 were retained within the model (Kaiser, ; Taherdoost, ). Items with cross-loadings, that is, loadings of 0.3 or above on two factors, were eliminated. Reliability of the factors was assessed using Cronbach's alpha coefficient (Peterson, ; Santos, ).

Likert-scale responses often depart from the normal distribution; the Mann–Whitney U test was therefore used to compare responses between students with previous cadaveric anatomy experience versus those without, and between students interested in surgical/radiological careers or other specialties. As the Mann–Whitney U test is an ordinal test, medians are recommended as the reported measure of central tendency (Field, ); however, means and standard deviations are also reported here. To evaluate the effect size of any significant differences observed in the Mann–Whitney U test, the correlation coefficient (r) was calculated, with r > 0.10 representing a small effect size; r > 0.3, medium; and r > 0.5, large (Rosenthal, ). Differences in responses between age groups and between genders were examined using the Kruskal–Wallis H test. The Kruskal–Wallis H test, also known as the "One-Way ANOVA on Ranks", is a nonparametric test used to compare two or more independent samples of equal or different size by comparing median values.
It is particularly suitable for ordinal data and for cases where there is a considerable difference in the number of subjects in each comparative group (MacFarland & Yates, ). Dunn post-hoc tests with Bonferroni adjustments were performed for statistically significant Kruskal–Wallis values. The Dunn post-hoc test is a nonparametric pairwise multiple comparisons procedure based on ranked data and is recommended for groups with unequal sample sizes (Elliott & Hynan, ). Significant values (P < 0.05) indicate differences between groups. Bonferroni adjustments correct for multiple comparisons and are recommended to avoid the occurrence of type I errors (Armstrong, ). Effect sizes were calculated using eta-squared (0.01 indicating a small effect, 0.06 a medium effect, and 0.14 a large effect). Statistical significance was set at P < 0.05. The Statistical Package for the Social Sciences (SPSS), version 26 (IBM Corp., Armonk, NY) was used for analysis of quantitative data.

The open-ended qualitative responses were collated, and an inductive content analysis was performed. Inductive content analysis is used in cases where there are no previous studies dealing with the phenomenon (Elo & Kyngäs, ). The two authors independently open coded the responses. This involved the process of reading the text and writing down headings in the margins to describe all aspects of the content (Bernard, ). The independent headings formulated by the authors were thereafter collated from the margins by the lead investigator to form categories, and each category was named using a content-characteristic word. These categories were further refined by constructing subcategories. The identified categories were reviewed by the second author and relevant categories were retained.
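To make the quantitative pipeline concrete, the sketch below reproduces three of the computations described in this section on simulated five-point Likert data (not the study dataset): Cronbach's alpha from its standard formula, a Mann–Whitney U test with the effect size r derived from the normal approximation for U, and a Kruskal–Wallis test with an eta-squared estimate. The grouping variables and item assignments are arbitrary illustrations rather than the study's actual comparisons.

```python
# Illustrative workflow on simulated Likert responses (not the study dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 10-item, five-point Likert responses for 100 students.
items = rng.integers(1, 6, size=(100, 10))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Mann-Whitney U comparing one item between two groups, with effect size
# r = |Z| / sqrt(N) via the normal approximation for U (tie correction omitted).
g1, g2 = items[:60, 0], items[60:, 0]
u, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
n1, n2 = len(g1), len(g2)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}, r = {r:.2f}")

# Kruskal-Wallis across three groups with eta-squared = (H - k + 1) / (n - k).
groups = [items[:30, 1], items[30:70, 1], items[70:, 1]]
h, p_kw = stats.kruskal(*groups)
k, n = len(groups), sum(len(g) for g in groups)
eta_sq = (h - k + 1) / (n - k)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.3f}, eta-squared = {eta_sq:.2f}")
```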
A systematic literature search of PubMed (United States National Library of Medicine, National Institutes of Health, Bethesda, MD) (1988–2022) and Embase (Elsevier, Inc., New York, NY) (1970–2022) attempted to identify articles relevant to dyad pedagogy and student satisfaction in anatomy.
Key words used in the PubMed search were re-executed in Embase. None of the articles matching the search terminology provided a survey instrument that addressed the specific evaluation needs of this study. A valid and reliable instrument to measure student perceptions of dyad pedagogy in practical anatomy was therefore developed. The first part of this questionnaire gathered demographic data including gender, age, previous anatomy and dissection experience, future career interest, and the number of anatomy practical sessions attended by the participant (six items). Using a five-point Likert scale, the second part asked participants to what extent they agreed with a series of statements regarding students' preparedness for practical sessions, feelings of connectedness to faculty and peers, usefulness of accompanying online learning resources, and the extent to which they agreed with the mode of assessment (1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree; 10 items). The third part asked students whether they had seen or missed specific anatomical structures during the practical sessions, as a measure of the amount of detailed dissection achieved (two items; Selçuk et al., ). The final section asked students whether they enjoyed the pair-based system (one item), and a space was provided for participants to express supplementary thoughts and opinions. The questionnaire was modeled on items previously published in anatomy student perception studies (Vasan et al., ; Jeyakumar et al., ). The Cronbach's alpha coefficients for Vasan et al.  and Jeyakumar et al.  were 0.908 and 0.810, respectively, and the original "Pair-Learning in Practical Anatomy Survey" is available in the File. Reliability of the questionnaire was assessed using Cronbach's alpha coefficient; values greater than 0.7 were considered acceptable (Peterson, ; Santos, ). Cronbach's alpha is a statistical measure of internal consistency that ranges from 0 to 1. As the statistic approaches 1, a greater degree of internal consistency between items in the Likert scale is indicated, signifying reliability of the instrument. The Kendall's tau-b coefficient was used to assess the validity of the questionnaire. Kendall's tau-b is a nonparametric rank correlation coefficient that measures the strength of the association between sets of paired data. Significant tau-b coefficients indicate construct validity of the questionnaire (Sokal & Rohlf, ). Descriptive summary statistics (frequencies and percentages) were calculated for basic demographic data. A principal components analysis with varimax rotation was conducted to identify sub-measures within the pair learning in practical anatomy questionnaire. Varimax rotation maximizes the sum of the variances of the squared factor loadings and helps to clarify the relationships among factors. It is the most frequently reported rotational method in published studies (Thompson, ). The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity were used to determine whether the analysis should proceed with exploratory factor analysis. The KMO index ranges from 0 to 1, with KMO > 0.50 considered necessary for factor analysis. Similarly, Bartlett's test of sphericity should be statistically significant ( P < 0.05; Williams et al., ). Factors that yielded eigenvalues greater than 1 were retained within the model (Kaiser, ; Taherdoost, ).
Items with cross-loadings, that is, loadings of 0.3 or above on two factors, were eliminated. Reliability of the factors was assessed using Cronbach's alpha coefficient (Peterson, ; Santos, ). Because Likert-scale responses often depart from the normal distribution, the Mann–Whitney U test was used to compare responses between students with previous cadaveric anatomy experience and those without, and between students interested in surgical/radiological careers and those interested in other specialties. As the Mann–Whitney U test is an ordinal test, medians are recommended as the reported measure of central tendency (Field, ); however, means and standard deviations are also reported here. To evaluate the effect size of any significant differences observed in the Mann–Whitney U test, the correlation coefficient ( r ) was calculated, with r > 0.10 representing a small effect size; r > 0.3, medium; and r > 0.5, large (Rosenthal, ). Differences in responses between age groups and between genders were examined using the Kruskal–Wallis H test. The Kruskal–Wallis H test, also known as the "one-way ANOVA on ranks," is a nonparametric test used to compare two or more independent samples of equal or different size by comparing median values. It is particularly suitable for ordinal data and for situations in which there is a considerable difference in the number of subjects in each comparative group (MacFarland & Yates, ). Dunn post-hoc tests with Bonferroni adjustments were performed for statistically significant Kruskal–Wallis values. The Dunn post-hoc test is a nonparametric pairwise multiple comparisons procedure based on ranked data and is recommended for groups with unequal sample sizes (Elliott & Hynan, ). Significant values ( P < 0.05) indicate differences between groups. Bonferroni adjustments correct for multiple comparisons and are recommended to avoid type I errors (Armstrong, ). Effect sizes were calculated using eta-squared, with eta-squared > 0.01 representing a small effect size; > 0.06, medium; and > 0.14, large. Statistical significance was set at P < 0.05. Statistical Package for the Social Sciences (SPSS), version 26 (IBM Corp., Armonk, NY) was used for analysis of quantitative data. The open-ended qualitative responses were collated, and an inductive content analysis was performed. Inductive content analysis is used in cases where there are no previous studies dealing with the phenomenon (Elo & Kyngäs, ). The two authors independently open-coded the responses. This involved reading the text and writing headings in the margins to describe all aspects of the content (Bernard, ). The independent headings formulated by the authors were then collated from the margins by the lead investigator to form categories, and each category was named using a content-characteristic word. These categories were further refined by constructing subcategories. The identified categories were reviewed by the second author, and relevant categories were retained.

Questionnaire validity and reliability

The Cronbach's alpha value of 0.75 indicated an acceptable correlation coefficient for the cumulative Likert-scale items. The alpha value, which is over the 0.70 threshold, indicates that the instrument concerning student satisfaction with pair learning in practical anatomy is reliable. Significance testing of the Kendall's tau-b statistic showed significant positive associations ( P < 0.01) between all items. Coefficients ranged from 0.216 to 0.561.
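Both statistics can be computed directly from the item-response matrix. The following is a minimal sketch in Python, assuming pandas and SciPy; the Likert responses, item names, and sample size are hypothetical placeholders rather than the study data.

```python
# Minimal sketch of the reliability and validity checks described above,
# using hypothetical Likert responses (rows = respondents, columns = items).
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(93, 10)),
                     columns=[f"item_{i}" for i in range(1, 11)])

def cronbach_alpha(df):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")  # >= 0.70 acceptable

# Kendall's tau-b between every pair of items; SciPy's kendalltau computes
# the tau-b variant, which handles the ties typical of Likert data.
for a in items.columns:
    for b in items.columns:
        if a < b:
            tau, p = kendalltau(items[a], items[b])
            if p < 0.01:
                print(f"{a} vs {b}: tau-b = {tau:.3f}, p = {p:.3f}")
```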
The results demonstrate the validity of the survey instrument (Sokal & Rohlf, ). Total scores ranged from 28 to 49, with higher scores indicating greater satisfaction with pair learning in practical anatomy sessions.

Survey response and cohort characteristics

Ninety-three first year medical students voluntarily participated in the study (51% response rate). Females accounted for 73.1% ( n = 68) of the study population, with males accounting for 25.8% ( n = 24). One participant identified as other (1.1%, n = 1). Eighty-one percent of participants fell within the 17–20-year-old category ( n = 76), 14% in the 21–24-year-old category ( n = 13), and 4.3% were aged 25 to 28 ( n = 4). Approximately 9% of participants had had previous experience with cadaveric anatomy, and all participants (100%) attended more than 50% of the practical sessions in the anatomy laboratory, indicating regular attendance. Approximately half of all participants surveyed indicated that they would be interested in a surgical or radiological career (52.7%). Twenty-six participants (27.95%) contributed to the open-ended responses.

Exploratory factor analysis

A principal component factor analysis with varimax rotation was conducted on the 10 items of the pair learning in practical anatomy questionnaire. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.683, which surpassed the 0.5 threshold, and Bartlett's test of sphericity was statistically significant ( P < 0.001). Exploratory factor analysis was therefore performed. Three factors with eigenvalues greater than 1.0 were identified and found to explain 32%, 15%, and 12% of the variance, respectively. An initial three items were eliminated due to cross-loadings of 0.3 or above. The item "During the pair-based anatomy practicals, I had enough face-to-face time with my demonstrator" had factor loadings between 0.4 and 0.6 on both Factor 1 and Factor 3; "My anatomy lab partner and I worked well together" had factor loadings between 0.5 and 0.7 on both Factor 1 and Factor 2; and "Given the one-hour pair-based anatomy practical sessions, I still feel the practical examination provides a fair assessment" had factor loadings between 0.4 and 0.7 on both Factor 1 and Factor 3. The principal components analysis with varimax rotation of the revised seven items was rerun. Two factors with eigenvalues greater than 1.0 were identified and were determined to explain 50.80% of the variance. An additional item was eliminated due to cross-loadings; the item "I was well prepared for my practical each week" had factor loadings between 0.4 and 0.5 on both Factor 1 and Factor 2. For the final stage, a principal component factor analysis of the remaining six items, using varimax rotation, was conducted. Two factors explained 54.42% of the variance. All items in this analysis had primary loadings over 0.5. Only one item had a cross-loading above 0.3 ("Pair learning helped my understanding of anatomy"); however, this item had a strong primary loading of 0.698 on the second factor. The factor loading matrix for this final solution is presented in Table . Follow-up reliability analysis of the identified factors yielded poor Cronbach's alpha coefficients: 0.552 for Factor 1 (3 items) and 0.410 for Factor 2 (3 items). The six items comprising the two factors were determined to represent "Connectedness and Preparedness" and "Understanding and Online Learning"; however, no follow-up comparison analyses were performed on either construct because of this poor reliability.
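For readers who wish to reproduce a comparable workflow, the steps reported above (sampling adequacy, sphericity testing, principal components extraction with varimax rotation, the eigenvalue-greater-than-one criterion, and cross-loading screening) could be scripted along the following lines. This is an illustrative sketch only, assuming the open-source Python factor_analyzer package and a hypothetical response matrix rather than the study data.

```python
# Illustrative sketch of the exploratory factor analysis workflow described
# above, assuming the open-source factor_analyzer package. The response
# matrix is hypothetical (rows = respondents, columns = the 10 Likert items).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(93, 10)),
                     columns=[f"item_{i}" for i in range(1, 11)])

# Suitability checks: KMO should exceed 0.50 and Bartlett's test should be
# significant (P < 0.05) before proceeding with factor analysis.
chi_square, bartlett_p = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(f"KMO = {kmo_model:.3f}, Bartlett chi-square = {chi_square:.1f}, p = {bartlett_p:.4f}")

# Unrotated principal components extraction to obtain eigenvalues, retaining
# factors with eigenvalues greater than 1 (Kaiser criterion).
fa = FactorAnalyzer(n_factors=items.shape[1], rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Re-extract the retained factors with varimax rotation and flag items that
# cross-load (absolute loadings of 0.3 or above on two or more factors).
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = np.abs(fa.loadings_)  # items x factors
cross_loaders = [col for col, row in zip(items.columns, loadings)
                 if (row >= 0.3).sum() >= 2]
print("Factors retained:", n_factors)
print("Items flagged for cross-loading:", cross_loaders)
```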
Perceptions about connectedness and preparedness in pair-learning given reduced time in the anatomy laboratory

Despite a significant reduction in the time allocated to the anatomy laboratory in comparison to previous years, students appeared to maintain sufficient relationships with their peers and demonstrators. Students agreed that pair learning helped their understanding of anatomy (4.54 ± 0.69) and that they and their partner worked well together (4.48 ± 0.34). Figure  illustrates the mean Likert-scale scores for the questionnaire items. Importantly, students concurred that they had enough time with the donor body at their stations (4.73 ± 0.68), indicating that reduced time with donors does not impact students' ability to build an appropriate professional relationship with their donor. Likewise, reduced contact with the donor body did not appear to impact students' ability to dissect and identify small anatomical structures when skin and fascia had already been dissected. This was examined in the third part of the questionnaire, which asked students whether they had seen or not seen the left anterior descending artery and the major duodenal papilla during practical sessions. More than half of the students (64.5%) reported that they saw the left anterior descending artery on the donor body at their station. Notwithstanding, only 44.1% indicated that they saw the major duodenal papilla. With reduced time in the anatomy laboratory at the forefront of this pair-learning strategy, students were asked to respond to a series of statements concerning preparedness for practical sessions. As indicated in Figure , students reported that both they (3.89 ± 0.70) and their partners (3.89 ± 0.95) were well prepared for the practical sessions each week, supporting the notion that reduced contact time promotes proactive learning. There was also agreement among students that the online practical examination provided a fair assessment (3.75 ± 0.96).

Perceptions about online learning as an alternative to rotational-based practical sessions

There was consensus among students that the pre- and post-practical session learning activities on Blackboard were useful tools for private study and revision (4.62 ± 0.64). However, students reported that they learned better during practical sessions than from online lectures (item 5; 2.11 ± 0.96), which highlights that not all learning material can be effectively substituted online. As shown in Table , students with previous cadaveric anatomy experience (1.38 ± 0.52) rated item 5 significantly lower than those with no previous experience (2.18 ± 0.97), suggesting that students with no prior experience are more open to the idea of substituting in-person anatomy laboratory time with online resources.

Comparisons analysis

The perceptions of students toward pair learning in practical anatomy were compared across career interest and previous cadaveric anatomy experience. Pair-learning satisfaction scores for students with previous cadaveric anatomy experience (median = 44.50; mean = 43.86 ± 1.96) did not differ significantly from those of students with no previous experience (median = 42; mean = 41.53 ± 4.67), U = 244.50, P = 0.19. Likewise, no differences were observed between students interested in surgical/radiological careers (median = 43; mean = 41.71 ± 4.62) and students interested in other medical disciplines (median = 42; mean = 41.75 ± 4.52), U = 1062, P = 0.90.
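These comparisons, and those reported below, rely on standard nonparametric tests and effect-size conversions. The following is a minimal sketch of how they could be computed, assuming SciPy and the scikit-posthocs package; the scores and group sizes are hypothetical, the r conversion uses the normal approximation of U without tie correction, and the eta-squared value follows one common formulation based on the H statistic.

```python
# Minimal sketch of the nonparametric comparisons and effect sizes described
# in the statistical analysis, with hypothetical satisfaction scores.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(2)
prior_exp = rng.integers(35, 50, size=8)   # students with cadaveric experience
no_prior = rng.integers(28, 50, size=85)   # students without

# Mann-Whitney U with an approximate effect size r = |Z| / sqrt(N),
# using the normal approximation of U (ties ignored).
u, p = mannwhitneyu(prior_exp, no_prior, alternative="two-sided")
n1, n2 = len(prior_exp), len(no_prior)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)
print(f"U = {u:.1f}, p = {p:.3f}, r = {r:.3f}")

# Kruskal-Wallis H across age groups, eta-squared = (H - k + 1) / (n - k),
# and Dunn post-hoc tests with Bonferroni adjustment.
groups = {"17-20": rng.integers(28, 50, size=76),
          "21-24": rng.integers(28, 50, size=13),
          "25-28": rng.integers(28, 50, size=4)}
h, p_kw = kruskal(*groups.values())
n, k = sum(len(g) for g in groups.values()), len(groups)
eta_sq = (h - k + 1) / (n - k)
print(f"H = {h:.3f}, p = {p_kw:.3f}, eta-squared = {eta_sq:.3f}")

long = pd.DataFrame([(label, score) for label, g in groups.items() for score in g],
                    columns=["age_group", "score"])
print(sp.posthoc_dunn(long, val_col="score", group_col="age_group",
                      p_adjust="bonferroni"))
```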
The breakdown of responses for students with previous cadaveric anatomy experience versus those with no previous experience is summarized in Table . Although the differences were not statistically significant ( P > 0.05), students with previous cadaveric anatomy experience rated all items higher than those with no previous experience. The exception was item 5, "I learn better from the online lectures than in my pair-based anatomy practical sessions," which those with previous experience rated lower (median = 1, mean = 1.38 ± 0.52) than those with no previous experience (median = 2, mean = 2.18 ± 0.97). This difference was statistically significant and represented a medium effect size ( U = 169, P = 0.01, r = 0.457). The breakdown of responses for individual items for students interested in surgical/radiological careers versus those interested in other disciplines is summarized in Table . No significant differences were observed. A Kruskal–Wallis H test showed that there was no statistically significant difference in pair-learning satisfaction scores between age groups, χ2(2) = 0.577, P = 0.75, with a mean rank satisfaction score of 46.04 for 17 to 20 year olds ( n = 76), 52.08 for 21 to 24 year olds ( n = 13), and 48.75 for 25 to 28 year olds ( n = 4). However, similar to the results found for previous cadaveric anatomy experience, a Kruskal–Wallis H analysis of responses for individual items across age groups indicated a significant difference for item 5, "I learn better from the online lectures than in my pair-based anatomy practical sessions," χ2(2) = 8.502, P = 0.01, with a mean rank score of 50.57 for 17 to 20 year olds, 31.35 for 21 to 24 year olds, and 30.00 for 25 to 28 year olds. Pairwise comparisons using the post-hoc Dunn test with Bonferroni adjustments indicated that this difference was significant between 17 to 20 year olds and 21 to 24 year olds ( P = 0.03). This suggests that the younger students in this sample were more comfortable with learning anatomical information online via prerecorded lectures than in person during anatomy laboratory practical sessions. All other items compared across age groups were nonsignificant ( P > 0.05). There were no significant differences in scores between genders, χ2(2) = 0.168, P = 0.92, with a mean rank score of 45.21 for males, 47.68 for females, and 43.50 for other.

Inductive content analysis of open-ended responses

Twenty-six participants (27.95%) contributed to the open-ended responses. Analysis of these responses revealed two categories: (1) benefits of pair learning, and (2) suggestions for improvement of the pair-learning system (see Table ). Five subcategories were also distinguished. For the category "benefits of pair-learning," students indicated that the pair-based system provided them with greater hands-on experience with the donor body and that the additional space in the anatomy laboratory enabled specific and valuable learning opportunities: "The pairing system gave us some good opportunities to learn more specific things and dissect what we wanted"; "So much hands-on experience that we wouldn't get in bigger groups".
One student cited that "sacrificing the extra two hours" was beneficial in terms of ensuring "quality" preceded "quantity." A suggested area for improvement was more opportunity to experience anatomical variation: "we were only able to use one donor and so couldn't really see any variations." Due to social distancing guidelines imposed by the Covid-19 pandemic, students were requested to stay at their designated donor stations to avoid unnecessary contact with other students, and movement around the anatomy laboratory was facilitated only under the supervision of an anatomy demonstrator. Students cited that they would "prefer to be able to move around" and observe "some anatomical variation between cadavers." Others took the initiative and requested this from their demonstrators: "it was very interesting to be able to walk around the dissection theatre to look at other donor bodies, which I feel would have been more difficult with more people in the room". Another area for improvement was the need for cadavers to be dissected to a more detailed standard: "I wish more of the organs/tissues were dissected for better understanding."
The aim of this study was to examine student perceptions of pair learning as a contemporary hybrid teaching method for thorax, abdomen, and pelvis cadaveric anatomy learning. This work demonstrated that first year medical students are satisfied with short one-hour pair-based anatomy practical sessions supplemented with online pre- and post-practical session learning resources. Although students recognized the merits of more time in the anatomy laboratory, including opportunities for self-directed study and added exposure to anatomical variation, they felt that having two students per station enabled sufficient hands-on time with the donor body and fostered learning opportunities that would not be possible with larger groups. Strong preferences for quality one-on-one time with the donor body, supported by useful online resources, suggest this modality should be a key consideration in course design for anatomy curricula.

Dyad dynamics and factors that influence collaboration

This cohort of students generally expressed a positive view of their dyads' functionality during practical sessions.
Students agreed that both they and their partners worked well together and that the pair-based system helped their understanding of anatomy. Likewise, students indicated that both they and their partner were well prepared for the practical session each week. Although students were requested to stay at their respective stations to minimize social contact, they expressed that they still felt connected to their peers. Peer–peer interactions are known to be greatly influenced by personality and gender congruency. For instance, dyads with congruent levels of extroversion have been shown to interact more frequently (Wang et al., ), and "uncertainty reduction theory" proposes that similarity enhances friendship formation and maintenance, thereby promoting cooperation while reducing stress and anxiety during peer–peer interactions (Basinger et al., ). While characteristics concerning personality type were not examined as part of this study, positive peer experiences may be associated with high frequencies of the conscientious personality type. The conscientious personality type, which is included in the Big Five Personality Traits Model (Goldberg, ), has been shown to be a significant predictor of performance in medical school (Doherty & Nugent, ), and highly conscientious individuals have been shown to produce higher grades in a gross anatomy course (Hintz et al., ). Further investigation into dyad dynamics and personality type as they relate to anatomy may be an interesting area of future inquiry. Although approximately half of the students in the study sample indicated that they would be interested in a surgical/radiological career, this did not appear to influence satisfaction with the dyad system. Such a finding contrasts with that of Jeyakumar et al. , who identified career interest as a positive predictor of regular attendance and participation in dissection, and thus of positive experiences associated with practical sessions. The dyad system can therefore be considered an adaptive approach that encourages active participation in the anatomy laboratory and can accommodate the strengths and weaknesses of each individual in a pair. Consequently, a supportive learning environment is achieved and discrepancies between student opinions of what is an effective use of time are lessened. Of relevance here may be grouping procedures, that is, how dyads were allocated. Methods for assigning student groups have been noted to affect the social structure of a classroom and thus learning. Notably, seminal educational research by O'Reilly and Illenberg  showed that student groupings based on hierarchical characteristics result in lower examination performance and more negative attitudes toward learning than diffuse classrooms (groupings of students with varying age profiles and heterogeneous academic capabilities). The higher the mean test score for any classroom group, the more diffuse the social structure (O'Reilly & Illenberg, ). In the present study, students were assigned to dyads alphabetically, which makes it unlikely that a hierarchical structure based on academic performance was in effect. Nonetheless, based on the high mean satisfaction scores within the study sample and the fact that approximately 50% of students were focused on surgical/radiological careers, we can assume that many dyads were diffuse pairings.
It must be acknowledged that, although a hybrid dyad pedagogical approach with online resources enables students to meet the primary learning objectives, consideration must be given to whether students are missing out on other group dynamics that are so efficiently facilitated by small-group learning (Bay et al., ). Members of dyads with differing experiences and strengths have repeatedly been shown to outperform similarly paired individuals in tasks of creativity and problem solving (Xue et al., ; Sun et al., ). Triads (groups of three persons) have also been shown to cover more content than dyads or students working independently (Spaulding, ). Larger groups therefore contain a greater diversity of members, resulting in a wider pool of experiences and potentially better outcomes. A comparison of examination scores between students in dyads and students in small groups may provide intriguing results and help to formulate a more concrete understanding of the reciprocal and interactive capacity of dyad pedagogy. At Trinity College Dublin, a great emphasis is placed on the human body donation program as a way of fostering healthy and professional relationships between students and their donor bodies. It was important that this teaching philosophy was maintained in an era of such widespread change. Weeks et al.  likened the relationship between student and cadaver to that between a clinician and their patient, and as such it is one that fosters empathy and respect toward donors and future patients. Other writers have recognized that anatomy teaching is moving in a more humanistic direction, with a growing number of medical schools offering commemoration services following the dissecting process (Ferguson et al., ; Pawlina et al., ; Jones et al., ; Jones & King, ). At our institution, cadavers are not anonymized, and students learn the donor's name, partial medical history, and cause and date of death. Identification of donors can provide students with an opportunity to learn about and practice patient confidentiality. A concern with the dyad system among faculty was whether shorter practical sessions enabled this relationship to develop. As evidenced by participant responses, students expressed that one-hour practical sessions were sufficient in terms of providing appropriate and adequate face-to-face time with their respective donors. Likewise, open-ended responses reaffirmed that quality rather than quantity of time with the donor body enabled ample hands-on experience, and this facilitated learning. In line with the humanizing trends that are becoming increasingly evident in contemporary anatomy, the dyad approach is one that fosters rather than hinders this relationship.

Transitioning from dissection to semi-prosection and a view of student anatomical self-efficacy

Cadaveric dissection is regarded by many anatomists as an unrivaled teaching method with benefits that extend far beyond the mere learning of anatomy (Winkelmann, ; Korf et al., ; Hu et al., ). It aligns well with modern medical education trends that promote collaborative work, ethical consciousness, and communication skills and must therefore be considered an opportunity to nurture such graduate attributes (Rizzolo, ; Azer & Eizenberg, ; Sherman, ). Notwithstanding, limited curricular time, a lack of qualified anatomy demonstrators, and extrinsic factors such as the recent Covid-19 pandemic continually pose difficulties for this teaching modality.
Anatomists' perceived benefits of dissection, however, are perhaps outweighed by the preferences of students. This current study found that some students maintain the view that dissection is an ineffective use of limited curricular time, and indeed previous student preference studies have shown that students generally believe prosection to be more efficient (Dinsmore et al., ; Davis et al., ; Dissabandara et al., ; Wisco et al., ). Similarly, data pertaining to examination performance have shown no superiority of dissection over prosection-based curricula (Wilson et al., ; Williams et al., ; Lackey-Cornelison et al., ). This suggests that student preference should continually be considered when designing teaching programs. Student preference must maintain its value in educational research and, moreover, be used appropriately and efficiently to inform educators about what is best suited to students' needs. Students' lack of confidence in performing dissection is also evidenced by this study. Burgoon explains that this phenomenon can be referred to as "low anatomical self-efficacy," that is, an individual perceives within themselves an inability to successfully complete tasks such as dissecting, learning anatomical knowledge, and applying anatomical knowledge to clinical scenarios (Burgoon, ). Qualitative feedback from previous studies has indicated that reinforcing proper dissection techniques should be a priority during initial laboratory sessions to ensure that novice dissectors are equipped with the knowledge and skill to benefit from dissection, thus improving self-efficacy (Jeyakumar et al., ). In this study, no such concerns were raised; it must therefore be inferred that the lack of confidence is instead attributable to time constraints. Students indicated that they were dissatisfied with the semi-prosected format because they "couldn't see many structures" and "had to dissect some structures" themselves, which was "very time consuming" and prevented "better understanding." It is possible that students who were not active in performing dissection did not acknowledge the immersive experience that dissection has to offer and, as a result, were somewhat less engaged and more likely to report negative experiences. Although the dyad approach presented here was executed at the cost of traditional prolonged dissecting time, semi-prosection of cadavers was intended to provide students with the best of both worlds, that is, an opportunity to dissect with the advantage of having the majority of the dissection already completed. Nonetheless, we uphold the view that active dissection should continue to be applied in practical sessions and take strength from the fact that it engages all three domains of learning: cognitive, affective, and psychomotor (Kuyatt & Baker, ; Hadie, ). However, the pressures imposed by social distancing and the additional anxieties that have coexisted for students learning anatomy during this pandemic must also be acknowledged. Students willing to dissect praised the semi-prosected arrangement of cadavers, indicating that it gave them "some good opportunities to learn more specific things and dissect what [they] wanted". Such students took ownership of their own learning, worked collaboratively, and reaped the benefits of an immersive experience, allowing them and their partners to develop the aforementioned competencies, albeit in a highly time-constrained environment. Arguably, these students may represent those interested in surgical or radiological disciplines, as has been noted in previous student perception studies (McWatt et al., ).
Although this 1-h dyad approach was a reactive Covid-19 measure to ensure students had the opportunity to dissect in a dissection-based course, "semi-prosection," as we have termed it here, may be an interesting avenue for future pedagogical research, enabling students to reap the benefits of both dissection and prosection.

Strain on academics and online resources

Applying dyad pedagogy in the laboratory meant that the approach was not reliant on the presence of additional demonstrators during practical sessions. Arguably, the approach can be seen to enable large groups of students to have very small-group experiences with their educators, which may not have otherwise been possible with additional students in the room. Students indicated that they were satisfied with the staff–student ratio and that large groups would have limited their confidence in asking questions. The strategy also enabled students to receive directive and facilitative feedback from demonstrators. "Directive feedback" was used to inform students of their knowledge shortfalls with the aim of enabling students to achieve their desired grades. "Facilitative feedback," such as assisting students in developing their dissection technique or encouraging dyads to work collaboratively, was used to guide students on their professional developmental trajectories (Lachman, ). Demonstrators were positioned to guide and motivate their students both academically and professionally in a way that cannot be facilitated successfully with large student groups. The question remains, however, whether the dyad strategy puts extra strain on academics, demonstrators, and support staff. In terms of time spent teaching in the laboratory, the hours remain the same. Where one anatomy demonstrator was accessible to approximately 36 students over a three-hour period prior to the pandemic, the same demonstrator was now accessible to eight students over a one-hour period. An average of three hours was required by each demonstrator to prepare their teaching materials and complete semi-prosections of their assigned cadavers. However, the work of semi-prosection was shared with technical staff, which alleviated the additional strain placed on demonstrators. Notwithstanding, it must be acknowledged that all demonstrators, regardless of previous experience, must revise for teaching sessions. Through the preparation of semi-prosections, demonstrators were able to revise using cadavers, creating an opportunity to familiarize themselves with the variations and intricacies of each cadaver prior to student attendance in the laboratory. This represents a productive use of time that would otherwise have been spent reviewing atlases and models. This system may be particularly useful for medical demonstrators with surgical career intentions, for whom time to study independently with a cadaver is greatly valued (Willan et al., ). Likewise, the process of prosection may be beneficial for near-peer tutors as a way of providing deeper learning of anatomy through teaching (Evans & Cuffe, ). The use of online resources represents a multimodal approach to learning anatomy. Although the Covid-19 pandemic has accelerated the use of blended approaches to learning (Bao, ; Mukhtar et al., ), in anatomy this is not a new concept.
In their critical review of best teaching practices in anatomy education, Estai and Bunt  state that "no single teaching tool has been found to meet curriculum requirements" and propose that the best way to teach modern anatomy is by combining multiple pedagogical resources to complement one another. Multimodal approaches have received support from other anatomists, particularly those that supplement in-person sessions with online quizzes and activities (Rizzolo et al., ). In this current study, there was strong agreement from students that the pre- and post-practical session activities were useful tools for private study and revision, indicating that a transition to hybrid learning may be beneficial for anatomy. Using a strengths, weaknesses, opportunities, threats (SWOT) analysis, Longhurst et al.  identified "incorporation of blended learning in future curriculum development" as the opportunity most frequently cited by anatomy faculty in the United Kingdom and Republic of Ireland in response to the Covid-19 pandemic. Academics also suggested that the pandemic presented them with an opportunity to develop resources for upcoming years, allowing them to integrate blended learning techniques into their curricula. Other reviews of blended learning approaches in anatomy have reported that such techniques improve not only academic performance but also motivation and attitude, and enhance learning experiences (Liew et al., ; Khalil et al., ). These studies, together with our finding that students perceive online resources as valuable, create a need for anatomists and academics to prioritize time to create high-quality blended learning resources.

Limitations of the study

Limitations of the current study include its cross-sectional nature and a potential straight-lining response bias that can often be associated with agree/disagree questionnaire matrices. The study only assessed student perception, which is a subjective measure and can be swayed by experience. Despite this, anonymity was maintained, which increases the validity of the findings. Anonymity, however, prevented comparisons between perceptions and examination performance, which may have been beneficial in terms of comparing perceptions of group dynamics across student cohorts. Although questions were modeled on previously published studies, the questionnaire did not comprehensively examine all aspects of dyad pedagogy. Such aspects could have been identified by utilizing student focus groups and pilot testing. The factors identified in the questionnaire revealed low Cronbach's alpha values; each of the factors could have been strengthened through revision and rewriting of items with lower primary loadings. Future studies investigating longitudinal changes in student perceptions of dyad pedagogy as students transition into their second preclinical year and thereafter may provide key insights into knowledge retention and the importance of teamwork and collaboration. The cost and time associated with preparing semi-prosections may be a limitation to the implementation of short one-hour pair-based practical sessions across medical schools. Finally, the conclusions of this study reflect a first-year preclinical medical program during an unprecedented worldwide pandemic with high student anxiety and must therefore be evaluated in this context.
Students agreed that both they and their partners worked well together and that the pair‐based system helped their understanding of anatomy. Likewise, students indicated that both they and their partner were well‐prepared for the practical session each week. And although students were requested to stay at their respective stations to minimize social contact, students expressed that they still felt connected to their peers. Peer–peer interactions are known to be greatly influenced by personality and gender congruency. For instance, dyads with congruent levels of extroversion have been shown to interact more frequently (Wang et al., ), and “uncertainty reduction theory” proposes that similarity enhances friendship formation and maintenance therefore promoting cooperation while reducing stress and anxiety during peer–peer interactions (Basinger et al., ). While characteristics concerning personality type were not examined as part of this study, positive peer experiences may be associated with high frequencies of the conscientious personality type. The contentious personality type, which is included in the Big Five Personality Traits Model (Goldberg, ), has been shown to be a significant predictor of performance in medical school (Doherty & Nugent, ), and high conscientious individuals have been shown to produce higher grades in a gross anatomy course (Hintz et al., ). Further investigations into dyad dynamics and personality type as it relates to anatomy may be an interesting area of future inquiry. Although approximately half of students in the study sample indicated that they would be interested in a surgical/radiological career, this did not appear to influence satisfaction with the dyad system. Such a finding is contrasted by Jeyakumar et al.  who identified career interest as a positive predictor of regular attendance and participation in dissection, and thus positive experiences associated with practical sessions. The dyad system can therefore be considered an adaptive approach that encourages active participation in the anatomy laboratory and can accommodate for the strengths and weaknesses of each individual in a pair. Consequently, a supportive learning environment is achieved and discrepancies between student opinions of what is an effective use of time are lessened. Of relevance here may be grouping procedures, that is, how dyads were allocated. Methods for assigning student groups have been noted to affect the social structure of a classroom and thus learning. Notably, seminal educational research by O'Reilly and Illenberg  expressed that student grouping based on hierarchical characteristics result in lower examination performance and more negative attitudes toward learning than diffuse classrooms (groupings of students with varying age profiles and heterogeneous academic capabilities). The higher the mean test score for any classroom group, the more diffuse the social structure (O'Reilly & Illenberg, ). In this present study, students were assigned to dyads alphabetically which makes it unlikely that a hierarchical structure based on academic performance was in effect. Nonetheless, based on high mean satisfaction scores within the study sample and that approximately 50% of students were surgical/radiological career focused, we can assume that many dyads were diffused pairings. 
It must be acknowledged that although a hybrid dyad pedagogical approach with online resources enables students to meet the primary learning objectives, consideration must be given as to whether students are being underprivileged by other group dynamics that are so efficiently facilitated via small group learning (Bay et al., ). Members of dyads with differing experiences and strengths have repeatedly been shown to outperform similarly paired individuals in tasks of creativity and problem solving (Xue et al., ; Sun et al., ). Triads (groups of three persons) have also been shown to cover more content than dyads or students working independently (Spaulding, ). Thus, larger groups have more differing people, resulting in a greater pool of experiences and thus potentially better outcomes. A comparison across examination scores with students in dyads versus small groups may provide intriguing results and help to formulate more concrete understandings of the reciprocal and interactive capacity of dyad pedagogy. At Trinity College Dublin, a great emphasis is placed on the human body donation program as a way of fostering healthy and professional relationships between students and their donor bodies. It was important that this teaching philosophy was maintained in an era of such widespread change. Weeks et al. has alluded to the relationship between student and cadaver as similar to that of the clinician and their patient and as such is one that fosters empathy and respect toward donors and future patients. Other writers have recognized that anatomy teaching is moving in a more humanistic direction with the growth of many medical schools offering commemoration services following the dissecting process (Ferguson et al., ; Pawlina et al., ; Jones et al., ; Jones & King, ). At our institution, cadavers are not anonymized, and students learn of the donor's name, partial medical history, and cause and date of death. Identification of donors can provide students with an opportunity to learn about and practice patient confidentiality. A concern with the dyad system among faculty was whether shorter practical sessions enabled this relationship to develop. As evidenced by participant responses, student expressed that a one‐hour practical sessions were sufficient in terms providing appropriate and adequate face‐to‐face time with their respective donors. Likewise, open‐ended responses reaffirmed that quality time rather than quantity time with the donor body enabled ample hands‐on time and this facilitated learning. In line with humanizing trends that are become increasingly evident in contemporary anatomy, the dyad approach is one that fosters rather than hinders this relationship. Cadaveric dissection is regarded by many anatomists as an unrivaled teaching method with benefits that extend far beyond the mere learning of anatomy (Winkelmann, ; Korf et al., ; Hu et al., ). It aligns well with modern medical education trends that promote collaborative work, ethical consciousness, and communication skills and must therefore be considered as an opportunity to nurture such graduate attributes (Rizzolo, ; Azer & Eizenberg, ; Sherman, ). Notwithstanding, limited curricular time, a lack of qualified anatomy demonstrators, and extrinsic factors such as that of the recent Covid‐19 pandemic are continually posing difficulties for this teaching modality. Anatomists' perceived benefits of dissection, however, are perhaps outweighed by the preferences of students. 
The current study found that some students maintain the view that dissection is an ineffective use of limited curricular time, and indeed previous student preference studies have shown that students generally believe prosection to be more efficient (Dinsmore et al., ; Davis et al., ; Dissabandara et al., ; Wisco et al., ). Similarly, data pertaining to examination performance have shown no superiority for dissection over prosection‐based curricula (Wilson et al., ; Williams et al., ; Lackey‐Cornelison et al., ). This suggests that student preference should continue to be considered when designing teaching programs. Student preference must retain its value in educational research and, moreover, be used appropriately and efficiently to inform educators about what best suits students' needs. Students' lack of confidence in performing dissection is also evidenced by this study. Burgoon explains that this phenomenon can be referred to as "low anatomical self‐efficacy," that is, an individual perceives within themselves an inability to successfully complete tasks such as dissecting, learning anatomical knowledge, and applying anatomical knowledge to clinical scenarios (Burgoon, ). Qualitative feedback from previous studies has reflected that reinforcing proper dissection techniques should be a priority during initial laboratory sessions to ensure that novice dissectors are equipped with the knowledge and skill to benefit from dissection, thus improving self‐efficacy (Jeyakumar et al., ). In this study, no such concerns were raised; it may therefore be inferred that this lack of confidence is instead attributable to time constraints. Students indicated that they were dissatisfied with the semi‐prosected format because they "couldn't see many structures" and "had to dissect some structures" themselves, which was "very time consuming" and prevented "better understanding." It is possible that students who were not active in performing dissection did not acknowledge the immersive experience that dissection has to offer and, as a result, were somewhat less engaged and more likely to report negative experiences. Although the dyad approach presented here was executed at the cost of traditional prolonged dissecting time, semi‐prosection of cadavers was intended to provide students with the best of both worlds: an opportunity to dissect, with the advantage of having the majority of the dissection readily completed. Nonetheless, we uphold the view that active dissection should continue to be applied in practical sessions and take strength from the fact that it engages all three domains of learning: cognitive, affective, and psychomotor (Kuyatt & Baker, ; Hadie, ). However, the pressures imposed by social distancing and the additional anxieties that have coexisted for students learning anatomy during this pandemic must also be acknowledged. Students willing to dissect praised the semi‐prosected arrangement of cadavers, indicating that it gave them "some good opportunities to learn more specific things and dissect what [they] wanted". Such students took authorship of their own learning, worked collaboratively, and reaped the benefits of an immersive experience, allowing them and their partners to develop the aforementioned competencies, albeit in a highly time‐constrained environment. Arguably, these students may represent those interested in surgical or radiological disciplines, as has been noted in previous student perception studies (McWatt et al., ).
Although this 1‐h dyad approach was a reactive Covid‐19 measure to ensure students had the opportunity to dissect in a dissection‐based course, 'semi‐prosection,' as we have termed it here, may be an interesting avenue for future pedagogical research, enabling students to reap the benefits of both dissection and prosection. Applying dyad pedagogy in the laboratory meant that the approach was not reliant on the presence of additional demonstrators during practical sessions. Arguably, the approach can be seen to enable large groups of students to have very small‐group experiences with their educators, which may not have otherwise been possible with additional students in the room. Students indicated that they were satisfied with the staff–student ratio and that large groups would have limited their confidence in asking questions. The strategy also enabled students to receive directive and facilitative feedback from demonstrators. "Directive feedback" was used to inform students of their knowledge shortfalls with the aim of enabling students to achieve their desired grades. "Facilitative feedback," such as assisting students in developing their dissection technique or encouraging dyads to work collaboratively, was used to guide students on their professional developmental trajectories (Lachman, ). Demonstrators were positioned to guide and motivate their students both academically and professionally in a way that cannot be facilitated successfully with large student groups. The question remains, however, whether the dyad strategy puts extra strain on academics, demonstrators, and support staff. In terms of time spent teaching in the laboratory, the hours remain the same. Where one anatomy demonstrator was accessible to approximately 36 students over a three‐hour period prior to the pandemic, the same demonstrator was now accessible to eight students over a one‐hour period. An average of three hours was required by each demonstrator to prepare their teaching materials and complete semi‐prosections of their assigned cadavers. However, the weight of semi‐prosection was shared with technical staff, which alleviated the additional strain placed on demonstrators. Notwithstanding, it must be acknowledged that all demonstrators, regardless of previous experience, must revise for teaching sessions. Through the preparation of semi‐prosections, demonstrators were able to revise using cadavers, creating an opportunity to familiarize themselves with the variations and intricacies of each cadaver prior to student attendance in the laboratory. This represents a productive use of time that would otherwise have been spent reviewing atlases and models. This system may be particularly useful for medical demonstrators with surgical career intentions, for whom time to study independently with a cadaver is greatly valued (Willan et al., ). Likewise, the process of prosection may be beneficial for near‐peer tutors as a way of providing deeper learning of anatomy through teaching (Evans & Cuffe, ).
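For concreteness, the staffing figures quoted above can be restated as a back-of-the-envelope calculation. The sketch below is illustrative only and assumes that demonstrator attention is divided evenly among the students present; it uses the 36-student, three-hour and eight-student, one-hour figures reported in this section.

```python
# Back-of-the-envelope comparison of demonstrator availability per student,
# using the figures quoted above and assuming attention is shared evenly.
pre_pandemic = {"students": 36, "session_hours": 3}
dyad_format = {"students": 8, "session_hours": 1}


def minutes_per_student(cfg: dict) -> float:
    return cfg["session_hours"] * 60 / cfg["students"]


print(f"Pre-pandemic: {minutes_per_student(pre_pandemic):.1f} min per student")  # 5.0
print(f"Dyad format:  {minutes_per_student(dyad_format):.1f} min per student")   # 7.5
```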
The use of online resources represents a multimodal approach to learning anatomy. Although the Covid‐19 pandemic has accelerated the use of blended approaches to learning (Bao, ; Mukhtar et al., ), in anatomy this is not a new concept. In their critical review on best teaching practices in anatomy education, Estai and Bunt state that "no single teaching tool has been found to meet curriculum requirements" and propose that the best way to teach modern anatomy is by combining multiple pedagogical resources to complement one another. Multimodal approaches have received support from other anatomists, particularly those that supplement in‐person sessions with online quizzes and activities (Rizzolo et al., ). In the current study, there was strong agreement among students that the pre‐ and post‐practical session activities were useful tools for private study and revision, indicating that a transition to hybrid learning may be beneficial for anatomy. Using a strength, weakness, opportunity, threat (SWOT) analysis, Longhurst et al. identified "incorporation of blended learning in future curriculum development" as the most frequently cited opportunity by anatomy faculty in the United Kingdom and Republic of Ireland in response to the Covid‐19 pandemic. Academics also suggested that the pandemic presented them with an opportunity to develop resources for upcoming years, allowing them to integrate blended learning techniques into their curricula. Other reviews of blended learning approaches in anatomy have reported that such techniques improve not only academic performance but also motivation and attitude, and enhance learning experiences (Liew et al., ; Khalil et al., ). These studies, together with our finding that students perceive online resources as valuable, create a need for anatomists and academics to prioritize time to create high‐quality blended learning resources. Limitations of the current study include its cross‐sectional nature and a potential straight‐lining response bias that can often be associated with agree/disagree questionnaire matrices. The study only assessed student perception, which is a subjective measure and can be swayed by experience. Despite this, anonymity was maintained, which increases the validity of the findings. Anonymity, however, prevented comparisons between perceptions and examination performance, which may have been beneficial in terms of comparing perceptions of group dynamics across student cohorts. Although questions were modeled on previously published studies, the questionnaire did not comprehensively examine all aspects of dyad pedagogy. Such aspects could have been identified by utilizing student focus groups and pilot testing. Factors identified in the questionnaire showed low Cronbach's alpha values. Each of the factors could have been strengthened through revision and rewriting of items with lower primary loadings. Future studies investigating longitudinal changes in student perception toward dyad pedagogy as students transition into their second preclinical year and thereafter may provide key insights into knowledge retention and the importance of teamwork and collaboration. The cost and time associated with preparing semi‐prosections may be a limitation to the implementation of short one‐hour pair‐based practical sessions across medical schools. Finally, the conclusions of this study reflect a first‐year preclinical medical program during an unprecedented worldwide pandemic with high student anxiety and thus must be evaluated in this context.
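Because low Cronbach's alpha values are reported above as a limitation of the questionnaire factors, a brief sketch of how Cronbach's alpha is conventionally computed from an item-response matrix may be useful. This is not the study's analysis code, and the Likert responses shown are invented purely for illustration.

```python
# Illustrative computation of Cronbach's alpha for a respondents x items matrix.
# The Likert responses below are invented; this is not the study's analysis code.
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


demo = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(demo), 2))
```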
In summary, this article examines student perceptions of short one‐hour pair‐based anatomy practical sessions supplemented with hybrid online learning resources as they relate to thorax, abdomen, and pelvis anatomy. Data from this study indicate that students rate online pre‐ and post‐practical session learning resources as valuable and are generally satisfied with pair learning as a pedagogical method in the anatomy laboratory. The observed trend in anatomical education is a consistent increase in transactional distance between students and their educators. This issue may be ameliorated by pair‐learning strategies, which allow a large group of students to have small‐group experiences with their educators. Together with the ongoing climate of diminishing time devoted to anatomy, these results highlight the indispensability of student perception and the importance of evidence‐based pedagogy.
Microsecretory adenocarcinoma of the external ear canal
2139dfb2-8438-47e0-92d8-f07ff6915bbb
10084110
Anatomy[mh]
INTRODUCTION Microsecretory adenocarcinoma (MSA) is a recently described salivary gland tumor characterized by a unique set of histomorphologic and immunohistochemical features and a recurrent MEF2C::SS18 translocation, which was first described in 2019. Since its description, 24 definitive cases of MSA have been reported. , , MSA most often arises as a painless mass in the oral cavity, especially within the palate and buccal mucosa, with only one reported extraoral case arising in the parotid gland. Tumors show a characteristic histomorphology and are well‐circumscribed with subtle infiltration of the surrounding tissue. They are composed of intercalated duct‐like cells with regular, hyperchromatic ovoid nuclei. Tumors grow in a microcystic tubular pattern with abundant basophilic luminal secretions and a variably cellular fibromyxoid stroma. Immunohistochemical staining of these tumors reliably shows expression of S100 protein, p63, and SOX10 with lack of expression of p40, calponin, and mammaglobin and variable smooth muscle actin (SMA) expression. , , , Here, we report a case of MSA of the external ear canal in an 89‐year‐old woman, which showed characteristic histopathologic and immunohistochemical findings along with an SS18 rearrangement but arose in a unique extraoral location. 1.1 Case report An 89‐year‐old woman with medical history significant for prior lumpectomy for unspecified breast cancer presented for treatment of benign paroxysmal positional vertigo. She was noted to have a painless mass of the right external ear canal. Further examination showed an obstructing, friable lesion causing cerumen impaction. The patient underwent excisional biopsy of the mass, and her short‐term postoperative course was uneventful. No long‐term follow‐up data are available at this time. Slides as well as a formalin‐fixed paraffin‐embedded tissue block from the patient's excisional biopsy were sent to our institution for consultation. On H&E staining, the specimen was composed of several fragments of skin showing acanthosis and pseudohorn cysts, reminiscent of a seborrheic keratosis (Figure ). Distinct from the overlying epithelial process, the underlying soft tissue showed diffuse involvement by an infiltrative lesion composed of tubules and cords with bland cells having a small to moderate amount of eosinophilic cytoplasm and uniform, small round to oval nuclei (Figure ). Atypia and mitotic activity were absent. The tubules contained prominent pale basophilic intraluminal secretions and were embedded in a paucicellular fibromyxoid stroma. Tumor necrosis, lymphovascular space invasion, and perineural invasion were not seen. The histomorphologic findings raised the differential of a salivary gland‐type neoplasm and included ceruminous carcinoma, secretory carcinoma, polymorphous adenocarcinoma, mucoepidermoid carcinoma, and MSA. Immunohistochemical staining was performed to help differentiate amongst these diagnoses. The infiltrating cells showed strong, diffuse expression of pan‐keratin (Leica BOND III, clone CAM5.2 [Becton and Dickinson]; clone OSCAR [BioLegend]; clone K902 [ENZO]; clone MNF116 [Agilent]; clone AE1/AE3 [Agilent]) and S100 protein (Leica BOND III, clone EP32 [Leica]) (Figure ). Expression of p63 and TLE1 (both nuclear and cytoplasmic) were also seen (Leica BOND III, clones A4A [Biocare] and F4 [Santa Cruz], respectively) (Figure ). 
The infiltrating cells completely lacked expression of p40 (Leica BOND III, clone BC28 [Leica]), mammaglobin and pan‐TRK (Leica BOND III, clones 31A5 [Cell Marque]; Ventana Benchmark ULTRA, clone EPR17341 [Ventana], respectively), as well as synaptophysin and chromogranin (Leica BOND III, clones 27G12 [Leica] and 5H7 [Leica], respectively) (Figure ). Break‐apart fluorescent in situ hybridization (FISH) was performed for MAML2 (Empire Genomics Dual Color Break‐apart DNA probe) and SS18 genes (Vysis LSI SYT Dual Color Break‐apart DNA probe) using a count of at least 100 cells and a threshold of 10% disruption for a diagnosis of gene rearrangement. FISH showed an intact MAML2 gene (100% intact); however, SS18 rearrangement was identified (60% disrupted), confirming a diagnosis of MSA (Figure ). At the time of writing, the tissue block was unavailable for further molecular evaluation of the tumor.
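As an illustration of the break-apart FISH scoring rule described above (a count of at least 100 nuclei, with rearrangement called at 10% or more disrupted signals), a minimal sketch follows. It is not the laboratory's scoring software; the function name and the absolute counts passed in are assumptions used only to restate the reported percentages.

```python
# Illustrative sketch of break-apart FISH scoring (not clinical software).
# A gene is called rearranged when the fraction of nuclei showing split signals
# meets or exceeds the threshold, provided enough nuclei were counted.

def call_rearrangement(disrupted: int, counted: int,
                       min_count: int = 100, threshold: float = 0.10) -> bool:
    if counted < min_count:
        raise ValueError(f"only {counted} nuclei counted; at least {min_count} required")
    return disrupted / counted >= threshold


# Restating the reported percentages with assumed absolute counts of 100 nuclei:
print(call_rearrangement(disrupted=0, counted=100))    # MAML2, 100% intact  -> False
print(call_rearrangement(disrupted=60, counted=100))   # SS18, 60% disrupted -> True
```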
DISCUSSION MSA is a recently described salivary gland carcinoma characterized by MEF2C::SS18 translocation and a distinct set of histomorphologic and immunohistochemical features (summarized in Tables and ), which was first reported in 2019. These tumors usually arise in the oral cavity, with one reported extraoral case in the parotid gland. Clinically, these lesions typically present as painless masses. , , While prognostic information regarding this entity is limited, the current literature reports this low‐grade carcinoma is sometimes associated with local tissue infiltration but has no reported cases of metastasis or recurrence following surgical resection. Macroscopically, MSA is well‐circumscribed but unencapsulated and can show infiltration of the surrounding soft tissue microscopically. , , , Perineural invasion has been reported in a single case, but lymphovascular space invasion and tumor necrosis have not been documented. On histomorphology, the tumor is characterized by bland intercalated duct‐like cells that form anastomosing microcysts, tubules, and cords with rare single cells and abundant intraluminal basophilic secretions, embedded in a variably cellular fibrous to myxohyaline stroma which can show central sclerosis. , , The epithelial lining cells are predominantly attenuated, but occasionally plump, with eosinophilic to clear cytoplasm, monomorphic hyperchromatic ovoid nuclei, and indistinct nucleoli. , , , Immunohistochemically, MSAs characteristically express S100 protein, SOX10, and p63 and lack expression of p40, mammaglobin, and calponin, with variable expression of SMA. , , , MSA also demonstrates a unique gene fusion between MEF2C and SS18 , which was originally identified via RNA sequencing. , , , , Recent studies have also proven the utility of SS18 break‐apart FISH, , the modality used for diagnosis of the present case. SS18 rearrangement by FISH is most often seen in synovial sarcoma, characterized by SS18::SSX fusions and expression of nuclear transducin‐like enhancer of split (TLE1) by immunohistochemistry. , , , , , , Interestingly, the present case of MSA also showed nuclear TLE1 positivity. In non‐neoplastic tissue, TLE1 functions as a transcriptional corepressor involved in hematopoietic, neuronal, and terminal epithelial cellular differentiation. , TLE1 plays a role in Notch, NF‐κB, and Wnt/β‐catenin signaling pathways, the latter of which has been implicated in synovial sarcoma along with TLE1 overexpression. , Numerous studies have shown TLE1 immunohistochemistry is a sensitive but not entirely specific diagnostic biomarker of synovial sarcoma, as TLE1 expression has also been reported in schwannoma, neurofibroma, malignant peripheral nerve sheath tumors, spindle cell melanoma, and other cutaneous malignancies. , , , , , , TLE1 positivity has not been previously reported in MSA and raises questions about the molecular and diagnostic implications of this marker.
However, further investigation and clarification are needed, as expression of TLE1 by immunohistochemistry is not entirely specific for SS18 ‐rearranged neoplasms. Here, we present a case of MSA with characteristic histomorphologic, immunohistochemical, and cytogenetic findings arising in a unique extraoral location and setting. As with several prior reported cases, this case presented as a painless mass that was incidentally identified during routine care for an unrelated reason. , , In addition, a single prior case of MSA is reported to have arisen in the parotid gland, with all remaining cases arising within the oral cavity , ; occasional cases have been reported in association with pseudoepitheliomatous hyperplasia of the overlying oral squamous epithelium. In summary, the histomorphologic, immunohistochemical, and cytogenetic findings confirmed a diagnosis of MSA arising in a unique extraoral location. Dr Megan E. Dibbern and Dr Edward B. Stelow each made substantial contributions to conception/design of this project, and all listed authors made substantial contributions to the acquisition/interpretation of data for this manuscript. Dr Megan E. Dibbern and Dr Edward B. Stelow were involved in drafting the manuscript; all listed authors have been involved in revising this manuscript critically for intellectual content and have given final approval of the submitted manuscript for publication. All authors agree to be accountable for all aspects of the work and in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The authors declare no conflict of interest.
A
186a1609-f845-4ca2-bd7d-3f54bb62ce35
10084189
Anatomy[mh]
INTRODUCTION The respiratory system of modern birds, consisting of a pair of small, rigid lungs connected to an elaborate system of air sacs that pervade the body, has been described in detail in a number of living taxa (Baer, ; Bezuidenhout et al., ; Campana, ; Duncker, ; Hogg, ; King & Kelly, ; Müller, ; O'Connor, ) (Figure ). This system of air sacs is organized hierarchically, with large, regional sacs branching into smaller diverticula, which in turn divide into smaller, often‐anastomosing, units. The air sac system permeates the entire body to a greater or lesser degree, invading bones, spaces between muscles, and spaces under the skin. The vertebrae of birds are pneumatized by diverticula of the cervical and thoracoabdominal air sacs, as well as diverticula that emanate directly from the lung. In general, diverticula of the cervical air sacs pneumatize the cervical and anterior dorsal vertebrae, diverticula of the lungs pneumatize the mid‐dorsals, and diverticula of the abdominal air sacs pneumatize the posterior dorsal vertebrae, synsacrum, and caudal vertebrae (Bezuidenhout et al., ; Cover, ; Hogg, ; King, ; Müller, ; O'Connor, ; O'Connor & Claessens, ). These diverticula follow nerves and blood vessels as they spread along the vertebral column. The main cervical diverticula, the canali intertransversarii (intertransverse canals), follow the vertebral arteries and pass through the transverse foramina of the cervical vertebrae (Müller, ). Other diverticula have been observed on the anterodorsal surface of the vertebrae, forming supravertebral diverticula (Cover, ). In some instances, these connect with the intertransverse diverticula, via anterior diverticular prolongations (Cover, ). Branches of the intertransverse diverticula may extend medially into the intervertebral foramina, where they contact the spinal cord and enter the neural canal, forming structures that have been called supramedullary diverticula (Müller, ; O'Connor, ). Occasionally, these diverticula also merge with the supravertebral diverticula. This elaborate and highly variable network of anastomosing diverticula around the outside of the vertebra and inside of the vertebral canal may even extend inside the bone of the vertebral body and arch. However, supramedullary diverticula are poorly understood, and have not been the subject of much previous study. Several other authors have described and illustrated these diverticula in some extant birds, or provided osteological evidence of their presence in fossil taxa. Here we review these prior descriptions and detail the various terms that have been used to refer to these pneumatic diverticula inside the neural canal or otherwise in close contact with the spinal cord (Table ). However, hereafter and throughout this paper we refer to any diverticula in contact with the spinal cord as “paramedullary diverticula,” for reasons discussed in greater detail below. Where prior authors have used other terms for these structures, those terms are noted. In addition to a literature review, we present the first phylogenetically broad, detailed study of paramedullary diverticula in extant Aves (Figure ). We have identified four methods for investigating paramedullary diverticula. Each is detailed below. 1.1 Gross anatomical dissection In many taxa, especially larger‐bodied ones such as turkeys and the ratites, paramedullary diverticula are visible in gross dissection. 
This is particularly true when the vertebrae are disarticulated from each other or transversely sectioned (see Figure ) although we have also observed paramedullary diverticula in articulated vertebrae by dissecting into the space between the zygapophyses of adjacent vertebrae from a dorsal approach. 1.2 Physical endocasts By far, the most‐used method of exploring and documenting the form and extent of the diverticula is to create physical endocasts by injecting all or part of the respiratory system with a casting material, typically latex or resin (e.g., Bezuidenhout et al., ; Campana, ; Cover, ; Müller, ; O'Connor, ; Stanislaus, ). This method allows for very fine diverticula to be preserved and studied, but in long, blind‐ended diverticula such as the paramedullary diverticula, it can be difficult to achieve complete filling of the diverticular network. Furthermore, there is typically no distinct sign of incomplete filling—an incompletely filled diverticulum is simply absent from the cast, the same as a diverticulum that does not typically exist in the taxon under study, or which has not yet developed. Still, physical endocasts remain relatively straightforward and inexpensive to produce. 1.3 CTs and digital endocasts Because paramedullary diverticula are full of air and therefore completely radio‐lucent, they show up quite well in computed tomography (CT) images (Figure ). To date, only a handful of images of paramedullary diverticula have been published, and these are mostly “incidental hits” in CT images taken for other purposes—see for example the CT cross‐section of an ostrich neck in Wedel ( ). 1.4 Osteological traces Documentation of skeletal pneumatization by physical examination of dry bones is by now a well‐established practice (Hogg, ). Pneumatic foramina tend to be larger than neurovascular foramina, they lead to internal air spaces that differ in size and geometry from the trabecular spaces in non‐pneumatized bone, and they are often found in association with other pneumatic traces such as tracks and fossae (Britt, ; O'Connor, ). Although they have been little‐documented to date, pneumatic foramina inside the neural canal are osteological traces of paramedullary diverticula. As with other osteological correlates of pneumaticity, pneumatic foramina inside the neural canal tend to be highly variable among taxa, among individuals within a population or species, and serially along the vertebral column of a single individual. To our knowledge, no‐one has previously attempted a phylogenetically broad survey of paramedullary diverticula using CT images (or any other medium for that matter). These structures have been previously described in only a handful of genera from six major clades: duck ( Anas platyrhynchos ), Anseriformes (Sappey, ); chicken ( Gallus gallus ), Galliformes (Campana, ); rock dove ( Columba livia ), Columbiformes (Baer, ; Müller, ); hummingbirds, Trochilidae (Stanislaus, ); turkeys ( Meleagris gallopavo ), Galliformes (Cover, ); and ostriches ( Struthio camelus ), Struthioniformes (Bezuidenhout et al., ). Of these, only four describe paramedullary diverticula in detail (the studies on Anas , Gallus , Columba and Meleagris ). In the current study, using CT data, we report on the morphology and variation of supramedullary diverticula in 57 specimens representative of 29 taxa from 17 major avian clades, thereby substantially expanding our knowledge of this little‐known aspect of vertebrate morphology. 
LITERATURE REVIEW The earliest detailed description of paramedullary diverticula that we have been able to find is that of Sappey ( ), who described and illustrated these structures in the duck. Sappey referred to the paramedullary diverticulum as the "canal aérifère intra‐rachidien" or "intra‐spinal air canal." Specific details in Sappey's description include connections between the paramedullary diverticula and the intertransverse diverticula, and pneumatization of the neural arch from the paramedullary diverticula (Sappey, : plate 3; Figure of current publication). Campana ( ) described and illustrated paramedullary diverticula in his magisterial description of the respiratory system in the chicken, Gallus domesticus , in which he referred to them as "canaux pneumatique intra‐rachidiens," or "intra‐spinal pneumatic canals." In this publication, Campana's figures 25–30 all show at least a portion of the paramedullary diverticula (see Figure of the current paper for reproductions of several of these). Campana ( ) described the paramedullary diverticula as originating from the intertransverse diverticula, and he noted numerous anastomoses among the paramedullary diverticula and other pneumatic diverticula adjacent to the vertebrae. Baer ( ) described and illustrated (in his plate 21) paramedullary diverticula in the pigeon, and their connections to the intertransverse canals. In contrast to Sappey ( ) and Campana ( ), he seems to have considered all diverticula in the neck of the pigeon to be derived from the clavicular sac rather than the cervical sacs; he referred to the paramedullary diverticula as "spinaler theil des clavicularen sackes" or "spinal part of the clavicular sac." He also described tiny extensions of the paramedullary diverticula that surround the costo‐vertebral articulations in the thoracic region (Baer, , p. 434). Müller ( ) described and illustrated paramedullary diverticula in the pigeon, Columba livia . He established detailed terminology for the various structures he observed. Canalis intertransversarius (intertransverse canals) are bilateral tubes running laterally along the vertebral column. In the cervical vertebrae, these pass through the transverse foramina together with the vertebral arteries. Diverticula supervertebralae are diverticular expansions of the air sac system on the antero‐dorsal surface of vertebrae. Finally, the terms diverticulum supramedullaire (supramedullary diverticula) and canalis supramedullaris (supramedullary canal) were reserved for extensions into the vertebral canal which contact the spinal cord (either as separate, paired, or continuous, anastomosing structures, respectively). Müller's description of the paramedullary diverticula is concise, detailed, and worth quoting in full (Müller, , p. 377): The medullary diverticula are given off from the cervical canal just in front of the foramina transversaria. They consist of extravertebral and intravertebral portions. The extravertebral portions are small and simple vesicles. The intravertebral portions, which I name diverticula supramedullaria (fig. 12, DSPM 1; figs. 11 and 12, DSPM 2), enter the medullary canal through the intervertebral foramina, and extend dorsally from the spinal cord.
Within the medullary canal they widen out and impinge upon the corresponding diverticula of the opposite side. They partly unite with these as well as with the adjacent diverticula (in front and behind) of the same side, to form a continuous canal, sickle‐shaped in transverse section, and lying above the medulla, the canalis supramedullaris (figs. 3, 4, 5, 7, and 12, MEA). The partial absorption of the walls of these diverticula which leads to the formation of this canal, takes place during the growth of the bird, and posteriorly, near the thorax, where the canal is widest, is usually quite completed in middle‐aged birds. Anteriorly this absorption decreases as the medullary diverticula become smaller, the completely formed supramedullary canal usually extending no farther than the third or fourth cervical vertebra. Anterior to that it is replaced by two rows of isolated diverticula (fig. 12). The posterior end of the supramedullary canal lies near the last cervical vertebra. Occasionally it communicates here with the corresponding canal of the thoracic vertebrae. Müller ( , pp. 377–378) went on to describe a similar system in the thoracic vertebrae of the pigeon. He explicitly described the paramedullary diverticula in the thoracic vertebrae as arising from the posterior portions of the cervical air sac, and says that the dorsal ribs are pneumatized by “fine tubules” extending from the paramedullary airways. We will revisit these points below, in the context of more recent descriptive work. In his description of the lungs of hummingbirds (Trochilidae), Stanislaus (1937, figure 5) illustrated a single midline paramedullary diverticulum arising from paired connections to the cervical air sacs, with lateral extensions that do not seem, from the figure, to form a continuous intertransverse canal on either side. However, the paramedullary and other vertebral diverticula are only illustrated en passant and not described in detail, as the paper was focused on the external and internal anatomy of the lungs and bronchi. Cover (1953, figure 2) illustrated the cervical diverticula in the turkey, Meleagris gallopavo , including paramedullary diverticula. Cover did not cite Campana ( ) or Müller ( ) and does not seem to have been aware of their prior work. He used a new and completely different system of nomenclature for the various diverticula in and around the cervical vertebrae (see Table ). Cover's “cervical extensions” are synonymous with Müller's “intertransverse canals,” as these diverticula represent branches, or extensions, of the cervical air sacs. Cover's “anterior prolongation” is partially analogous to Müller's “supravertebral diverticula.” While Müller's term refers to diverticula on the anterodorsal surface of the vertebra, Cover's describes both this anterodorsal portion as well as a lateral connection with the intertransverse diverticula. This divergence in definitions is likely a consequence of taxonomic variation in diverticular morphology, and the specific taxonomic focus of each study. Müller did not observe a connection between the intertransverse and supravertebral diverticula in the pigeon (the only taxon included in his study). Cover, in contrast, only observed diverticula with such a connection in his description of the respiratory system in the wild turkey. Finally, Müller's “ diverticula supramedullaires ” are designated “intraspinal connections” and “dorsal confluence” by Cover (1953, caption of figure 2). 
Again, this difference hints at previously undescribed variation in diverticular morphology among different taxa, upon which we elaborate in the current study. In Cover ( ), the in‐text description of the paramedullary diverticula is limited to a single sentence (Cover, , p. 241): "An anastomosing radicle passes from the junction through the vertebral canal dorsal to the spinal cord." Cover ( , p. 241) also described extra‐vertebral diverticula of the cervical air sacs passing "caudally along the sides of the vertebrae as far as the fourth coccygeal," a point that will become important later on. King ( ) reviewed the then‐existing literature on the cervical diverticula in birds, as part of a larger paper on the structure and function of the lungs and air sacs. He consistently referred to paramedullary diverticula as "dorsal tubes," although this seems to have been deliberately informal and not an attempt to establish novel anatomical terminology. His descriptions of the paramedullary diverticula generally follow those of Müller ( ) and Cover ( ). In particular, King ( ) followed Müller ( ) in describing paramedullary diverticula in the postcervical vertebral column as having originated from the cervical air sacs—a point disputed by later authors. Bezuidenhout et al. ( , p. 324) described the paramedullary diverticula in the ostrich ( Struthio camelus ) as follows: The cranial vertebral diverticula were tubular structures that accompanied the vertebral blood vessels through the transverse canal of the cervical vertebrae. They extended to the level of the axis (C2). Along the way they gave off supravertebral diverticula which lay around the articular processes and supramedullary diverticula that passed through the intervertebral foramina to form a continuous tube dorsally to the spinal cord. The most recent detailed description of the paramedullary diverticula is that of O'Connor ( ), who used a combination of anglicized Müller terminology together with original jargon, sometimes renaming Müller's structures and in other cases naming structures not previously described. O'Connor applied the term "lateral vertebral diverticulum" in place of Müller's " canalis intertransversarius ," but adapted his " diverticulum supravertebrale " and " diverticulum supramedullaire " as "supravertebral diverticula" and "supramedullary diverticula" respectively. He also coined a term for the diverticula that connect the supramedullary and intertransverse canals at intervertebral joints, referring to them as "anastomosing diverticula." O'Connor ( , table 1) defined the paramedullary diverticula (supramedullary diverticula or "SMDv" of his usage) as follows (table 1 in O'Connor, ): "longitudinal system variably occupying the extradural space within the vertebral canal. The SMDv is made up of contributions from (1) the cervical air sac, (2) pulmonary diverticula of the lung, and (3) perirenal diverticula of the abdominal air sac." O'Connor ( ) went on to provide detailed descriptions of the paramedullary diverticula derived from the cervical air sacs (pp. 1210–1211) and the abdominal air sacs (pp. 1211–1212), making the following key points: Paramedullary diverticula may form a single dorsal tube, two or more parallel tubes, or a jacket that completely surrounds the spinal cord, which O'Connor ( , p.
1210) refers to as a “peridural diverticulum.” The paramedullary diverticula are the source of the supravertebral diverticula; this is contra Cover , who described the paramedullary diverticulum as arising from the supravertebral diverticulum (“anterior prolongation” of his usage). No diverticula of the cervical air sacs, including the cervical paramedullary diverticula, extend farther caudally than the mid‐thoracic region. Paramedullary diverticula in the thoracic, synsacral, and caudal regions of the vertebral column arise from the lungs or abdominal air sacs, although they may anastomose with diverticula of the cervical air sacs in the mid‐thoracic region. This is contra Cover ( ) and King ( ), both of whom explicitly described postcervical vertebral diverticula as having arisen from the cervical air sacs. If paramedullary diverticula derived from the cervical and abdominal air sacs anastomose, “cranial (cervical air sac diverticula) and caudal (abdominal air sac diverticula) components of the air sac system can communicate with one another via the vertebral canal” (p. 1211). Dorsal ribs may be pneumatized by lateral vertebral diverticula of the cervical air sacs, or by pulmonary diverticula of the lung itself (p. 1211), but not by the paramedullary diverticula (contra Müller, ). From the foregoing descriptions and discussions, several points remain unresolved, which are discussed below. 2.1 Do paramedullary diverticula of the cervical air sac extend into the thoracic region, or to other more posterior regions of the vertebral column? Müller ( ) actually described two different sets of paramedullary diverticula originating from the cervical air sacs in the pigeon. The first arises at each intervertebral joint in the cervical column as a bilateral, medial extension of the intertransverse diverticulum. This system terminates “near the last cervical vertebra” (p. 377). Müller ( : pp. 377–378) described the second system as follows: From the distal end of the pars ovalis [of the cervical air sac] of either side a ventrally flattened tube arises. This passes between the vertebral muscles and through the intervertebral foramen in front of the first thoracic vertebra into the spinal canal, where it unites with the corresponding tubule from the opposite side, both together forming a duct similar to the canalis supramedullaris. This duct extends backward but does not reach the last thoracic vertebra. It is very variable, and sends fine branches into the vertebrae and the ribs. The second system is described as having an inside‐to‐outside developmental sequence, with the paramedullary diverticula giving rise to extravertebral diverticula and pneumatizing the dorsal ribs. In contrast, O'Connor ( ) argued that the dorsal ribs were pneumatized by lateral vertebral diverticula of the cervical air sacs, or by pulmonary diverticula of the lungs, and that the paramedullary diverticula in the thoracic region could arise from the pulmonary diverticula or as anterior extensions from the abdominal air sacs. It is interesting to note that Müller ( , p. 378) allowed that, “It has sometimes seemed to me that the ribs were pneumatized directly from the lungs,” which is consistent with the findings of O'Connor ( ). We also note that in the absence of developmental information, it is impossible to tell whether an anastomosing system developed from outside‐to‐inside or vice versa. 
Possibly the paramedullary diverticula in the thoracic region of the pigeon do in fact arise from extra‐vertebral diverticula of the lungs, as described by O'Connor ( ), and Müller ( ) simply got the arrow of developmental history backwards. There is also the contention of Cover ( ) and later secondary sources (e.g., King, , ) that extra‐vertebral diverticula of the cervical air sac extend back as far as the tail. This possibility is strongly contradicted by O'Connor ( ), who found that no cervical diverticula of any form persisted farther posteriorly than the mid‐thoracic region. This may be another case of ontogenetic confusion, in which the anastomosis of several, originally independent sets of vertebral diverticula gives rise to a continuous airway, which earlier authors mistakenly attributed entirely to the cervical air sacs. Finally, we note that Schachner et al. ( ) report that the primary source of the various vertebral diverticula found in the cervical region is in fact the cranial portion of the lungs, rather than the cervical air sacs as described in other birds (though the authors note that connections between the cervical air sacs and vertebral diverticula are still very likely). These various findings indicate that ontogenetic data and greater interspecific sampling are needed to resolve these issues. 2.2 Do paramedullary diverticula more typically form as a single tube, paired tubes, or multiple tubes, and how does this vary among regions of the vertebral column and among taxa? This question probably does not represent differences of fact or opinion among previous authors, but rather the actual diversity of morphology of paramedullary diverticula among different taxa of birds. Müller ( ) described paired tubes in the cervical region of the pigeon, and a single unpaired tube in the thoracic region. Stanislaus ( ) and Cover ( ) illustrated unpaired tubes in the necks of hummingbirds and turkeys, respectively. O'Connor ( ) described the cervical paramedullary diverticula as forming a single tube in some taxa (e.g., ducks), parallel tubes in others (e.g., ostriches), and a complete jacket enclosing the spinal cord in still others (e.g., storks and pelicans). We should therefore be alert to variation in the form and extent of the paramedullary diverticula along the vertebral column in individual birds, and also to pronounced variations among taxa. 2.3 Museum acronyms Acronyms are as follows: LACM, Los Angeles County Museum, Los Angeles, CA, USA; MVZ, Museum of Vertebrate Zoology, University of California, Berkeley, CA, USA.
MATERIALS AND METHODS 3.1 Nomenclature Though comparatively few previous publications report on paramedullary (formerly "supramedullary") diverticula, the nomenclature created to describe diverticular anastomoses in and around avian vertebrae has a complex history and is at times both convoluted and redundant. A summary of the history of terms used, along with a clarification of nomenclature applied in this paper, can be found in Table . In the current study, we primarily use anglicized versions of Müller's ( ) terminology. We also retain usage of Cover's ( ) "anterior prolongation," adapting the term to apply to the diverticula connecting the supravertebral and intertransverse diverticula. However, based on the results of the current study, we adopt a new term in place of "supramedullary diverticula" or "intraspinal connections." As described in detail below, we document high levels of variation in the morphology and arrangement of these structures, and find that they are neither exclusively "supra" relative to the spinal cord, nor are they necessarily restricted to the vertebral canal as implied by the descriptor "intraspinal." In his expansive paper on the avian air sac system, O'Connor ( ) also mentions such variation, even suggesting that in some cases "peridural" is a more apt descriptor of these diverticula. Based on our detailed observations, we suggest the name of these structures be changed to the more general "paramedullary diverticula" and refer to them as such throughout this work. Because paramedullary diverticula exhibit such a varied gradation of morphologies, we do not apply O'Connor's term "anastomosing diverticula"; evidence presented here instead suggests that these are merely one morphological variant of paramedullary diverticula. 3.2 Data collection Naturally deceased specimens were received as donations from the OK Corral Ostrich Farm in Oro Grande, CA (the source of the ostriches); Avian Resources in San Dimas, CA (the source of all other exotic taxa not native to California); and the Lindsay Wildlife Museum in Walnut Creek, CA, and the Society for Prevention of Animal Abuse in Monterey Co., CA (the source of all California native taxa). This study is intended as a preliminary general survey of paramedullary airways, and includes all available specimens regardless of age. Some taxa are represented by partial growth series, others only by adults or only chicks. It will be the focus of future work to explore the ontogeny of these structures in greater detail. Precise ages of most individuals at time of death are unknown; however, specimens were classified in the following qualitative growth stages based on identifications made by the wildlife hospitals, body size, and general external morphology (primarily the condition of the feathers): neonate, downy chick, pin‐feathered chick, prefledgling chick, fledgling chick, sub‐adult, and adult (Table ). MicroCT scanning of small‐bodied specimens (approximately 15 cm or less in length) occurred at the Center for Molecular and Genomic Imaging (CMGI) at the University of California, Davis, on a Zeiss Xradia microXCT‐200 (slice thickness = 48 μm) and the Oregon Health Sciences University in Portland, Oregon on a Perkin Elmer Quantum FX microCT scanner (slice thickness = 100 μm).
Scans of larger specimens were acquired at the University of Utah Medical Center Research Park in Salt Lake City, UT, on a 164-slice Siemens single-source medical CT scanner (slice thickness = 0.6–1 mm). We examined two-dimensional slices from these conventional CT and microCT scans of dead specimens in the program ImageJ (v1.53g). Most specimens have been skeletonized and are reposited in this museum (several were dissected for other studies and disposed of). Osteological specimens in the Ornithology collections of the Natural History Museum of Los Angeles County were also examined in a search for further examples of osteological correlates of paramedullary airways.
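Purely as an illustrative aside, and not the workflow actually used in this study (which relied on visual inspection of slices in ImageJ), a cross-sectional "air fraction" of the kind reported in the Results could be scripted in Python. The file names, the exported TIFF stack, the binary canal mask, and the threshold value below are all assumptions introduced for the sketch:

```python
# Illustrative sketch only (not the authors' workflow): estimate what fraction
# of a vertebral-canal cross-section is occupied by air-filled diverticula.
# Assumes the scan has been exported as a TIFF stack and the vertebral canal
# has been outlined as a binary mask; both inputs are hypothetical.
import numpy as np
import tifffile

def air_fraction_in_canal(ct_slice, canal_mask, air_threshold=-400):
    """Fraction of pixels inside the canal mask that read as air.

    The default threshold assumes calibrated Hounsfield-like values; raw
    microCT output would need a scan-specific cutoff instead.
    """
    canal_values = ct_slice[canal_mask]
    if canal_values.size == 0:
        return float("nan")
    return float(np.mean(canal_values < air_threshold))

# Hypothetical usage:
# stack = tifffile.imread("scan_stack.tif")                # (slices, rows, cols)
# mask = tifffile.imread("canal_mask_slice_120.tif") > 0   # binary canal outline
# print(air_fraction_in_canal(stack[120], mask))
```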
RESULTS

Paramedullary diverticula are common, but not omnipresent, among extant birds. While most groups in our dataset possess these diverticula (19 genera), they were completely absent in others (5 genera) (Table ). Notably, for all clades for which we were able to sample more than one genus (Accipitriformes, Charadriiformes, Passeriformes, Pelecaniformes, and Piciformes), the sampled members uniformly either possess or lack diverticula in contact with the spinal cord. Thus, based on current sampling, this character has strong, unequivocal phylogenetic signal.

Much variation in morphology exists not only when comparing taxa or individual specimens, but often within the vertebral column of a single individual. We describe the following four common morphologies (Figure ): (i) intertransverse diverticula branch and contact the spinal cord at intervertebral joints; (ii) branching of intertransverse diverticula at intervertebral joints extends partially into the vertebral canal, but is discontinuous with the diverticula of the following joint; (iii) paramedullary diverticula form sets of tubes that are continuous through the vertebral canals of at least two consecutive vertebrae; (iv) continuous diverticula within the vertebral canal anastomose with supravertebral diverticula. It is important to note, first, that these discrete descriptions in fact represent a continuum of morphologies; second, that within a single individual, combinations of at least two of these morphologies were often observed; and third, that other combinations exist (e.g., continuous paramedullary diverticula connected only with supravertebral diverticula), though these are rarer. Additionally, we observed notable variation in the shape, arrangement, and orientation of diverticula relative to the spinal cord (Figure ). One common morphology was a pair of tubes within the vertebral canal.
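As a minimal sketch only (none of the identifiers below come from the paper), the four-part scoring scheme just described, together with the positional variation noted above, could be encoded as follows when recording observations vertebra by vertebra:

```python
# Minimal sketch, not part of the original study: one way to encode the four
# recurring paramedullary morphologies (i-iv) and their position relative to
# the spinal cord. All identifiers are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Morphology(Enum):
    I = "contacts the cord only at intervertebral joints"
    II = "enters the vertebral canal but is discontinuous between joints"
    III = "continuous through at least two consecutive vertebrae"
    IV = "continuous and anastomosing with supravertebral diverticula"

@dataclass
class VertebraObservation:
    taxon: str
    region: str              # "cervical", "dorsal", "sacral", or "caudal"
    morphologies: frozenset  # a single vertebra can combine categories
    orientation: str         # e.g. "dorsal", "lateral", "ventral", "encircling"

# Illustrative record loosely based on the red-tailed hawk description below
# (the specific vertebra is not identified in the text):
example = VertebraObservation(
    taxon="Buteo jamaicensis",
    region="cervical",
    morphologies=frozenset({Morphology.II}),
    orientation="paired tubes dorsal to the cord",
)
```

Because the categories grade into one another, storing a set of categories rather than a single label preserves the mosaic combinations noted above.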
Indeed, paired paramedullary diverticula were observed in at least some portion of the vertebral column in every taxon where such structures were present, except the hummingbird, pelican, Western gull, and black-crowned night heron. These paired paramedullary diverticula are in most instances located dorsal to the spinal cord, but were observed lateral to the cord in the great-horned owl, bushtit, and Western scrub jay, and ventral to the cord in the violet turaco. Not uncommonly, we observed three tubes dorsal to the spinal cord rather than two. Sometimes the diverticula merge into a single C-shaped tube in contact with the spinal cord, often dorsally but occasionally laterodorsally (and therefore asymmetrically). We note, though, that in at least some examples where this was observed, we believe there were in fact two or three tubes closely adpressed together, separated by layers of epithelium so thin they were not visible on the CT scans. Perhaps the most striking morphology was seen in pelicans, in which the diverticula completely surround the spinal cord (as also reported by O'Connor, ) through the entire vertebral column (Figure ), except the anterior and mid-sacrals. Not only do these diverticula enter the vertebral canal, but frequently they also pneumatize the bone of the vertebral arch and body from within the canal. This was seen in the violet turaco, eclectus parrot, and pelican. In the pelican, the paramedullary diverticula enter the vertebral body via a foramen in the floor of the canal and expand such that a section of the vertebral body is highly pneumatized, consisting mainly of air within a jacket of thin-walled bone (Figure ).

4.1 Descriptions of paramedullary diverticula by clade

4.1.1 Accipitriformes

Buteo jamaicensis (red-tailed hawk): In the cervical vertebrae of red-tailed hawks, paramedullary diverticula are present as paired tubes dorsal to the spinal cord, within the vertebral canal but discontinuous between consecutive vertebrae (morphology ii; Figure ). These structures persist into the dorsal vertebrae, where they remain paired, discontinuous, and dorsal to the spinal cord in most vertebrae of this region. They are only variably present in the dorsal vertebrae, and completely absent in the synsacrum. One individual examined had a minute, single, continuous canal dorsal to the spinal cord in the free caudal vertebrae; the other specimen lacked diverticula in the caudal region.

Cathartes aura (turkey vulture): Paramedullary diverticula are present in the cervical region of the turkey vulture as short, discontinuous tubes (morphology ii; Figure ). They are paired and occur dorsal to the spinal cord. In the dorsal vertebrae they become more reduced, losing the posterior expansions into the vertebral canal but maintaining anterior expansions. They are present in the anterior dorsals but absent in the posterior part of this region. Paramedullary diverticula are absent in the synsacrum and caudal vertebrae.

4.1.2 Apodiformes

Calypte anna (Anna's hummingbird): In Anna's hummingbirds, cervical vertebrae exhibit variably continuous (morphology iii) and discontinuous (morphology ii) paramedullary diverticula, which frequently anastomose with supravertebral diverticula (morphology iv). These occur as a single, kidney-shaped tube dorsal to the spinal cord. Though hummingbirds are very small-bodied, the diverticula within the vertebral canal in this region are substantial, occupying on average about one third of the area of the canal.
Such diverticula are absent in all other vertebral regions.

4.1.3 Anseriformes

Anas platyrhynchos (mallard duck): In this study, mallard ducks were represented only by downy chicks; however, paramedullary diverticula are already present and well-developed in the cervical region, where they appear as paired tubes dorsal to the spinal cord. These structures are substantial, occupying about one third of the area of the vertebral canal. In the anterior cervicals, when present, they are discontinuous and exist only at intervertebral joints (morphology i). In the mid-cervicals, the diverticula also become continuous through consecutive vertebrae. They are absent in all other vertebral regions.

4.1.4 Charadriiformes

Larus occidentalis (Western gull): In the cervical region of Western gulls, paramedullary diverticula occur as a single, large, C-shaped tube that encases the spinal cord dorsally and laterally. In one individual, they form continuous tubes from vertebra to vertebra in the anterior cervicals; in a second individual, they were absent there. In the posterior cervicals of both individuals, they become discontinuous, but still enter the vertebral canal (morphology ii). In the dorsal region, paramedullary diverticula are small in the anterior vertebrae and absent in the posterior vertebrae. They are completely absent in the synsacrum and caudal region.

Uria aalge (common murre): In common murres, paramedullary diverticula are large (one fourth to one third of the vertebral canal area) and exhibit a range of morphologies across the cervical vertebrae. Anteriorly, they arise as paired tubes dorsal to the spinal cord. In the mid-cervicals, these grade into a single C-shaped canal continuous through consecutive vertebrae (morphology iii). This in turn diminishes abruptly, and the posterior cervicals completely lack such diverticula. These structures are also absent in all more posterior vertebrae.

4.1.5 Columbiformes

Zenaida macroura (mourning dove): In the cervical region of mourning doves, substantial paramedullary diverticula were observed. While absent in the anterior vertebrae, they appear in the mid- and posterior cervicals as paired tubes dorsal to the cord, oblong in shape (in transverse view). They first appear only at intervertebral joints (morphology i) but soon expand to invade the vertebral canal (morphology ii). Sometimes they become large enough to contact along the dorsal midline, forming a C-shaped canal. In the anterior dorsal vertebrae, paramedullary diverticula are paired and in contact with the cord only at intervertebral joints (morphology i). In the posterior dorsal vertebrae, very large paramedullary diverticula (approximately two thirds of the area of the vertebral canal) were observed as a single, C-shaped tube dorsal to the spinal cord, continuous through consecutive vertebrae (morphology iii).

4.1.6 Falconiformes

Falco sparverius (American kestrel): Paramedullary diverticula are absent in American kestrels.

4.1.7 Galliformes

Meleagris gallopavo (wild turkey): The paramedullary diverticula in the cervical region of the wild turkey are paired and discontinuous through the vertebral bodies. In the more posterior vertebrae of this region, they merge with the supravertebral diverticula at intervertebral joints but remain discontinuous, exhibiting a combination of morphologies ii and iv. In the dorsal region, paramedullary diverticula become larger and merge to form a C-shaped canal at intervertebral joints, connected directly to the cervical air sacs.
These structures invade the canal anteriorly, but only slightly; inside the canal they are greatly reduced in size. Small connections to supravertebral diverticula were also observed. In the mid-dorsals, paramedullary diverticula decrease in size overall and appear as thin, C-shaped tubes at intervertebral joints, connected to large, bilateral diverticula emanating directly from the lungs. Entering the vertebral canal, they appear as paired, squished tubes and are discontinuous through consecutive vertebrae (morphology ii). In the posterior dorsals these structures become highly reduced, and they are absent in the sacral and caudal vertebrae.

4.1.8 Gaviiformes

Gavia pacifica (Pacific loon): Paramedullary diverticula are absent in Pacific loons.

Gavia immer (common loon): Paramedullary diverticula are absent in common loons.

Gavia adamsii (yellow-billed loon): Paramedullary diverticula are absent in yellow-billed loons.

4.1.9 Musophagiformes

Musophaga violacea (violet turaco): Paramedullary diverticula in the violet turaco exhibit a range of unusual and elaborate morphologies, and are present in all four vertebral regions. In the cervicals, they are paired, ventral to the spinal cord, and very tiny in the atlas. These merge to form a single, thin, C-shaped tube ventral to the cord in the axis. In the other cervical vertebrae, paramedullary diverticula are paired, small, and lateroventral, but shift to a more lateral position and become continuous in the posterior neck (morphology iii). In the dorsal region, these diverticula become larger and discontinuous, persisting as paired tubes now dorsal to the spinal cord. In one vertebra, we observed a unique morphology of two bilateral pairs of small, circular diverticula, forming a total of four separate tubes. These subsequently merge to form a single, continuous, C-shaped tube dorsal to the spinal cord in the mid-dorsals, and become discontinuous and paired again in the posterior dorsals. In the mid- and posterior dorsal vertebrae, paramedullary diverticula connect directly to the lungs. In the synsacrum, diverticula are not continuous with those from the dorsal vertebrae. Anteriorly, a single, unpaired diverticulum is the first structure apparent, though paired tubes rapidly appear. These canals are flattened and appear both ventral and dorsal to the cord. Moving posteriorly, they eventually expand to meet laterally, fully jacketing the spinal cord in the last sacral vertebra. Paramedullary diverticula in this region are connected to perirenal diverticula. The spinal cord is bounded dorsally by a single, very large tube through most of the free caudals, and posteriorly becomes fully jacketed. This morphology persists into the pygostyle, where the spinal cord is very small and completely surrounded by paramedullary diverticula. These also appear to originate from the perirenal diverticula.

4.1.10 Passeriformes

Aphelocoma californica (scrub jay): In scrub jays, paramedullary diverticula were observed only in the cervical vertebrae. Here, they are paired tubes lateral to the spinal cord, present in the posterior cervicals only. They are discontinuous and enter the vertebral canal to a variable degree (ranging between morphologies i and ii). In the single adult observed, the canals were uniformly small. In a prefledgling individual, they became much larger moving posteriorly through this region, hinting at possible variation through ontogeny.
Melozone crissalis (California towhee): Of the two California towhees included in this study, one had paramedullary diverticula only in the cervical vertebrae. In the other, they were present in the cervicals and the first two dorsal vertebrae. In the cervical region, diverticula are paired and continuous anteriorly, expanding to become very large at intervertebral joints and appearing as smaller, squished tubes inside the canal. Notably, in the anterior cervicals, paramedullary diverticula primarily connect to the supravertebral diverticula and only merge with small intertransverse diverticula mid-way through the neck (a mosaic combination of morphologies iii and iv). Eventually, in the posterior cervicals, the intertransverse diverticula become dominant and invade the vertebral canal increasingly less, until paramedullary diverticula are absent in all more posterior vertebrae.

Psaltriparus minimus (bushtit): Six bushtits were included in this study. Of these, four were pin-feathered chicks (very immature), all of which lacked paramedullary diverticula. However, two older individuals (fledglings) had paramedullary diverticula in the cervical vertebrae. These structures do not arise until mid-neck, where a single, asymmetrical tube first appears before becoming bilaterally paired. In this taxon, the canals are sizeable, occupying about one third of the area of the vertebral canal. They invade the canal and are variably continuous between consecutive vertebrae (a combination of morphologies ii and iii).

Spinus psaltria (lesser goldfinch): Paramedullary diverticula in the lesser goldfinch are minute and present only in some cervical vertebrae. They are absent in all other vertebral regions. They were observed as small, paired tubes, alternately continuous and discontinuous between consecutive vertebrae (morphologies ii and iii), in the posterior-most cervicals only.

4.1.11 Pelecaniformes

Nycticorax nycticorax (black-crowned night heron): In the black-crowned night heron, paramedullary diverticula are present only in the cervical vertebrae and absent in all other regions. General diverticular morphology around these vertebrae was very elaborate, with particularly prominent intertransverse canals and large paramedullary diverticula. In the anterior and mid-cervicals, the paramedullary diverticula invade the vertebral canal and grade between serially continuous and discontinuous (morphologies ii and iii). In the posterior cervicals, they appear as a single C-shaped tube dorsal to the spinal cord, discontinuous between consecutive vertebrae (morphology ii). There were no notable differences between the fledgling chick and the adult.

Pelecanus occidentalis (brown pelican): Paramedullary diverticula in the brown pelican are large, elaborate, and present in most of the vertebral column. In the cervical vertebrae, these structures first appear in the anterior cervicals, posterior to the atlas and axis. They completely encircle the spinal cord, merge with both the intertransverse and supravertebral diverticula, and are serially continuous throughout the vertebrae of this region (morphology iv). In the dorsal vertebrae, this morphology persists. Within the vertebral canal, these diverticula pneumatize the vertebral body via foramina in the floor of the canal in three consecutive vertebrae. Paramedullary diverticula are absent in the posterior dorsals and through most of the synsacrum. They were observed only in the posterior-most sacral vertebrae, as minuscule, paired tubes.
In the free caudal vertebrae, they once again expand to nearly or fully encircle the spinal cord (varying between O- and U-shaped).

Phalacrocorax penicillatus (Brandt's cormorant): Paramedullary diverticula are restricted to the anterior vertebral regions in Brandt's cormorant. They are present in the cervical vertebrae, and occasionally extend into the anterior-most dorsal vertebrae. In the cervical region, they first appear as small, paired tubes dorsal to the cord in the mid-cervicals. These quickly expand to become C-shaped and extend much further into the vertebral canals. However, the size of the canals changes dramatically as they extend, becoming much smaller at the mid-point of each vertebra. Thus, they appear as barely continuous, connecting to each other at their point of smallest volume (morphology ii grades into morphology iii). In the posterior cervicals, they once again become small, paired, and discontinuous. In one individual observed, they ceased at the end of the cervical series; in another, they were small, paired, and present only at intervertebral joints in the anterior-most dorsals.

4.1.12 Piciformes

Dryobates nuttallii (Nuttall's woodpecker): Paramedullary diverticula are absent in Nuttall's woodpeckers.

Melanerpes formicivorus (acorn woodpecker): Paramedullary diverticula are absent in acorn woodpeckers.

4.1.13 Podicipediformes

Aechmophorus occidentalis (Western grebe): Paramedullary diverticula are absent in Western grebes.

Aechmophorus clarkii (Clark's grebe): Paramedullary diverticula are absent in Clark's grebes.

4.1.14 Procellariiformes

Puffinus griseus (sooty shearwater): In the sooty shearwater, paramedullary airways are present in the cervical and dorsal vertebrae, but absent in the more posterior regions. In the cervical vertebrae these structures first appear mid-neck as a single, C-shaped tube dorsal and lateral to the spinal cord. They are intermittently continuous and discontinuous (morphologies ii and iii) throughout the rest of the cervical series. In the anterior dorsal vertebrae, paramedullary airways are present as large, paired tubes that frequently merge to form a C-shaped tube. They are discontinuous and connected to the cervical air sacs. In the mid-dorsal vertebrae, the size of these diverticula is reduced and they are connected directly to diverticula from the lungs. They are absent in the posterior dorsals.

4.1.15 Psittaciformes

Pyrrhura molinae (green-cheeked conure): In the green-cheeked conure, paramedullary diverticula are present throughout the cervical and dorsal vertebrae, appear variably in the posterior sacral vertebrae, and are absent in all free caudals. In the cervicals, paramedullary diverticula are paired, discontinuous, and quite small, and are present only intermittently throughout this region. In the dorsal vertebrae, they become much more substantial. They expand to form a larger pair of canals that sometimes contact at the midline to form a C-shaped structure. Anteriorly, they remain discontinuous but do invade the vertebral canal (morphology ii). In the mid- and posterior dorsals, these diverticula become continuous (morphology iii) but narrow substantially mid-canal. Overall, the paramedullary diverticula of this region are much larger than those in the cervical vertebrae. In one individual, there were small, paired canals in the last two sacral vertebrae.
Eclectus roratus (eclectus parrot): Paramedullary diverticula are very large in the eclectus parrot, and were observed throughout the cervical and dorsal vertebrae, as well as the posterior sacrals. They are absent in the caudal vertebrae. In the neck, the atlas, axis, and C3 are all encircled by a thick layer of paramedullary diverticula. This morphs into paired, discontinuous diverticula throughout the rest of the cervical series (morphology ii). In the thoracic region, these structures become substantially enlarged, forming a single large tube dorsal to the spinal cord that is continuous through the region (morphology iii). This is primarily connected to diverticula branching directly from the lungs, though anteriorly there are also connections to the cervical air sacs. We also noted one dorsal vertebral arch that was pneumatized by paramedullary airways via a foramen in the roof of the vertebral canal. These diverticula disappear in the synsacrum, but briefly reappear as large, paired tubes in the last two sacral vertebrae.

4.1.16 Strigiformes

Bubo virginianus (great-horned owl): In the great-horned owl, paramedullary airways are present in all cervical vertebrae (except the atlas and axis) as paired, discontinuous tubes that invade the vertebral canal (morphology ii). In the thoracic region, this morphology persists with a moderate reduction in the size of the canals, sometimes merging at the midline to form a C-shaped canal. They are absent through most sacral vertebrae, but are present as tiny, paired tubes lateral to the spinal cord at the very end of the synsacrum. Paramedullary diverticula are absent in the caudal vertebrae. Notably, the appearance of these structures was very similar between the two adults and the single pin-feathered chick included in the study.

4.1.17 Struthioniformes

Struthio camelus (ostrich): A whole-body CT scan was available only for a downy ostrich chick, though paramedullary diverticula are already prominent and elaborate in the cervical region even at this relatively early ontogenetic stage. Excepting the atlas and axis, they are present in all other vertebrae of this region. Most commonly, they exist as paired tubes dorsal to the cord, which merge to form a C-shaped canal posteriorly. They are continuous through consecutive vertebrae (morphology iii) and occasionally merge with supravertebral diverticula (morphology iv). Posteriorly, this morphs into discontinuous paired tubes (morphology ii). In the dorsal vertebrae, this morphology persists with intermittent connections to the supravertebral diverticula at intervertebral joints. Anteriorly they are paired, ovoid structures. In the mid- and posterior dorsals, three canals dorsal and lateral to the spinal cord appear.
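To make the pattern noted at the start of the Results concrete, the presence/absence observations for the clades sampled with more than one genus can be tabulated and checked for internal consistency. The snippet below is only an illustrative restatement of the descriptions above, not an analysis performed in this study, and the helper function is invented for the sketch:

```python
# Illustrative restatement of the presence/absence pattern described above for
# clades sampled with more than one genus; the helper is not from the paper.
presence_by_clade = {
    "Accipitriformes": {"Buteo": True, "Cathartes": True},
    "Charadriiformes": {"Larus": True, "Uria": True},
    "Passeriformes": {"Aphelocoma": True, "Melozone": True,
                      "Psaltriparus": True, "Spinus": True},
    "Pelecaniformes": {"Nycticorax": True, "Pelecanus": True,
                       "Phalacrocorax": True},
    "Piciformes": {"Dryobates": False, "Melanerpes": False},
}

def internally_consistent_clades(table):
    """Clades sampled for more than one genus whose members all agree on the
    presence or absence of paramedullary diverticula."""
    return [clade for clade, genera in table.items()
            if len(genera) > 1 and len(set(genera.values())) == 1]

print(internally_consistent_clades(presence_by_clade))
# All five multi-genus clades agree internally, matching the pattern reported
# at the start of the Results.
```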
Cathartes aura (turkey vulture): Paramedullary diverticula are present in the cervical region as discontinuous, short tubes in the turkey vulture (morphology ii; Figure ). They are paired and occur dorsal to the spinal cord. In the dorsal vertebrae they become more reduced, losing the posterior expansions into the vertebral canal but maintaining anterior expansions. They are present in the anterior dorsals but absent in the posterior section of this region. Paramedullary diverticula are absent in the synsacrum and caudal vertebrae. 4.1.2 Apodiformes Calypte anna (Anna's hummingbird): In Anna's hummingbirds, cervical vertebrae exhibit variably continuous (morphology iii) and discontinuous (morphology ii) paramedullary diverticula, which frequently anastomose with supravertebral diverticula (morphology iv). These occur as a single, kidney‐shaped tube dorsal to the spinal cord. Though hummingbird body size is very tiny, diverticula within the vertebral canal in this region are substantial, occupying on average about one third of the area of the space. Such diverticula are absent in all other vertebral regions. 4.1.3 Anseriformes Anas platyrhynchos (mallard duck): In this study, mallard ducks were represented only by downy chicks; however, paramedullary diverticula are already present and well‐developed in the cervical region where they appear as paired tubes dorsal to the spinal cord. These structures are substantial, occupying about one third of the area of the vertebral canal. In anterior cervicals, when present, they are discontinuous and exist only at intervertebral joints (morphology i). In the mid‐cervicals, the diverticula are also continuous through consecutive vertebrae. They are absent in all other vertebral regions. 4.1.4 Charadriiformes Larus occidentalis (Western gull): In the cervical region of Western gulls, paramedullary diverticula occur as a single, large, C‐shaped tube that encases the spinal cord dorsally and laterally. In one observed individual, they form continuous tubes from vertebra to vertebra in the anterior cervicals. In a second individual they were absent. In the posterior cervicals of both individuals, they become discontinuous, but still enter the vertebral canal (morphology ii). In the dorsal region paramedullary diverticula are small in the anterior vertebrae and absent in the posterior vertebrae. They are completely absent in the synsacrum and caudal region. Uria aalge (common murre): In common murres, paramedullary diverticula are large (one fourth to one third of vertebral canal area) and exhibit a range of morphologies across the cervical vertebrae. Anteriorly, they arise as paired tubes dorsal to the spinal cord. In the mid‐cervicals, these grade into a single C‐shaped canal continuous through consecutive vertebrae (morphology iii). This in turn diminishes abruptly, and the posterior cervicals completely lack such diverticula. These structures are also absent in all more posterior vertebrae. 4.1.5 Columbiformes Zenaida macroura (mourning dove): In the cervical region of mourning doves, substantial paramedullary diverticula were observed. While absent in the anterior vertebrae, they appear in mid‐ and posterior cervicals as paired tubes dorsal to the cord and oblong in shape (in transverse view). They first appear only at intervertebral joints (morphology i) but soon expand to invade the vertebral canal (morphology ii). Sometimes they become large enough to contact along the dorsal midline, forming a C‐shaped canal. 
In the anterior dorsal vertebrae, paramedullary diverticula are paired and in contact with the cord only at intervertebral joints (morphology i). In the posterior dorsal vertebrae very large paramedullary diverticula (approximately two‐thirds of area of vertebral canal) were observed as a single, C‐shaped tube dorsal to the spinal cord continuous through consecutive vertebrae (morphology i). 4.1.6 Falconiformes Falco sparverius (American kestrel): Paramedullary diverticula are absent in American kestrels. 4.1.7 Galliformes Meleagris gallopavo (wild turkey): The paramedullary diverticula in the cervical region of the wild turkey are paired and discontinuous through vertebral bodies. In the more posterior vertebrae of this region, they merge with the supravertebral diverticula at intervertebral joints but remain discontinuous, exhibiting a combination of morphologies ii and iv. In the dorsal region, paramedullary diverticula become larger and merge to form a C‐shaped canal at intervertebral joints, connected directly to the cervical air sacs. These structures invade the canal anteriorly, but only slightly; inside of the canal the size is highly reduced. Small connections to supravertebral diverticula were also observed. In the mid‐dorsals, paramedullary diverticula decrease in size overall and appear as thin, C‐shaped tubes at intervertebral joints, connected directly to large, bilateral diverticula emanating from the lungs directly. Going into the vertebral canal they appear as paired, squished tubes and are discontinuous through consecutive vertebrae (morphology iii). In posterior dorsals these structures become highly reduced, and are absent in the sacral and caudal vertebrae. 4.1.8 Gaviiformes Gavia pacifica (Pacific loon): Paramedullary diverticula are absent in Pacific loons. Gavia immer (common loon): Paramedullary diverticula are absent in common loons. Gavia adamsii (yellow‐billed loon): Paramedullary diverticula are absent in yellow‐billed loons. 4.1.9 Musophagiformes Musophaga violacea (violet turaco): Paramedullary diverticula in the violet turaco exhibit a range of unusual and elaborate morphologies, and are present in all four vertebral regions. In the cervicals, they are paired, ventral to the spinal cord, and very tiny in the atlas. These merge to form a single, thin, C‐shaped tube ventral to the cord in the axis. In other cervical vertebrae, paramedullary diverticula are paired, small, and lateroventral, but shift to a more lateral position and become continuous in the posterior neck (morphology iii). In the dorsal region, these diverticula become larger and discontinuous, persisting as paired tubes now dorsal to the spinal cord. In one vertebra, we observed a unique morphology of two bilateral pairs of small, circular diverticula, forming a total of four separate tubes. These subsequently merge to form a single, C‐shaped continuous tube dorsal to the spinal cord in the mid‐dorsals, and become discontinuous and paired again in posterior dorsals. In the mid‐ and posterior dorsal vertebra, paramedullary diverticula connect directly to the lungs. In the synsacrum, diverticula are not continuous with those from the dorsal vertebrae. Anteriorly, a single, unpaired diverticulum is the first structure apparent, though paired tubes rapidly appear. These canals are flattened and appear both ventral and dorsal to the cord. Moving posteriorly, they eventually expand to meet laterally, fully jacketing the last sacral vertebra. 
Paramedullary diverticula in this region are connected to perirenal diverticula. The spinal cord is bounded dorsally by a single, very large tube through most of the free caudals, and posteriorly becomes fully jacketed. This morphology persists into the pygostyle, where the spinal cord is very small and completely surrounded by paramedullary diverticula. These also appear to originate from the perirenal diverticula. 4.1.10 Passeriformes Aphelocoma californica (scrub jay): In scrub jays, paramedullary diverticula were only observed in the cervical vertebrae. Here, they are paired tubes lateral to the spinal cord present in posterior cervicals only. They are discontinuous and variably enter the vertebral canal to a greater or lesser degree (ranging between morphologies i and ii). In the single adult observed, the canals were uniformly small. In a prefledgling individual also observed, they become much larger moving posteriorly in this region, hinting at possible variation through ontogeny. Melozone crisallis (California towhee): Of the two California towhees included in this study, one had paramedullary diverticula only in the cervical vertebrae. In the other, they were present in the cervicals and first two dorsal vertebrae. In the cervical region, diverticula are paired and continuous anteriorly, expanding to become very large at intervertebral joints and appearing as smaller, squished tubes inside of the canal. Notably, in the anterior cervicals, paramedullary diverticula primarily connect to the supravertebral diverticula and only merge with small, intertransverse diverticula mid‐way through the neck (a mosaic combination of morphologies iii and iv). Eventually, in the posterior cervicals, the intertransverse diverticula become dominant and invade the vertebral canal increasingly less, until paramedullary diverticula are absent in all more posterior vertebrae. Psaltriparus minimus (bushtit): Six bushtits were included in this study. Of these, four were pin‐feathered chicks (very immature), all of which lacked paramedullary diverticula. However, two older individuals (fledglings) had paramedullary diverticula in the cervical vertebrae. These structures do not arise until mid‐neck, where a single, asymmetrical tube first appears before becoming bilaterally paired. In this taxon, the canals are sizeable, occupying about one third of the area of the vertebral canal. They invade the canal and are variably continuous between consecutive vertebrae (a combination of morphologies ii and iii). Spinus psaltria (lesser goldfinch): Paramedullary diverticula in the lesser goldfinch are minute and only present in some cervical vertebrae. They are absent in all other vertebral regions. They were observed as small, paired tubes alternately continuous and discontinuous between consecutive vertebrae (morphologies ii and iii) in the posterior‐most cervicals only. 4.1.11 Pelicaniformes Nycticorax nycticorax (black‐crowned night heron): In the black‐crowned night heron, paramedullary diverticula are present only in the cervical vertebrae and absent in all other regions. General diverticular morphology around these vertebrae was very elaborate, with particularly prominent intertransverse canals and large paramedullary diverticula. In anterior and mid‐cervicals, the paramedullary diverticula invade the vertebral canal and grade between serially continuous and discontinuous (morphologies ii and iii). 
In posterior cervicals, they appear as a single C‐shaped tube dorsal to the spinal cord and discontinuous between consecutive vertebrae (morphology ii). There were no notable differences between the fledgling chick and adult. Pelecanus occidentalis (brown pelican): Paramedullary diverticula in the brown pelican are large, elaborate, and present in most of the vertebral column. In the cervical vertebrae, these structures first appear in the anterior cervicals posterior to the atlas and axis. They completely encircle the spinal cord, merge with both the intertransverse and supravertebral diverticula, and are serially continuous throughout the vertebrae of this region (morphology iv). In the dorsal vertebrae, this morphology persists. Within the vertebral canal, these diverticula pneumatize the vertebral body via foramina in the floor of the canal in three consecutive vertebrae. Paramedullary diverticula are absent in posterior dorsals and through most of the synsacrum. They were only observed in the posterior‐most sacral vertebrae, as miniscule, paired tubes. In the free caudal vertebrae, they once again expand to nearly or fully encircle the spinal cord (varying between O‐ and U‐shaped). Phalacrocorax penicillatus (Brandt's cormorant): Paramedullary diverticula are restricted to anterior vertebral regions in Brandt's cormorant. They are present in the cervical vertebrae, and occasionally extend into the anterior‐most dorsal vertebrae. In the cervical region, they first appear as small, paired tubes dorsal to the cord in the mid‐cervicals. These quickly expand to become C‐shaped and extend much further into vertebral canals. However, the size of the canals changes dramatically as they extend, becoming much smaller at the mid‐point of each vertebra. Thus, they appear as barely continuous, connecting to each other at their point of smallest volume (morphology ii grades into morphology iii). In the posterior cervicals, they once again become small, paired, and discontinuous. In one individual observed, they ceased at the end of the cervical series. In another, they appeared as small, paired, and present only at intervertebral joints in the anterior‐most dorsals. 4.1.12 Piciformes Dryobates nuttallii (Nuttall's woodpecker): Paramedullary diverticula are absent in Nuttall's woodpeckers. Melanerpes formicivorus (acorn woodpecker): Paramedullary diverticula are absent in the acorn woodpeckers. 4.1.13 Podicipediformes Aechmorphus occidentalis (Western grebe): Paramedullary diverticula are absent in Western grebes. Aechmorphus clarkii (Clarke's grebe): Paramedullary diverticula are absent in Clarke's grebes. 4.1.14 Procellariiformes Puffinus griseus (sooty shearwater): In the sooty shearwater, paramedullary airways are present in the cervical and dorsal vertebrae, but absent in the more posterior regions. In the cervical vertebrae these structures first appear mid‐neck as a single, C‐shaped tube dorsal and lateral to the spinal cord. They are intermittently continuous and discontinuous (morphologies ii and iii) throughout the rest of the cervical series. In the anterior dorsal vertebrae, paramedullary airways are present as large, paired tubes that frequently merge to form a C‐shaped tube. They are discontinuous and connected to the cervical air sacs. In the mid‐dorsal vertebrae, the size of these diverticula is reduced and they are connected directly to diverticula from the lungs. They are absent in the posterior dorsals. 
4.1.15 Psittaciformes Pyrrhura molinae (green‐cheeked conure): In the green‐cheeked conure paramedullary diverticula are present throughout the cervical and dorsal vertebrae, appear variably in the posterior sacral vertebrae, and are absent in all free caudals. In the cervicals, paramedullary diverticula are pared, discontinuous, and quite small. They are only present intermittently throughout this region. In the dorsal vertebrae, they become much more substantial. They expand to form a larger pair of canals that sometimes contact at the mid‐line to form a C‐shaped structure. Anteriorly, they remain discontinuous but do invade the vertebral canal (morphology ii). In mid‐ and posterior dorsals, these diverticula become continuous (morphology iii) but narrow substantially mid‐canal. Overall, the paramedullary diverticula of this region are much larger than in the cervical vertebrae. In one individual, there were small, paired canals in the last two sacral vertebrae. Eclectus roratus (eclectus parrot): Paramedullary diverticula are very large in the eclectus parrot, and were observed throughout the cervical and dorsal vertebrae, as well as the posterior sacrals. They are absent in the caudal vertebrae. In the neck, the atlas, axis, and C3 are all encircled by a thick layer of paramedullary diverticula. This morphs into paired, discontinuous diverticula throughout the rest of the cervical series (morphology ii). In the thoracic region, these structures become substantially enlarged, forming a fat, single, tube dorsal to the spinal cord and continuous through the region (morphology iii). This is primarily connected to diverticula branching directly from the lungs, though anteriorly there are also connections to the cervical air sacs. We also noted one dorsal vertebral arch that was pneumatized by paramedullary airways via a foramen in the roof of the vertebral canal. These diverticula disappear in the synsacrum, but briefly reappear as large, paired tubes in the last two sacral vertebrae. 4.1.16 Strigiformes Bubo virginianus (great‐horned owl): In the great‐horned owl, paramedullary airways are present in all cervical vertebrae (except the atlas and axis) as paired, discontinuous tubes that invade the vertebral canal (morphology ii). In the thoracic region, this morphology persists with a moderate reduction in the size of the canals, sometimes merging at the midline to form a C‐shaped canal. They are absent through most sacral vertebrae, but are present as tiny, paired tubes lateral to the spinal cord at the very end of the synsacrum. Paramedullary diverticula are absent in the caudal vertebrae. Notably, the appearance of these structures was very similar between the two adults and individual pin‐feathered chick that were included in the study. 4.1.17 Struthioniformes Struthio camelus (ostrich): A whole‐body CT scan was only available for a downy ostrich chick, though paramedullary diverticula are already prominent and elaborate in the cervical region even at this relatively early ontogenetic stage. Excepting the atlas and axis, they are present in all other vertebrae of this region. Most commonly, they exist as paired tubes dorsal to the cord, which merge to form a C‐shaped canal posteriorly. They are continuous through consecutive vertebrae (morphology iii) and occasionally merge with supravertebral diverticula (morphology iv). Posteriorly, this morphs into discontinuous paired tubes (morphology ii). 
In the dorsal vertebrae, this morphology persists with intermittent connections to the supravertebral diverticula at intervertebral joints. Anteriorly they are paired, ovoid structures. In the mid‐ and posterior dorsals three canals dorsal and lateral to the spinal cord appear. Accipitriformes Buteo jamaicensis (red‐tailed hawk): In the cervical vertebrae of red‐tailed hawks paramedullary diverticula are present as paired tubes dorsal to the spinal cord, within the vertebral canal but discontinuous in consecutive vertebrae (morphology ii; Figure ). These structures persist into the dorsal vertebrae, where they remain paired, discontinuous and dorsal to the spinal cord relative to most vertebrae in this region. They are only variably present in the dorsal vertebrae, and completely absent in the synsacrum. One individual examined had a minute, continuous, single canal dorsal to the spinal cord in the free caudal vertebrae. The other specimen lacked diverticula in the caudal region. Cathartes aura (turkey vulture): Paramedullary diverticula are present in the cervical region as discontinuous, short tubes in the turkey vulture (morphology ii; Figure ). They are paired and occur dorsal to the spinal cord. In the dorsal vertebrae they become more reduced, losing the posterior expansions into the vertebral canal but maintaining anterior expansions. They are present in the anterior dorsals but absent in the posterior section of this region. Paramedullary diverticula are absent in the synsacrum and caudal vertebrae. Apodiformes Calypte anna (Anna's hummingbird): In Anna's hummingbirds, cervical vertebrae exhibit variably continuous (morphology iii) and discontinuous (morphology ii) paramedullary diverticula, which frequently anastomose with supravertebral diverticula (morphology iv). These occur as a single, kidney‐shaped tube dorsal to the spinal cord. Though hummingbird body size is very tiny, diverticula within the vertebral canal in this region are substantial, occupying on average about one third of the area of the space. Such diverticula are absent in all other vertebral regions. Anseriformes Anas platyrhynchos (mallard duck): In this study, mallard ducks were represented only by downy chicks; however, paramedullary diverticula are already present and well‐developed in the cervical region where they appear as paired tubes dorsal to the spinal cord. These structures are substantial, occupying about one third of the area of the vertebral canal. In anterior cervicals, when present, they are discontinuous and exist only at intervertebral joints (morphology i). In the mid‐cervicals, the diverticula are also continuous through consecutive vertebrae. They are absent in all other vertebral regions. Charadriiformes Larus occidentalis (Western gull): In the cervical region of Western gulls, paramedullary diverticula occur as a single, large, C‐shaped tube that encases the spinal cord dorsally and laterally. In one observed individual, they form continuous tubes from vertebra to vertebra in the anterior cervicals. In a second individual they were absent. In the posterior cervicals of both individuals, they become discontinuous, but still enter the vertebral canal (morphology ii). In the dorsal region paramedullary diverticula are small in the anterior vertebrae and absent in the posterior vertebrae. They are completely absent in the synsacrum and caudal region. 
Uria aalge (common murre): In common murres, paramedullary diverticula are large (one fourth to one third of vertebral canal area) and exhibit a range of morphologies across the cervical vertebrae. Anteriorly, they arise as paired tubes dorsal to the spinal cord. In the mid‐cervicals, these grade into a single C‐shaped canal continuous through consecutive vertebrae (morphology iii). This in turn diminishes abruptly, and the posterior cervicals completely lack such diverticula. These structures are also absent in all more posterior vertebrae. Columbiformes Zenaida macroura (mourning dove): In the cervical region of mourning doves, substantial paramedullary diverticula were observed. While absent in the anterior vertebrae, they appear in mid‐ and posterior cervicals as paired tubes dorsal to the cord and oblong in shape (in transverse view). They first appear only at intervertebral joints (morphology i) but soon expand to invade the vertebral canal (morphology ii). Sometimes they become large enough to contact along the dorsal midline, forming a C‐shaped canal. In the anterior dorsal vertebrae, paramedullary diverticula are paired and in contact with the cord only at intervertebral joints (morphology i). In the posterior dorsal vertebrae very large paramedullary diverticula (approximately two‐thirds of area of vertebral canal) were observed as a single, C‐shaped tube dorsal to the spinal cord continuous through consecutive vertebrae (morphology i). Falconiformes Falco sparverius (American kestrel): Paramedullary diverticula are absent in American kestrels. Galliformes Meleagris gallopavo (wild turkey): The paramedullary diverticula in the cervical region of the wild turkey are paired and discontinuous through vertebral bodies. In the more posterior vertebrae of this region, they merge with the supravertebral diverticula at intervertebral joints but remain discontinuous, exhibiting a combination of morphologies ii and iv. In the dorsal region, paramedullary diverticula become larger and merge to form a C‐shaped canal at intervertebral joints, connected directly to the cervical air sacs. These structures invade the canal anteriorly, but only slightly; inside of the canal the size is highly reduced. Small connections to supravertebral diverticula were also observed. In the mid‐dorsals, paramedullary diverticula decrease in size overall and appear as thin, C‐shaped tubes at intervertebral joints, connected directly to large, bilateral diverticula emanating from the lungs directly. Going into the vertebral canal they appear as paired, squished tubes and are discontinuous through consecutive vertebrae (morphology iii). In posterior dorsals these structures become highly reduced, and are absent in the sacral and caudal vertebrae. Gaviiformes Gavia pacifica (Pacific loon): Paramedullary diverticula are absent in Pacific loons. Gavia immer (common loon): Paramedullary diverticula are absent in common loons. Gavia adamsii (yellow‐billed loon): Paramedullary diverticula are absent in yellow‐billed loons. Musophagiformes Musophaga violacea (violet turaco): Paramedullary diverticula in the violet turaco exhibit a range of unusual and elaborate morphologies, and are present in all four vertebral regions. In the cervicals, they are paired, ventral to the spinal cord, and very tiny in the atlas. These merge to form a single, thin, C‐shaped tube ventral to the cord in the axis. 
In other cervical vertebrae, paramedullary diverticula are paired, small, and lateroventral, but shift to a more lateral position and become continuous in the posterior neck (morphology iii). In the dorsal region, these diverticula become larger and discontinuous, persisting as paired tubes now dorsal to the spinal cord. In one vertebra, we observed a unique morphology of two bilateral pairs of small, circular diverticula, forming a total of four separate tubes. These subsequently merge to form a single, C‐shaped continuous tube dorsal to the spinal cord in the mid‐dorsals, and become discontinuous and paired again in posterior dorsals. In the mid‐ and posterior dorsal vertebra, paramedullary diverticula connect directly to the lungs. In the synsacrum, diverticula are not continuous with those from the dorsal vertebrae. Anteriorly, a single, unpaired diverticulum is the first structure apparent, though paired tubes rapidly appear. These canals are flattened and appear both ventral and dorsal to the cord. Moving posteriorly, they eventually expand to meet laterally, fully jacketing the last sacral vertebra. Paramedullary diverticula in this region are connected to perirenal diverticula. The spinal cord is bounded dorsally by a single, very large tube through most of the free caudals, and posteriorly becomes fully jacketed. This morphology persists into the pygostyle, where the spinal cord is very small and completely surrounded by paramedullary diverticula. These also appear to originate from the perirenal diverticula. Passeriformes Aphelocoma californica (scrub jay): In scrub jays, paramedullary diverticula were only observed in the cervical vertebrae. Here, they are paired tubes lateral to the spinal cord present in posterior cervicals only. They are discontinuous and variably enter the vertebral canal to a greater or lesser degree (ranging between morphologies i and ii). In the single adult observed, the canals were uniformly small. In a prefledgling individual also observed, they become much larger moving posteriorly in this region, hinting at possible variation through ontogeny. Melozone crisallis (California towhee): Of the two California towhees included in this study, one had paramedullary diverticula only in the cervical vertebrae. In the other, they were present in the cervicals and first two dorsal vertebrae. In the cervical region, diverticula are paired and continuous anteriorly, expanding to become very large at intervertebral joints and appearing as smaller, squished tubes inside of the canal. Notably, in the anterior cervicals, paramedullary diverticula primarily connect to the supravertebral diverticula and only merge with small, intertransverse diverticula mid‐way through the neck (a mosaic combination of morphologies iii and iv). Eventually, in the posterior cervicals, the intertransverse diverticula become dominant and invade the vertebral canal increasingly less, until paramedullary diverticula are absent in all more posterior vertebrae. Psaltriparus minimus (bushtit): Six bushtits were included in this study. Of these, four were pin‐feathered chicks (very immature), all of which lacked paramedullary diverticula. However, two older individuals (fledglings) had paramedullary diverticula in the cervical vertebrae. These structures do not arise until mid‐neck, where a single, asymmetrical tube first appears before becoming bilaterally paired. In this taxon, the canals are sizeable, occupying about one third of the area of the vertebral canal. 
They invade the canal and are variably continuous between consecutive vertebrae (a combination of morphologies ii and iii). Spinus psaltria (lesser goldfinch): Paramedullary diverticula in the lesser goldfinch are minute and only present in some cervical vertebrae. They are absent in all other vertebral regions. They were observed as small, paired tubes alternately continuous and discontinuous between consecutive vertebrae (morphologies ii and iii) in the posterior‐most cervicals only. Pelicaniformes Nycticorax nycticorax (black‐crowned night heron): In the black‐crowned night heron, paramedullary diverticula are present only in the cervical vertebrae and absent in all other regions. General diverticular morphology around these vertebrae was very elaborate, with particularly prominent intertransverse canals and large paramedullary diverticula. In anterior and mid‐cervicals, the paramedullary diverticula invade the vertebral canal and grade between serially continuous and discontinuous (morphologies ii and iii). In posterior cervicals, they appear as a single C‐shaped tube dorsal to the spinal cord and discontinuous between consecutive vertebrae (morphology ii). There were no notable differences between the fledgling chick and adult. Pelecanus occidentalis (brown pelican): Paramedullary diverticula in the brown pelican are large, elaborate, and present in most of the vertebral column. In the cervical vertebrae, these structures first appear in the anterior cervicals posterior to the atlas and axis. They completely encircle the spinal cord, merge with both the intertransverse and supravertebral diverticula, and are serially continuous throughout the vertebrae of this region (morphology iv). In the dorsal vertebrae, this morphology persists. Within the vertebral canal, these diverticula pneumatize the vertebral body via foramina in the floor of the canal in three consecutive vertebrae. Paramedullary diverticula are absent in posterior dorsals and through most of the synsacrum. They were only observed in the posterior‐most sacral vertebrae, as miniscule, paired tubes. In the free caudal vertebrae, they once again expand to nearly or fully encircle the spinal cord (varying between O‐ and U‐shaped). Phalacrocorax penicillatus (Brandt's cormorant): Paramedullary diverticula are restricted to anterior vertebral regions in Brandt's cormorant. They are present in the cervical vertebrae, and occasionally extend into the anterior‐most dorsal vertebrae. In the cervical region, they first appear as small, paired tubes dorsal to the cord in the mid‐cervicals. These quickly expand to become C‐shaped and extend much further into vertebral canals. However, the size of the canals changes dramatically as they extend, becoming much smaller at the mid‐point of each vertebra. Thus, they appear as barely continuous, connecting to each other at their point of smallest volume (morphology ii grades into morphology iii). In the posterior cervicals, they once again become small, paired, and discontinuous. In one individual observed, they ceased at the end of the cervical series. In another, they appeared as small, paired, and present only at intervertebral joints in the anterior‐most dorsals. Piciformes Dryobates nuttallii (Nuttall's woodpecker): Paramedullary diverticula are absent in Nuttall's woodpeckers. Melanerpes formicivorus (acorn woodpecker): Paramedullary diverticula are absent in the acorn woodpeckers. 
Podicipediformes Aechmophorus occidentalis (Western grebe): Paramedullary diverticula are absent in Western grebes. Aechmophorus clarkii (Clark's grebe): Paramedullary diverticula are absent in Clark's grebes. Procellariiformes Puffinus griseus (sooty shearwater): In the sooty shearwater, paramedullary airways are present in the cervical and dorsal vertebrae, but absent in the more posterior regions. In the cervical vertebrae these structures first appear mid‐neck as a single, C‐shaped tube dorsal and lateral to the spinal cord. They are intermittently continuous and discontinuous (morphologies ii and iii) throughout the rest of the cervical series. In the anterior dorsal vertebrae, paramedullary airways are present as large, paired tubes that frequently merge to form a C‐shaped tube. They are discontinuous and connected to the cervical air sacs. In the mid‐dorsal vertebrae, the size of these diverticula is reduced and they are connected directly to diverticula from the lungs. They are absent in the posterior dorsals. Psittaciformes Pyrrhura molinae (green‐cheeked conure): In the green‐cheeked conure, paramedullary diverticula are present throughout the cervical and dorsal vertebrae, appear variably in the posterior sacral vertebrae, and are absent in all free caudals. In the cervicals, paramedullary diverticula are paired, discontinuous, and quite small. They are only present intermittently throughout this region. In the dorsal vertebrae, they become much more substantial. They expand to form a larger pair of canals that sometimes contact at the mid‐line to form a C‐shaped structure. Anteriorly, they remain discontinuous but do invade the vertebral canal (morphology ii). In mid‐ and posterior dorsals, these diverticula become continuous (morphology iii) but narrow substantially mid‐canal. Overall, the paramedullary diverticula of this region are much larger than in the cervical vertebrae. In one individual, there were small, paired canals in the last two sacral vertebrae. Eclectus roratus (eclectus parrot): Paramedullary diverticula are very large in the eclectus parrot, and were observed throughout the cervical and dorsal vertebrae, as well as the posterior sacrals. They are absent in the caudal vertebrae. In the neck, the atlas, axis, and C3 are all encircled by a thick layer of paramedullary diverticula. This morphs into paired, discontinuous diverticula throughout the rest of the cervical series (morphology ii). In the thoracic region, these structures become substantially enlarged, forming a single, fat tube dorsal to the spinal cord and continuous through the region (morphology iii). This is primarily connected to diverticula branching directly from the lungs, though anteriorly there are also connections to the cervical air sacs. We also noted one dorsal vertebral arch that was pneumatized by paramedullary airways via a foramen in the roof of the vertebral canal. These diverticula disappear in the synsacrum, but briefly reappear as large, paired tubes in the last two sacral vertebrae. Strigiformes Bubo virginianus (great‐horned owl): In the great‐horned owl, paramedullary airways are present in all cervical vertebrae (except the atlas and axis) as paired, discontinuous tubes that invade the vertebral canal (morphology ii). In the thoracic region, this morphology persists with a moderate reduction in the size of the canals, sometimes merging at the midline to form a C‐shaped canal. 
They are absent through most sacral vertebrae, but are present as tiny, paired tubes lateral to the spinal cord at the very end of the synsacrum. Paramedullary diverticula are absent in the caudal vertebrae. Notably, the appearance of these structures was very similar between the two adults and the single pin‐feathered chick that were included in the study. Struthioniformes Struthio camelus (ostrich): A whole‐body CT scan was only available for a downy ostrich chick, though paramedullary diverticula are already prominent and elaborate in the cervical region even at this relatively early ontogenetic stage. Excepting the atlas and axis, they are present in all other vertebrae of this region. Most commonly, they exist as paired tubes dorsal to the cord, which merge to form a C‐shaped canal posteriorly. They are continuous through consecutive vertebrae (morphology iii) and occasionally merge with supravertebral diverticula (morphology iv). Posteriorly, this morphs into discontinuous paired tubes (morphology ii). In the dorsal vertebrae, this morphology persists with intermittent connections to the supravertebral diverticula at intervertebral joints. Anteriorly they are paired, ovoid structures. In the mid‐ and posterior dorsals, three canals dorsal and lateral to the spinal cord appear. DISCUSSION In this study, we find that variation in paramedullary diverticula is much greater than previously described, though notably O'Connor ( ) does make brief mention of morphological disparity among taxa. However, most previous publications (based on observations in individual taxa) describe these structures as diverticular invasions of the vertebral canal that sit as one or two tubes dorsal to the spinal cord. Incorporating data from a phylogenetically broad collection of taxa, we conclude that this definition is not inclusive of the true range of variation. Often intertransverse diverticula give off branches that contact the spinal cord at intervertebral joints, but the extent and source of invasion of the vertebral canal is highly variable. Because the degree of canal extension is not discrete, and instead is seen as a spectrum of varying levels of intrusion, we propose that all variants be considered paramedullary diverticula. Additionally, while diverticula in contact with the spinal cord are often dorsal to it, we observed many cases where these diverticula were lateral or ventral to the cord, and even several instances where the cord was surrounded by diverticula on all sides. Paramedullary diverticula in the cervical region are connected to the cervical air sacs, as are paramedullary diverticula in the anterior dorsal vertebrae. In mid‐ and posterior dorsal vertebrae, paramedullary diverticula arise directly from diverticula branching from the lungs. Sacral and caudal paramedullary diverticula were generally quite rare and small. It was difficult to determine the origin of the structures in all cases, though we did observe in several taxa that they were connected to perirenal diverticula. Thus, we have at least partial answers to two of the questions posed at the start of this study. Firstly, paramedullary airways share connections with both intertransverse diverticula and other extravertebral diverticula (e.g., supravertebral, perirenal). Secondly, data here indicate that cervical air sacs and diverticula often share connections with paramedullary diverticula in anterior dorsal vertebrae (when present), but not in any more posterior regions of the vertebral column. 
The adaptive function of these structures—if one exists—is difficult to ascertain. Their loss in some pursuit divers (loons and grebes) appears adaptive; however, the presence of such diverticula in other pursuit divers (common murres and cormorants) seems to contradict this. We also find that the size, complexity, and presence or absence of paramedullary diverticula varies strongly with vertebral region (Figure ). Across the clades observed, there is a trend toward decreasing size and presence of paramedullary diverticula moving anteroposteriorly through the vertebral column. All birds possessing paramedullary diverticula had them in the cervical region, and in all cases but one (green‐cheeked conures) they are most substantial in this part of the vertebral column. The diverticula persist into the thoracic region in only about half of observed taxa, and are present in the synsacrum and free caudals as minute pockets of air in only a handful of genera included in this study. However, we also note that it was exceedingly rare to observe paramedullary diverticula in the anterior and posterior extremes of the vertebral column. They were found in the atlas and axis of only two genera (the violet turaco and eclectus parrot), and in the pygostyle or caudal vertebrae in the violet turaco, pelican, and (inconsistently) the red‐tailed hawk. We hypothesize that this trend may be related to varying demands of mobility in different vertebral regions. There is a clear correlation between degree of movement and the size (and presence) of paramedullary diverticula; in birds, the greatest range of motion is within the neck, where diverticula associated with the spinal cord are largest and present in the most taxa, while the vertebrae become increasingly fused and modular moving to the posterior end of the body, where paramedullary diverticula are most commonly absent. Here, paramedullary diverticula may be functioning to cushion the spinal cord as it is jostled around within the vertebral canal, acting in a similar way to extra‐dural adipose tissue in the vertebral canal in mammals (Beaujeux et al., ; Megan Sions et al., ; Reina et al., ). However, we also note that the large swelling of the spinal cord within the sacrum, known as the glycogen body (Watterson, ), may simply not leave any space for diverticula to invade the canal in this region. Thus, observed vertebral variation in paramedullary diverticula may be both adaptive and a corollary of spinal cord structure and development. It is also entirely possible that paramedullary diverticula simply fill in any intravertebral space that happens to be available to them, consistent with Witmer's ( ) hypothesis that diverticula are opportunistic pneumatizing machines, though this is a question for future study. Additionally, a function of cushioning the spinal cord during neck movement makes the absence of such diverticula in woodpeckers (the representative piciforms in this study) all the more puzzling. The strong phylogenetic signal for the presence or absence of these structures reported here indicates that phylogenetic affinity is likely a strong determining factor in whether or not a particular taxon has paramedullary airways. The presence or absence of these structures may not be adaptive at all, though taxa that do possess them may be exapting these diverticula to cushion the spinal cord in regions of strong vertebral movement. 
5.1 Skeletal traces of paramedullary diverticula In this survey we identified two osteological correlates of paramedullary diverticula (Figure ): (a) pocked texturing of the bone in the vertebral canal where pneumatic tissue was in contact with diverticula; and (b) pneumatic foramina inside the vertebral canal, which allowed connections between the paramedullary diverticula and the interosseous diverticula that fill the vertebral centra and neural arches in most birds. Across all birds examined, texturing of the bone of the vertebral canal was rare, and always occurred on the roof of the canal. Foramina were more common by comparison, and are most often formed in the dorsolateral aspect of the neural canal (but they can occur on the lateral and even the ventral aspect of the canal). Foramina in the ventral floor of the neural canal appear to be especially common in pelicans, based on observations in both dry skeletons and CT scans of multiple individuals. Importantly, both foramina and texturing provide clues regarding the position of the paramedullary diverticula relative to the spinal cord. For instance, our preliminary data suggest that a foramen in the floor of the canal results from pneumatization via diverticula entirely jacketing the spinal cord, or would at least imply ventrally located diverticula. Paramedullary diverticula do not always produce texturing or foramina in the walls of the neural canal. In fact, these seem to be the exception rather than the rule in most birds, with the possible exceptions of pelicans, albatrosses, ostriches, and rheas. It is probably not a coincidence that those are all large‐bodied birds with postcranial skeletons that are hyperpneumatic (sensu O'Connor, ). To date we have only observed one instance of pneumatic foramina inside the neural canal of sacral vertebrae (the violet turaco), and no instances of foramina inside the canals of caudal vertebrae. All other occurrences of pneumatic foramina inside the canal were seen in presacral vertebrae. It is tempting to infer a biological basis, given that the cervical and dorsal vertebrae are the regions of the postcranial skeleton that are most commonly pneumatized in extant birds (O'Connor, ), extinct non‐avian theropods (Benson et al., ), pterosaurs (Claessens et al., ), and sauropods (Wedel, ). The apparent restriction of neural canal foramina to the presacral vertebrae could be an artifact of sampling; however, given that such foramina appear to be rare outside of the aforementioned hyperpneumatic taxa, and that paramedullary diverticula do not exist in the sacral and free caudal vertebrae of most birds, even a random distribution of neural canal foramina would lead to most examples occurring in cervical and dorsal vertebrae, since that is where the paramedullary diverticula themselves are most commonly located. Our own sampling of dry, osteological specimens is very limited, more of an exploration of the morphologies realized in extant birds than a systematic survey that could elucidate statistical regularities. This is an area in which anyone with access to an osteological collection of extant birds could make important contributions with relatively little effort. In sum, although paramedullary diverticula can form texturing or pneumatic foramina in the walls of the neural canal, more commonly they do not leave any diagnostic skeletal trace at the level of gross visual examination—in other words, they are cryptic diverticula (sensu Wedel & Taylor, ). 
Even in the absence of gross osteological correlates, paramedullary diverticula might still leave distinct histological traces, such as the “pneumosteum” identified by Lambertz et al. ( ); this is another area that is ripe for further investigation.
CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH In this study we have cast a broader net, phylogenetically speaking, than any previous work on paramedullary airways, but there is still much to be done. We were only able to assess a handful of individuals at most for any given species, and for several species only a single individual. Our sampling of species within genera and genera within larger clades is likewise limited. Finally, with so few individuals in the study, our ontogenetic sampling is poor. Further studies addressing the ontogenetic development of paramedullary diverticula, their intraspecific variation, and their clade‐level diversity would all be welcome advances. Much interesting work remains to be done on the basic anatomy of paramedullary diverticula. On a fine scale, it would be useful to know if the pneumatic epithelium that lines the diverticula is firmly attached, either to periosteum that lines the neural canal or to the dura mater, or if the paramedullary diverticula are potentially mobile within the neural canal. As discussed above, further documenting the connections of the paramedullary diverticula to specific air sacs or specific portions of the lungs might elucidate both the development and evolution of this system. Furthermore, the physiological implications of paramedullary diverticula, if any, are completely unknown. Is there a possibility for active circulation of air through an anastomosing system of paramedullary diverticula? Could these diverticula function in either insulating or cooling the central nervous system? These questions await investigation by clever physiologists. Turning to osteological correlates, surveying the neural canals of skeletonized birds in museum collections for pneumatic sculpturing or foramina could potentially yield much useful information for a relatively small investment of effort. A better foundation of skeletal evidence will be needed to determine if the extent of paramedullary diverticula is correlated with degree of skeletal pneumatization. It may be possible to map the distribution of paramedullary diverticula using bone histology—even determining if this is possible in a chicken or a turkey would be a valuable contribution. The quest to document paramedullary diverticula need not be limited to extant archosaurs. The presence of pneumatic fossae or foramina in the neural canal of an Eastern moa, Emeus sp. (Figure ), demonstrates the potential for finding evidence of paramedullary diverticula in extinct birds. If present, skeletal traces of paramedullary diverticula should be particularly easy to identify in the vertebrae of large flightless (extinct ratites, phorusrhacids, Diatryma) and flighted (pelagornithids, teratorns) birds. In addition, an avian‐like respiratory system is known to have been present in several lineages of extinct archosaurs, including saurischian dinosaurs (O'Connor & Claessens, ; Wedel, , ) and pterosaurs (Claessens et al., ). 
Extensive pneumatization of the vertebral column in these clades shows that vertebral diverticula were present (Benson et al., ; Schwarz et al., ; Wedel, ), and the clustering of pneumatic features around the neural canal hints that paramedullary diverticula may have been present even when there is no direct evidence for them (Schwarz & Fritsch, ; Taylor & Wedel, ). To date, the only described evidence for paramedullary diverticula specifically in non‐avian archosaurs consists of pneumatic foramina connecting the neural canals to pneumatic chambers in cervical vertebrae of the brachiosaurid dinosaur Giraffatitan (Schwarz & Fritsch, ), and in a dorsal vertebra of an unnamed saltasaurid sauropod (Aureliano et al., ). Such foramina are not easy to detect, because the neural canals of fossil vertebrates are rarely completely prepared out (i.e., with the rock matrix removed from the canal). The canals can be difficult to examine even when they are completely prepared, and the number of vertebrae that have been CT scanned is small. Furthermore, as discussed above, even in most birds in which they occur, paramedullary diverticula do not leave foramina or other diagnostic skeletal traces. If the same was true in extinct archosaurs, small sample sizes may hinder the quest for more examples of paramedullary diverticula in the fossil record, but this is a line of inquiry that should be pursued nonetheless. As this study shows, paramedullary diverticula in birds exhibit a much broader range of morphologies than previously reported or suspected. Sampling issues notwithstanding, we predict that if more effort is directed to finding evidence of paramedullary diverticula in fossil taxa, more will be found, increasing both the number and morphological variety of known examples. We hope that this study serves as a foundation and an enticement for further studies of this most unusual anatomical system, in both extinct and extant archosaurs. The authors have no conflicts of interest. Jessie Atterholt: Conceptualization (supporting); data curation (lead); formal analysis (equal); funding acquisition (lead); investigation (equal); methodology (equal); project administration (lead); resources (equal); software (equal); supervision (equal); writing – original draft (lead); writing – review and editing (equal). Mathew Wedel: Conceptualization (lead); data curation (supporting); formal analysis (equal); funding acquisition (equal); investigation (equal); methodology (equal); project administration (supporting); resources (equal); software (equal); supervision (equal); writing – original draft (supporting); writing – review and editing (equal).
Large variation in radiation therapy fractionation for multiple myeloma in Australia
5b1eb8e2-1bdc-4133-b433-4ec9a7bd56dd
10084224
Internal Medicine[mh]
INTRODUCTION Bone involvement is common in patients with multiple myeloma (MM), with up to 80% of patients with newly diagnosed MM presenting with osteolytic lesions and a high risk of skeletal‐related events, such as pathological fractures and spinal cord compression. Radiation therapy (RT) is an effective treatment modality for symptom management of these bony lesions, and evidence‐based modeling estimated that approximately two in five patients with MM should receive at least one course of RT over the course of their disease. RT fractionation for bone metastases is an area that has been extensively investigated in the past. Meta‐analyses of multiple randomized trials have consistently shown that single fraction RT (SFRT) is as effective as multifraction RT for symptom management of uncomplicated bone metastases. However, few of these studies have specifically examined the MM cohort. A randomized prospective trial comparing 30 Gy in 10 fractions to 8 Gy in 1 fraction for symptomatic bone lesions in 101 patients with MM showed no differences in symptom response. In the setting of MM‐related bone disease with spinal cord compression, in a large international multicenter retrospective pooled analysis, Rades et al. reported that long‐course RT (10–20 fractions) resulted in better functional improvement compared to short‐course RT (1–5 fractions). Based on the available evidence, several international guidelines and recommendations specifically on the management of MM‐related bone disease have been developed by the International Myeloma Working Group and the International Lymphoma Radiation Oncology Group (ILROG). The ILROG consensus guidelines recommend that 8 Gy in 1 fraction, 20 Gy in 5 fractions, or 30 Gy in 10 fractions are all reasonable options for symptom control, but 8 Gy in 1 fraction is preferred for patients with poor prognosis. In situations where there is spinal cord compression or bulky disease where durable control is desired, however, 30 Gy in 10–15 fractions is preferred. Despite this evidence and these guidelines, the actual pattern of practice of RT fractionation for MM in Australia is unclear. Earlier Victorian statewide population‐based studies evaluated the use of SFRT for the management of bone metastases, but these were restricted to patients with solid tumors, excluding patients with hematological cancers such as MM. The proportion of patients with MM was unclear in a separate population‐based study in the state of New South Wales, Australia. The aim of this study is to evaluate the RT fractionation schedule used in the management of MM‐related bone disease in Victoria, and to identify factors associated with multifraction RT. MATERIALS AND METHODS 2.1 Study population This study comprised a population‐based cohort of patients with MM (ICD10: C90.0) who received RT between 2012 and 2017, as captured in the statewide Victorian Radiotherapy Minimum Data Set (VRMDS). VRMDS is an administrative dataset maintained by the Victorian Department of Health. Patients with plasma cell leukemia (ICD10: C90.1), extramedullary plasmacytoma (ICD10: C90.2), and solitary plasmacytoma (ICD10: C90.3) were excluded. We only included RT courses where the target site of RT was documented as bone. Data from VRMDS were linked with the Victorian Cancer Registry and the Registry of Births, Deaths and Marriages to capture data on death. 
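As a minimal sketch of this cohort selection, the code below applies the stated inclusion and exclusion criteria to a course‐level extract; the file name and column names (icd10, target_site, rt_start_year, patient_id) are hypothetical placeholders rather than the actual VRMDS layout.

```python
# Illustrative cohort selection under the assumptions stated above (hypothetical columns).
import pandas as pd

vrmds = pd.read_csv("vrmds_rt_courses.csv")  # hypothetical extract: one row per RT course

cohort = vrmds[
    (vrmds["icd10"] == "C90.0")                    # multiple myeloma; C90.1-C90.3 are thereby excluded
    & (vrmds["target_site"] == "bone")             # RT courses with bone documented as the target site
    & vrmds["rt_start_year"].between(2012, 2017)   # study period
].copy()

print(f"{len(cohort)} RT courses in {cohort['patient_id'].nunique()} patients")
```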
We further analyzed a subset of RT delivered at the end of life (EOL), defined as palliative RT courses of at least one fraction delivered within 90 days of death. The study was approved by our institutional Health Human Research Ethics Committee (LNR/18/34). 2.2 Primary outcomes and covariables The primary outcome was the different RT fractionations used, categorized into four ordinal groups: SFRT, 2–5 fractions, 6–10 fractions, and >10 fractions. Information on radiation dose was not available in VRMDS for the study period. Factors evaluated for association with different fractionations were: age at time of RT, sex, site of treated lesion (spine or non‐spine), socioeconomic status, remoteness of residence (major cities or regional/remote), treatment center type (public or private) and location (metropolitan or regional), and year of RT. Socioeconomic status was determined based on residential postcode using the Socio‐Economic Indexes for Areas Index of Relative Socio‐Economic Disadvantage from Australian Bureau of Statistics data (i.e., 2011 Australian census data for patients treated in 2012 and 2013, and 2016 Australian census data for patients treated in 2014–2017); this was further subdivided into quintiles based on the Victorian general population. The area of residence was also dichotomized as major city or regional/remote using the Australian Statistical Geography Standard remoteness structure. It is important to note that VRMDS does not capture information on MM‐related prognostic factors, nor on the systemic therapy that patients received. 2.3 Statistical analyses Variables associated with different RT fractionations were evaluated using Pearson's chi‐squared test for categorical variables, and the Kruskal–Wallis test for continuous variables. The Cochran–Armitage test for trend was used to evaluate the changes in different fractionation use over time. Multinomial logistic regression was used to assess the factors associated with different fractionations, with SFRT as the reference group. For the subset of RT courses delivered at the EOL, multivariate logistic regression was used to evaluate factors associated with SFRT. All multivariable analyses employed robust standard errors, clustered on patient identifiers to allow for multiple courses of RT given to the same patient. A two‐sided p‐value of < .05 was considered to indicate statistical significance. All statistical analyses were performed using STATA/SE 17 (STATA Corp, College Station, TX, USA). 
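The Cochran–Armitage test for trend named above can be computed directly from counts of courses per year; the self‐contained sketch below (written in Python, whereas the study itself used Stata) shows one way to do so, and the counts passed to it are illustrative placeholders rather than study data.

```python
# Minimal Cochran-Armitage test for a linear trend in proportions across ordered groups.
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores=None):
    """Two-sided trend test for events/totals observed in ordered groups (e.g., calendar years)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    t = np.arange(len(totals), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    n = totals.sum()
    p_bar = events.sum() / n                                  # pooled proportion under the null
    trend = np.sum(t * (events - totals * p_bar))             # trend statistic
    var_trend = p_bar * (1 - p_bar) * (np.sum(totals * t ** 2) - np.sum(totals * t) ** 2 / n)
    z = trend / np.sqrt(var_trend)
    return z, 2 * norm.sf(abs(z))

# Illustrative counts only: 2-5 fraction courses out of all courses, by year (2012-2017).
z, p = cochran_armitage_trend([70, 74, 80, 86, 93, 99], [146, 150, 154, 158, 162, 166])
print(f"z = {z:.2f}, two-sided p-trend = {p:.4f}")
```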
RESULTS A total of 967 courses of RT were delivered in 623 patients for MM between 2012 and 2017. The mean age at RT was 69.7 (SD = 11.7). Approximately two‐thirds of the RT target sites were the spine. The use of advanced RT techniques, such as intensity‐modulated RT, volumetric‐modulated arc therapy, or stereotactic RT, was rare (4%). The majority of RT courses were delivered in metropolitan centers (79%), while just over half were delivered in public centers (56%). 3.1 RT fractionation Approximately one in five RT courses were SFRT, and half were delivered over 2–5 fractions (Table ). There was a higher proportion of SFRT use in patients aged under 60 years (25%) and above 80 years (24%). There was a lower proportion of SFRT use for spine (15%) compared to non‐spine (24%) sites of disease (p = .002). There were no significant differences in fractionation use between the different socioeconomic quintiles (p = .2). Of the patients treated in private institutions, there were differences in RT fractionation use between patients who lived in major cities versus regional/remote areas—20% of RT delivered in those who lived in regional/remote areas was SFRT compared to 5% of RT delivered in those who lived in major cities (p < .001) (Table ). 
Of patients treated in metropolitan centers, SFRT use was lower in those who lived in the major cities (16%) compared to those who lived in regional/remote areas (25%) (p = .04) (Table ). 3.2 Trend in practice Overall, there was no significant change in SFRT use over time (p‐trend = .5) (Table ). There was, however, a marked increase in the use of 2–5 fraction RT (from 48% in 2012 to 60% in 2017; p‐trend < .001), with a corresponding decrease in the use of 6–10 fraction RT (from 26% in 2012 to 20% in 2017; p‐trend = .003) and >10 fraction RT (from 9% in 2012 to 4% in 2017; p‐trend = .05). This change in fractionation over time was observed when stratified by target site of RT, area of residence, and treatment centers (Figure ). For RT to non‐spine sites, the most marked changes in fractionation were observed for 6–10 fractions, decreasing from 27% in 2012 to 14% in 2017 (p‐trend = .012) (Figure ). For RT to the spine, there was a marked increase in the use of 2–5 fractions, from 50% in 2012 to 62% in 2017 (p < .001) (Figure ). When stratified by area of residence, the increase in the use of 2–5 fractions was observed in patients who lived in both major cities (from 28% in 2012 to 54% in 2017; p = .002) (Figure ) and regional/remote areas (from 49% in 2012 to 72% in 2017; p = .003) (Figure ). There were no statistically significant changes over time in fractionation in public institutions (Figure ). However, in private institutions, there was a marked increase in the use of 2–5 fractions (from 38% in 2012 to 62% in 2017; p‐trend < .001), and a corresponding decrease in the use of 6–10 fractions (from 42% in 2012 to 29% in 2017; p‐trend < .001) (Figure ). In metropolitan centers, there was an increase in the use of 2–5 fractions (from 43% in 2012 to 56% in 2017; p‐trend < .001) with a corresponding decrease in 6–10 fractions (from 32% in 2012 to 25% in 2017; p‐trend = .017) (Figure ). In regional centers, the use of RT fractionation varied over time, but the overall trend for the different RT fractionations over the 6‐year period was not statistically significant (Figure ). 3.3 Multivariate analyses In multivariate analyses, patient age, target site of RT, area of residence, and treatment centers (type and location) were independently associated with the use of multifraction RT compared to SFRT, after adjusting for the year of treatment (Table ). Compared to patients aged under 60 years, those aged 60–69 were 2.2 times (95%CI = 1.2–4.0; p = .01) more likely to have 2–5 fraction RT (than SFRT), while patients aged above 80 were less likely (OR = .46; 95%CI = .21–.99; p = .05) to have 6–10 fraction RT (than SFRT). Treatment to the spine was more likely to be multifraction RT than SFRT – 2.2 times (95%CI = 1.5–3.3; p < .001) more likely to be 2–5 fractions, and 1.9 times (95%CI = 1.2–3.0; p = .01) more likely to be 6–10 fractions. Compared to patients who lived in major cities, RT delivered to patients who lived in regional or remote areas was less likely to be multifraction RT than SFRT – 47% (95%CI = 2–71%; p = .04) relatively less likely to be 2–5 fractions, and 67% (95%CI = 31–84%; p = .003) less likely to be 6–10 fractions. Treatment in private institutions was most strongly associated with multifraction RT use, compared to public institutions – 3.3 times (95%CI = 2.0–5.4; p < .001) more likely to be 2–5 fractions, 7.8 times (95%CI = 4.5–13.6; p < .001) more likely to be 6–10 fractions, and 5 times (95%CI = 2.2–11.2; p < .001) more likely to be >10 fractions. 
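To make the effect sizes above concrete, the short sketch below shows how an odds ratio and its 95% confidence interval are derived from a regression coefficient and its standard error; the coefficient and standard error are hypothetical values chosen only to give output of a similar magnitude to the estimates reported above, not figures from this study.

```python
# Converting a hypothetical log-odds coefficient and standard error into an OR with a 95% CI.
import math

beta, se = 0.79, 0.31                       # hypothetical coefficient and standard error
odds_ratio = math.exp(beta)                 # point estimate
ci_low = math.exp(beta - 1.96 * se)         # lower 95% confidence limit
ci_high = math.exp(beta + 1.96 * se)        # upper 95% confidence limit
print(f"OR = {odds_ratio:.2f} (95%CI = {ci_low:.2f}-{ci_high:.2f})")
```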
3.4 EOL cohort There were 122 courses of RT delivered to 59 patients at the EOL, of which only one‐quarter were SFRT (Table ). SFRT was more likely to be given closer to death, comprising 18%, 14%, and 33% of RT courses delivered within 2–3 months, 1–2 months, and <1 month of death, respectively (p = .08). The use of SFRT at the EOL was markedly lower in private institutions (7%) compared to public institutions (41%) (p < .001). In multivariate analyses, treatment in private institutions was the only factor independently associated with SFRT use (OR = .04; 95%CI = .004–.33; p = .003). 
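A minimal sketch of this EOL analysis is given below in Python/statsmodels (the study itself used Stata, and its primary four‐category comparison used multinomial logistic regression); all column names (sfrt, age_group, spine, remote, private, metro, rt_year, patient_id) are hypothetical placeholders for the analysis file.

```python
# Logistic regression for SFRT at the EOL with robust standard errors clustered on patients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

eol = pd.read_csv("rt_courses_eol.csv")   # hypothetical file: one row per EOL RT course

model = smf.logit(
    "sfrt ~ C(age_group) + C(spine) + C(remote) + C(private) + C(metro) + C(rt_year)",
    data=eol,
)
# Cluster-robust covariance accounts for multiple courses given to the same patient.
result = model.fit(cov_type="cluster", cov_kwds={"groups": eol["patient_id"]})

# Odds ratios with 95% confidence intervals, as presented in the results tables.
or_table = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```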
DISCUSSION This is, to our knowledge, the first Australian population‐based study to evaluate the pattern of RT fractionation for MM‐related bone disease. We found that SFRT made up only a minority of RT courses, consistent with findings for RT for bone metastases in solid tumors. A major strength of this study is the use of population‐based administrative data, which capture all episodes of RT delivered in Victoria, in both public and private institutions. Thus, the data reflect our statewide practice, allowing us to evaluate any sociodemographic and institutional variations in care, which is not possible using single‐institution studies. The most common fractionation used in our cohort was 2–5 fractions, and its use increased over the study period. In contrast, the use of more extended fractionation of 6–10 fractions decreased. This is in contrast to findings from the only other published population‐based series in the literature, using data from the U.S. National Cancer Database (NCDB) between 2004 and 2014, in which more than half of the RT courses for MM were delivered in 6–10 fractions. The use of SFRT in our cohort remained low at 18% over the study period, which is similar to that reported for the management of bone metastases from solid tumors in Victoria, but was still much higher than the 2% SFRT use for MM reported in the NCDB cohort. The number of prescribed RT fractions is often guided by the patient's MM disease trajectory and overall prognosis. 
However, one of the major limitations of this study is the lack of information on some important patient factors (e.g., ECOG performance status), tumor factors (e.g., Revised International Staging System prognostic factors for MM), and treatment factors (e.g., the use of systemic therapy) in administrative databases such as VRMDS. Hence, we are not able to evaluate the appropriateness of RT fractionation use in each individual patient – a young patient with good performance status early in the course of disease, with multiple systemic therapy options available, may warrant higher‐dose multifraction RT to provide more durable control, in contrast to a frail patient who is refractory to multiple lines of systemic therapy at the EOL. Nonetheless, it is important for radiation oncologists to stay abreast of advances in systemic therapy options for MM, as new combination systemic therapies (e.g., carfilzomib, daratumumab, and dexamethasone) have been shown to significantly improve outcomes, even in the setting of refractory MM, and this may influence decision making about the RT fractionation prescribed. There should be less ambiguity in RT fractionation recommendations for patients with limited or poor prognosis at the EOL – ILROG guidelines recommend single‐fraction 8 Gy as the preferred schedule for patients with poor prognosis who require RT. In the subset of RT courses delivered within 3 months of death (i.e., at the EOL), overall SFRT use in our cohort still remained low at 25%. The underutilization of SFRT at the EOL has been previously reported in the management of bone metastases in solid tumors. This could reflect either a general reluctance to use SFRT even at the EOL, or clinicians' overestimation of patients' likely survival. The RT fractionation used also varied depending on the target site of treatment, with lower use of SFRT for spinal disease. ILROG guidelines recommend the use of multifraction RT of 30 Gy in 10–15 fractions in situations where there is epidural disease with spinal cord compression. One limitation of our study is that we do not have detailed clinical information to determine whether the treated spinal disease was associated with spinal instability, pathological fractures requiring surgical interventions, or spinal cord compression, which may justify the need for multifraction RT. We are also unable to account for reirradiation using VRMDS data, which is especially important in spinal disease, given the RT tolerance dose of the spinal cord. Given that VRMDS data were only available from 2012 onward, we were not able to confirm if a patient had received RT to the same site prior to 2012. Even when the same target site was irradiated (e.g., spine) on more than one occasion since 2012, we do not have sufficient information to confirm whether it was reirradiation of the same vertebral level or irradiation of another, previously untreated level. We also evaluated institutional and demographic factors associated with RT fractionation use. One of the most striking findings is that treatment in private institutions is the strongest predictor of multifraction RT use. The higher proportion of multifraction RT use persisted even at the EOL and after adjusting for patients' age and target site of RT. The most likely explanation for the differences in RT fractionation use between institutions may be related to remuneration. 
In the current Australian healthcare setting, the Medicare Benefits Schedule (MBS) reimbursement for RT is based on the number of fractions delivered – MBS reimbursement for SFRT, 5‐fraction RT, and 10‐fraction RT delivered using a 3D‐conformal technique in Australia was AUD 1320.35–1948.80, AUD 1821.75–2947.35, and AUD 2448.50–4497.60, respectively, depending on the number of organs‐at‐risk and the number of RT fields involved. However, we also could not discount other possible explanations for the observed variations in practice, including differences in the patient populations seen in public versus private institutions, and possibly resource and capacity constraints in public institutions for the delivery of multifraction RT. We observed no differences in RT fractionation use by patient socioeconomic status, but there were differences depending on patients' area of residence – those living in regional or remote areas were less likely to be treated with multifraction RT. This may reflect clinicians' consideration and accommodation of patients' preferences to reduce the number of treatment visits, given the long travel distances to and from RT facilities. While remoteness of residence is an indirect measure of access to RT facilities, there is now an increasing number of RT facilities being established in regional areas of Australia. A better measure of access would be the travel distance to the nearest RT facility, but these data were not available in our study. This has been assessed in earlier studies, which found that increasing distance to the nearest RT facility was associated with a lower likelihood of receiving RT. Apart from the limitations highlighted above, another inherent limitation of the use of an administrative dataset is its dependence on the accuracy of reporting from each institution, and we cannot discount the possibility of misclassification of variables. This is especially critical for potential miscoding of the diagnosis between MM and solitary plasmacytoma, which would influence the recommended RT fractionation – solitary plasmacytoma is often treated with a higher dose and more protracted fractionation. CONCLUSION Using an Australian administrative dataset, we observed increasing use of shorter fractionated RT schedules (2–5 fractions) for MM‐related bone disease between 2012 and 2017 in a population‐based cohort of patients. However, the use of SFRT remained low, even at the EOL. We also observed large variations in RT fractionation use depending on institutional type, with SFRT much more commonly used in public centers. This is an important pattern‐of‐practice study for MM in Australia, as it provides a baseline benchmark of contemporary practice (which, to date, is not available in any published literature) against which future quality improvement initiatives to reduce unwarranted variations in practice can be measured. With advances in systemic therapy for MM and as patients with MM live longer, we anticipate that the pattern of practice of RT for MM‐related bone disease will continue to evolve, not only with respect to RT fractionation but also in the use of advanced RT techniques. The study was approved by the Austin Health Human Research Ethics Committee (LNR/18/34).
Access to dental care barriers and poor clinical oral health in Australian regional populations
72c51902-10dc-4e3b-a381-b1a13e7b5b5c
10084231
Dental[mh]
INTRODUCTION Untreated dental caries in permanent teeth is the most common untreated disease affecting the global population (34.1%). Despite the use of oral health prevention strategies in both developed and developing nations, there has been less than a 4% reduction in the prevalence of untreated dental caries over nearly 30 years [1990: 31,407 cases per 100,000 people to 2017: 30,129 cases per 100,000 people]. The proportion of Australians aged 15 years and over with complete tooth loss, an inadequate natural dentition or who have dentures decreased between 1987–88 and 2004–06 and again between 2004–06 and 2017–18. The dental caries experience of adults (DMFT [Decayed, Missing and Filled Teeth] Score) similarly decreased over the same time period. Oral health is poorer in rural than in metropolitan areas of Australia. Clinical oral health improved by a similar amount between 1987–88 and 2004–06 inside and outside Australia's capital cities, so the differential in oral health remained the same. This suggested that the poorer rural oral health was not being adequately managed. A possible reason for poor oral health in rural areas might be poorer access to dental care. In both 2004–06 and 2017–18, lower rates of visiting at least once a year and of usually attending for a check‐up were observed for those living outside of capital cities compared with those in capital cities, those with Year 10 or less schooling compared to those with Year 11 or more schooling, individuals with other or no qualifications compared to those with a degree or above, those eligible for public dental care compared to those ineligible, and uninsured compared to insured persons. Other reasons for poor oral health in rural areas include reduced access to fluoridated drinking water, high‐risk behaviours such as smoking and alcohol drinking, and usually visiting a dentist for a problem rather than a check‐up. Major cities have the highest number of practising dentists per 100,000 population (64.6) and Remote/Very remote areas the lowest (25.9), and this workforce imbalance has not improved since 2013. This paper aimed to examine the associations between oral health and behavioural factors, demographic and socio‐demographic factors, periodontal disease risk indicators, and financial and access barriers to dental care in three Australian regional areas in 2017–18. It sought to determine whether adult oral health outside Australia's major cities was poorer than within them and, if not, what factors could be involved. It will inform policy makers, administrators and dental practitioners about which factors influence regional and remote oral health. METHODOLOGY Data from the latest National Study of Adult Oral Health (NSAOH 2017‐18) were analysed. Study participants were selected using a multi‐stage probability sampling design that began with the sampling of postcodes within states/territories in Australia. A sampling frame of postcodes was created that listed all postcodes designated as in scope for the study. Through consultation with state and territory dental health services, some remote and very remote postcodes were excluded due to the costs and complexities involved in undertaking oral examinations in these postcodes. The postcode sampling frame was stratified by state and territory and further stratified into greater capital city and rest of state/territory regions. Individuals within selected postcodes were then selected by the Australian Government Department of Human Services from the Medicare database. 
Participants were given the option of completing the questionnaire either online or via a computer‐assisted telephone interview. Participants were asked a series of questions about their oral health and dental service use. Participants who completed an interview and who reported having one or more of their own natural teeth were invited to undergo a standardised oral examination. Examinations were carried out by state/territory dental practitioners. Statistical analyses were carried out on the 5,022 adults who were examined. Information on the NSAOH 2017‐18 study aims and methods and study participation and weighting can be found elsewhere. This study followed the STROBE Statement for reporting cross‐sectional studies. The NSAOH 2017‐18 project was reviewed and approved by The University of Adelaide's Human Research Ethics Committee and ethical approval to conduct examinations in each jurisdiction was sought under the National Mutual Acceptance system. Three regional levels (Major city, Inner regional, Outer regional & Remote/Very remote) were used for analysis rather than two (Inside and outside Major cities) because the ratio of dentists to population in each region varied and having three regions gives a greater gradient of rurality. Putative confounders were selected that have been shown in previous studies to be associated with oral health. These were subdivided into the categories of socio‐demographic characteristics, periodontal disease risk factors and preventive dental behaviours. Periodontal disease risk factors were included because they might help explain the loss of teeth. Socio‐demographic characteristics included age (15‐<45, 45‐<60, 60+ years), sex, annual household income (≤$AU30k, >$AU30k‐<60k, $AU60k+), country of birth (Australia/Other), education level (≤Year 10, Year 11+), Aboriginal and/or Torres Strait Islander (Yes/No) and employment status (Employed/Not employed). Periodontal disease risk factors were smoking (Current/past/never smoked) and diabetes (Yes/No). Preventive dental behaviours were frequency of toothbrushing (2+, <2 day) and flossing (1+ per day, <1 per day) and the usual reason for dental visiting (Check‐up, problem). Financial barriers to dental care were difficulty in paying a $200 dental bill (none, hardly any, a little, a lot of difficulty) and having avoided or delayed dental treatment because of cost (Yes, No). The access to dental care variables were the usual reason for dental visits (problem/check‐up), the average time between dental visits (1+ times/year, ≤ once a year), and eligibility for public dental care (Yes, No). Clinical oral health was measured by the prevalence of dental caries and periodontitis. The former was measured by the mean number per participant of decayed teeth, missing teeth (excluding, for those under 45 years of age, teeth missing for non‐pathological reasons) and teeth filled due to pathology; the latter by the US Centers for Disease Control and Prevention (CDC) and American Academy of Periodontology (AAP) periodontal classification, dichotomized into none/mild and moderate/severe. Dental caries experience was indicated by the mean number of decayed, missing and filled teeth (DMFT score) per participant. To estimate the mean number of teeth missing due to dental decay and periodontal (gum) disease, the reason for missing teeth was assessed in people aged less than 45 years at the time of examination. 
This meant that teeth which were missing for reasons other than decay or gum disease could be excluded from the analysis. In older people, the assumption was made that missing teeth had been extracted for dental disease. The dependent variables were compared by region, socio‐demographic characteristics, periodontal disease risk factors, preventive dental behaviours, and the barriers and access to dental care variables. Data analysis was performed with all data weighted to ensure the representativeness of the target population. Bivariate analysis was undertaken to identify and describe associations between the outcome variables and the main explanatory variables and to find confounders. The bivariate analysis was done as cross‐tabulations with chi‐square tests for categorical variables and as t‐tests for continuous variables. Collinearity was assessed by calculating variance inflation factors from regression analysis. Variables that were statistically associated with both the explanatory variable (regional location) and at least one of the outcome variables were defined as confounders. A multiple variable analysis with the clinical dental disease measures as dependent variables was then undertaken. RESULTS Of the 15,731 people interviewed, 5,022 were examined. Just over half the participants were aged between 15 and less than 45 years, there was an even split of the sexes, annual household income was approximately evenly split between the three categories, and just under 70% had a Year 11 education or higher. A third of the participants were born outside Australia, under 2% reported having Aboriginal or Torres Strait Islander ancestry and under 40% were unemployed (Table ). Among the periodontal disease risk factors, 6% reported having been told by a doctor that they had diabetes and just under one‐tenth were current smokers. Over two‐thirds of participants brushed their teeth at least twice a day, just over a fifth flossed their teeth at least once a day, and over 60% of the participants reported usually visiting a dentist for a check‐up. With the barriers and access to dental care variables, just under half visited a dentist once a year or less, over half had a little or a lot of difficulty paying a $200 dental bill, under half avoided or delayed dental treatment because of cost, and just under a third were eligible for public dental care. The mean DMFT score was 11.20. Inner regional areas had a higher proportion of people in both the oldest and youngest age groups than the other two regions (Table ). There were more people with the lowest household income level in inner regional than in major city areas. People in outer regional, remote and very remote areas had lower education levels and were less likely to be born outside Australia than people in major city areas. There was a higher proportion of Aboriginal or Torres Strait Islanders outside than inside major city areas. People in inner regional areas were more likely to be previous smokers than people in major cities, and people in outer regional, remote and very remote areas were less likely to usually visit a dentist for a check‐up, or to visit a dentist two or more times a year, than people in major city areas. There was no significant difference between the three regions in the proportion of people with respect to sex, diabetes, employment status, the frequency of toothbrushing or dental flossing, difficulty in paying a $200 dental bill, or avoiding or delaying dental treatment because of cost.
Hence, these variables were not included in the multiple variable analyses. A higher proportion of people in inner regional areas than in major city areas were eligible for public dental care, and there were fewer dentists per 100,000 people in outer regional, remote and very remote areas than in the major cities. The DMFT score and the mean number of missing teeth were significantly higher outside than inside major city areas. However, there was no significant difference in the prevalence of periodontal disease between the three regional areas. For this reason, the influence of periodontal disease was not further reported in this paper. Even though there was also no significant difference in the mean number of decayed or filled teeth between the regional areas, they were included in the multiple variable analysis to discover how they influenced the DMFT score. Not surprisingly, as the variables were selected on the basis of whether they had been shown in previous studies to be associated with oral health, all the socio‐demographic characteristics, periodontal disease risk factors, and preventive dental behaviours were significantly associated with at least one of the dental caries indicators (Table ). Age, annual household income, education, country of birth, being an Aboriginal or Torres Strait Islander, smoking, the usual reason for dental visiting, the average time between dental visits and eligibility for public dental care were significantly associated with both the regional location and at least one of the outcome variables and were included in the multivariable analysis. In the multivariable analysis, there was no significant association between regional location and any of the four clinical dental caries variables (Table ). Age was associated with all four clinical dental caries variables, and annual household income with the decayed and missing teeth coefficients, but not with filled teeth or the DMFT score. Education level was associated with the decayed and filled teeth coefficients. Country of birth and being an Aboriginal and/or Torres Strait Islander were not associated with any of the dental caries indicators. Current smoking was associated with more decayed and missing teeth, which resulted in a higher DMFT score. With the access to dental care variables, usually visiting a dentist for a problem was associated with higher dental caries in all four multiple variable models than usually visiting for a dental check‐up. Less frequent dental visiting was associated with a lower DMFT score and fewer filled teeth. Eligibility for public dental care was associated with a higher DMFT score and fewer filled teeth. DISCUSSION The results suggest that tackling differences between the three regions in social demographics such as income and education level, as well as smoking behaviour, will improve the oral health of people outside Australian major cities. Importantly, they also indicate that an emphasis should be on encouraging people outside the major cities to visit their dentist for a check‐up rather than waiting until they have a dental problem. These results are important because they suggest that tackling social demographics and smoking prevalence might do more to lower the dental caries experience of people outside Australian major cities in the long term than the expensive option of increasing the number of dentists. This is a generational change.
Improving access to education outside major city areas might flow on to improved incomes, reduced smoking rates, and more dental visits for check‐ups rather than for a problem. This does not mean that having more dentists outside major cities is not a good idea, particularly in the shorter term. The first reason is that more dentists will allow earlier detection of dental diseases, and treatment can be provided when the disease is at an early stage. The second reason is that more dental practitioners will give those in rural areas more time to provide preventive practices such as fluoride applications and fissure sealants. Previous studies have found that rural dentists were less likely to supply preventive services than urban dentists, and this might be due to having a higher number of patients who need problem‐based care, resulting in less available time to provide non‐urgent dental care. This study found that being eligible for public dental care was associated with more missing teeth. This suggests that increasing the size of the public dental workforce outside Australia's major cities, so that it can provide more preventive and conservative treatment rather than extractions as well as targeting high‐risk groups, might be a good strategy. Current smoking being associated with more missing teeth can be explained by the negative effect of smoking on periodontal health. Factors that have not been examined in this paper are the differing lifetime exposure to water fluoridation, and the differing use of professionally applied topical fluoride, between people living inside and outside Australia's major cities. A previous paper using 2004–06 data found there was a greater mean lifetime fluoridation exposure in state capital cities than outside capital cities. Fluoridation of drinking water remains the most effective and socially equitable means of achieving community‐wide exposure to the caries prevention effects of fluoride. Every effort should be made to increase access to water fluoridation for all Australians, not just for people living outside Australia's major cities. Another limitation was that the remoteness level might be too coarse a measure. Not all postcodes were sampled in NSAOH 2017‐18, and so using anything finer than greater capital city/rest of state might carry risks for representativeness and small cell sizes. The number of dentists per 100,000 people was not available, and it would have added strength to this paper to know its influence on oral health. Whenever using a cross‐sectional survey, one must always be careful not to infer cause and effect. The number of Indigenous participants that had examinations was small, limiting the conclusions that can be made about this variable. Less frequent dental visiting might not only be an indicator of access but also an indicator of utilisation, which might influence the results. The strength of this study is the large sample size. RECOMMENDATIONS FOR RESEARCH Qualitative research should be undertaken to assess regional attitudinal variations in oral health and access to dental services. Research is required into the relative urban and rural changes in clinical and self‐perceived oral health between the Australian National Adult Surveys of 2004–06 and 2017–18, as well as the effect of access to water fluoridation on the 2017–18 results. Research is also required into the factors determining poorer child oral health in rural compared to urban areas.
RECOMMENDATION FOR POLICY The findings of this paper suggest that poorer oral health outside Australia's major cities was due to the social determinants of household income, education level and eligibility for public dental care, and the behaviours of smoking and the usual reason for, and frequency of, dental visiting. To improve oral health outside Australia's major cities, governments and policymakers should focus on ways to improve rural household incomes and education levels, reduce the prevalence of smoking and encourage dental visits for check‐ups rather than for problems. RECOMMENDATION FOR PRACTICE Continuing with campaigns and legislation aimed at reducing smoking rates, as well as encouraging people to usually visit a dentist for a check‐up as opposed to waiting until they have a problem, will improve clinical oral health in both urban and rural areas. Local action for water fluoridation by rural dentists and community leaders will reduce dental caries significantly. CONCLUSIONS Poorer oral health outside major cities was associated with age and the social determinants of household income and education level. It was also associated with the behaviours of smoking and the usual reason for, and frequency of, dental visiting. Crocombe LA: Contributed to conception, design, data acquisition and interpretation, drafted and critically revised the manuscript. Chrisopoulos S: Contributed to critically revising the manuscript and supported the data acquisition process. Kapellas K: Contributed to critically reviewing the manuscript. Brennan D: Contributed to critically reviewing the manuscript. Luzzi L: Contributed to critically reviewing the manuscript. Khan S: Contributed to conception, design, data acquisition and interpretation, performed all the data analysis and critically revised the manuscript. None.
Are adults with autism receiving regular preventive dental services?
c8cd7935-b728-4e8c-ad63-8056bb961d89
10084249
Dental[mh]
INTRODUCTION Autism spectrum disorder (ASD) is a complex and pervasive neurodevelopmental condition that can include communication deficits, behavioral issues, and intellectual impairment. As a consequence, ASD severely impacts social development. ASD is more common in males than in females and it is usually diagnosed between the ages of 2 and 11 years. It is a prevalent condition, and current research shows that ASD affects approximately 2.5% of US children and adolescents. For persons with ASD, accessing dental care requires overcoming many obstacles, including communicating effectively with care providers, obtaining consent from guardians, and coping with sensory challenges in a new environment, and these obstacles may prevent a person from being able to cope effectively with receiving dental care. Careful consideration of the needs and barriers related to ASD can help dental practitioners personalize their treatment plans. However, many dental practitioners are not knowledgeable about the special needs of individuals with ASD and may feel uncomfortable when providing dental care for these patients. This lack of confidence may be, in part, caused by a lack of research‐derived data about the dental needs of persons with ASD. There is currently more empirical evidence to support appropriate medical and dental care for children with ASD but limited research regarding dental care for adults with ASD. The lack of data precludes a more comprehensive understanding of the dental needs of adults with autism. As a consequence, specific, customized strategies for providing dental care for adults with autism still need to be developed, and the lack of such protocols may reduce the confidence of practitioners in providing dental care for these patients and limit the effectiveness of treating persons with ASD. Considering the lack of research‐based evidence available to guide dental care provision for adult patients with autism, more investigative efforts are necessary to understand the barriers for adults with autism to access dental care and what is needed to enable them to overcome those barriers. This study will investigate the frequency of preventive dental care among adults with autism and explore what factors are associated with frequent preventive care. Clarifying the frequency of regular preventive dental care, and what variables are associated with it, can help identify barriers and enable practitioners to improve access to preventive dental care for a larger number of individuals with autism. MATERIAL AND METHODS After IRB approval was obtained (IRB ID#202006350), a query was performed in the University of Iowa College of Dentistry and Dental Clinics electronic health records for patients matching three inclusion criteria: being 18 years old or older at the time of the first appointment, having self‐reported autism in their health history questionnaires, and having at least one preventive dental procedure recorded. De‐identified data was collected from these records and provided to the researchers by a member of the information technology team in an Excel spreadsheet. The retrieved records included information about each patient's age, gender, body mass index (BMI), mental health, heart disease, xerostomia, diabetes, number of medications, type of preventive procedures, and number of preventive procedures.
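As an illustration of how such an extract might be screened, the sketch below applies the three inclusion criteria to a small invented table using pandas. The column names and values are assumptions for the example only and do not correspond to the actual fields of the electronic health record.

import pandas as pd

# Assumed column names; a tiny invented extract stands in for the real spreadsheet.
records = pd.DataFrame({
    "age_at_first_visit": [17, 22, 35, 41],
    "self_reported_autism": [True, True, False, True],
    "n_preventive_procedures": [2, 0, 3, 5],
})

eligible = records[
    (records["age_at_first_visit"] >= 18)          # 18 years or older at the first appointment
    & records["self_reported_autism"]              # autism self-reported in the health history
    & (records["n_preventive_procedures"] >= 1)    # at least one preventive procedure recorded
]
print(len(eligible), "of", len(records), "records meet the inclusion criteria")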
The following ADA codes were used to typify preventive procedures in this study: D1110 (Prophylaxis—Adult), D1110.1 (Prophylaxis—Adult Collegiate Recall), D1110.3 (Pumice Polish), D1110.4 (Prophy—Adult—No Charge—Freshman Clinic), D1110.5 (Ultrasonic Scaling), D1206 (Fluoride Varnish), D1206.4 (Fluoride varnish—No Charge—Freshman Clinic), D1310 (Nutritional counseling), D1330 (Oral Hygiene Instruction—Complex), D1330.1 (Oral Hygiene Instruction/Simple), D4346 (Scaling in presence of gingival inflammation—full mouth), D4355 (Full mouth debridement to enable comp eval and diagnosis), D4910 (Periodontal maintenance), D1354 (Caries arresting med—Silver Diamine Fluoride), D1354.1 (Caries arresting med—Silver Diamine Fluoride‐no cost), and D1355 (Caries prev medicament application‐per tooth—not fluorides). These variables were chosen among all variables available in the existing database to provide a description of the sample demographics (age and gender), selected health history variables that had been previously linked to autism and/or oral health problems, and information to determine the frequency of preventive treatment. An “undetermined” category was used for participants who had their first dental visit in 2019, 2020, or 2021. Due to the COVID‐19 pandemic, it was not possible to tell for some of them whether they would return consistently for visits. Univariate and bivariate analyses were performed. Then, two different approaches were used to investigate what factors are associated with regular preventive dental care, as follows. First, bivariate associations between having consistent preventive dental visits (at least one per year) and the covariates of interest were determined. Chi‐square tests (or Fisher's exact tests) were used when analyzing categorical variables and Wilcoxon rank sum tests were used when analyzing continuous variables. No adjustments were made for multiple comparisons. As no bivariate associations were found, no multivariable modeling was attempted for associations between having consistent preventive dental visits and the covariates of interest. Second, bivariate associations between the total number of preventive dental visits and the covariates of interest were determined. Spearman correlation tests were used when analyzing continuous covariates and Wilcoxon rank sum (or Kruskal‐Wallis for more than two groups) tests were used when analyzing categorical covariates. Again, no adjustments were made for multiple comparisons. To further analyze the associations between covariates of interest and the total number of dental visits, Poisson regression was considered. To account for the different lengths of time participants visited the College of Dentistry, an offset was included in the model. The offset is the log of the years since the participant first visited the College of Dentistry. After fitting the full model and testing the residuals, it was clear that overdispersion was present in the data. To account for this overdispersion, a quasipoisson model was fit to the data. The quasipoisson model allows the overdispersion to be estimated and accounted for in the modeling. Variable selection was conducted using backward variable selection. First, all variables that had a p‐value < .25 in the bivariate analysis were entered into the starting model. The full model included age at first visit, heart disease, and number of medications. After fitting the full model, variables with p‐values greater than .05 were removed, beginning with the largest.
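A quasi-Poisson model with an offset of this kind could be fitted along the following lines. This is a minimal sketch rather than the authors' code: the variable names and the small illustrative data frame are invented, and statsmodels is used here only because its Pearson chi-square dispersion estimate (scale="X2") gives a quasi-Poisson-style fit.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented illustrative data: visit counts, observation time and two covariates.
df = pd.DataFrame({
    "n_preventive_visits": [2, 7, 1, 12, 4, 9, 3, 6],
    "years_observed": [1.5, 4.0, 1.0, 6.2, 2.5, 5.0, 2.0, 3.5],
    "age_at_first_visit": [24, 41, 19, 55, 33, 47, 28, 38],
    "n_medications": [3, 9, 1, 14, 6, 11, 2, 7],
})

# Poisson GLM with log(years observed) as the offset, so the model describes
# the rate of preventive visits per year rather than the raw count.
model = smf.glm(
    "n_preventive_visits ~ age_at_first_visit + n_medications",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["years_observed"]),
)

# scale="X2" estimates the dispersion from the Pearson chi-square statistic,
# which is the usual way to obtain a quasi-Poisson fit in statsmodels.
result = model.fit(scale="X2")
print(result.summary())
print(np.exp(result.params))  # rate ratios, e.g. per additional medication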
RESULTS Summary statistics for the covariates of interest are presented in Table . The sample was composed of 119 individuals with an average age of 30.8 years (±12.0). The majority were men (67%) and had Medicaid (58%). Average BMI was very high (42.8 ± 24.7), the prevalences of diabetes and heart disease were 16% and 34%, respectively, and a large proportion of individuals (86%) reported mental health problems. The reported use of tobacco was 16%, alcohol use was 19%, and recreational drug use 6.8%. Dry mouth was reported by 32%, and the average number of medications was 7.2 (±5.5). The average number of preventive dental visits was 7.9 (±10.6) with an average of 1.77 visits per year, and the number of patients with consistent preventive dental visits (at least one visit per year) was 42 (35%). Group comparisons were made to determine differences in having consistent preventive dental visits and the covariates of interest. An independent t‐test revealed no statistically significant associations (Table ). Similar results were found when checking the Spearman rank coefficients between the total number of preventive dental visits and the covariates of interest (Table ), except for a weak positive association with the number of medications (Rho = .203; p‐value = .027). In the quasipoisson regression model used to further analyze the associations between covariates of interest and the total number of preventive dental visits, the number of medications was the only variable retained in the final model (Table ). The estimated coefficient suggests that taking one additional medication is associated with a minimal increase of 1.04 (1.01, 1.07) times in the rate of dental visits. DISCUSSION The proportion of adults with autism in this sample who received consistent preventive dental care (a minimum of one visit per year) was only 35%, or only about one in every three patients. The majority of the sample was male (67%), which is consistent with the prevalence of ASD. This result is even more dire considering this sample is composed of patients who have had at least one preventive procedure recorded. In a previous study of 244 persons with autism receiving dental care in a dental school, only about half had received preventive treatment. One should also note that these samples are from patients seeking dental care, and therefore do not take into account those adults with autism not actively seeking dental care. Surprisingly, our investigation of possible explanatory variables, which could help to understand what factors are associated with consistent preventive dental visits, showed that none of the available variables was associated with consistent preventive dental visits in a statistically significant way. Similarly, only the number of medications was associated with the number of preventive dental visits, and this association was weak. In the quasipoisson regression model used to further analyze the associations between covariates of interest and the total number of dental visits, the only variable retained in the final model was the number of medications. However, its effect was so small that it cannot be considered clinically significant. One could assume that taking more medications means having more understanding of health needs and therefore more medical care, leading to more frequent dental visits. However, it is also fair to assume an opposing hypothesis, that people who are more medicated may be more ill, and therefore less able to visit the dental office frequently.
Nevertheless, it was especially surprising that well‐known enablers, such as dental insurance, or barriers, such as mental health problems, were not associated with the frequency of preventive dental visits. In part, the lack of statistically or clinically significant associations can be explained by the small sample size. The sample size is a major limitation of this study. Another limitation is the retrospective nature of the investigation and the restrictions imposed by the electronic health record itself, which limited the number of explanatory variables that could be used. Other possible explanatory variables previously reported to be barriers for persons with autism to access dental care, such as caregivers' health and dental literacy, poverty ratio, household education, and non‐English language, could not be assessed as these variables are absent in the available data. Another limitation to be considered is that these patients might have visited another dental office during the observation period, although we feel that is unlikely since the College of Dentistry is one of the few providers in the state that provide dental care to adults with autism. One reason it is important to expand the sample size beyond the state of Iowa in future studies is that other states have different Medicaid policies. Therefore, future research to expand this analysis should include larger, national samples using electronic health record consortia and/or invest more resources in prospective, multicenter or private practice network projects to identify specific barriers and enablers related to accessing preventive dental care for adults with autism. From the practice management perspective, it is important to highlight that the College of Dentistry has an automated recall system that sends recall cards to patients and makes automated calls confirming patients' appointments the day before they are scheduled. Missed appointments are usually followed up by the provider and front desk team. Although this management practice seems appropriate, the results of this study show the need for a careful review of the follow‐up system. CONCLUSION In this sample of adults with autism receiving preventive care in a dental school, only about one in every three adults with autism received at least one preventive dental procedure per year. No significant barriers or enablers were found among the available explanatory variables. The authors declare no conflicts of interest. The authors declare that the study conforms to recognized ethical standards and was approved by the University of Iowa Institutional Review Board (IRB ID#202006350).
Altruism in death: Attitudes to body and organ donation in Australian students
10955273-79d0-4265-a80c-82979ebc5af7
10084255
Anatomy[mh]
Body and organ donation are profoundly valuable post‐mortem altruistic acts. Their value to the community is substantial and enduring. Body donation enables medical education, surgical and other post‐graduate clinical training, and defense, pharmaceutical, road safety, and medical device research, with future healthcare workers benefitting from the use of donated bodies (cadavers) in studies of anatomy and the anatomical sciences (O'Neill, ; Roach, ; Delotte et al., ; Cornwall & Stringer, ; Jones & King, ). Organ donation saves lives, with the number of eligible organ transplant recipients greatly exceeding the number of donations each year both in Australia (Donate Life, ) and internationally (Arshad et al., ). In Australia, unlike in some other jurisdictions (Riederer, ; Habicht et al., ), all cadavers used in education and research are donated. Would‐be donors sign up for their local (university‐based) body donation program prior to death. The consent process includes explicit engagement of next‐of‐kin and family and does not involve remuneration. Likewise, organ donation in Australia operates on an “ opt‐in ” system with explicit next‐of‐kin consent required at the time of death for a donation to occur, irrespective of the deceased's election to donate prior to death. Little has been reported about the support of the Australian community for body donation. Support for organ donation is high, with surveys reporting consistent levels of 85%–90% of those Australians surveyed in favor of organ donation (Irving et al., , ; Sharpe et al., ). However, actual donor registrations are much lower with only one in three Australians on the donor register (Sharpe et al., ; Donate Life, ), and the unique and limited circumstances in which organ donation is possible further restrict the number of available organs. The Australian community is highly multi‐cultural and it is known that both body and organ donation are less common among some ethnic groups (Khanal et al., ; Donate Life, ), despite public health campaigns aimed at addressing misconceptions about donation. The reasons for these disparities are complex; notwithstanding the Australian commitment to the exclusive use of donated bodies and organs, for many, the use of cadavers and organ donation are challenging, and ethnicity, beliefs, and experiences may influence responses to their use in education, research, and healthcare. However, the disparity in donation rates has tangible consequences in healthcare and education; students from non‐Caucasian backgrounds who are training as healthcare workers rarely, if ever, encounter donor bodies that reflect their ethnicity or culture. Thus, their capacity to discuss donation with family and friends and/or with peers and colleagues may be hindered (Curtis et al., ). Similarly, acute shortages of donated organs from donors of non‐Caucasian ethnicities may delay or compromise the chances of transplantation because of less than ideal tissue and genotype matching (Khanal et al., ). The critical role of health professionals in informing and supporting families about, and through, the donation process has been documented in a number of studies (Williams et al., ; Dubois & Anderson, ; Zambudio et al., ; Demir et al., ; Sellers et al., ). Reluctance to raise the issue of organ donation, particularly with distressed family members, is a significant barrier to donation occurring (Dubois & Anderson, ; Potter et al., ). 
There is also evidence to suggest that donation and discussion of organ donation are more common among individuals who know others who have donated, are recipients of organs, or are on a transplant waiting list (Sung et al., ; Phillipson et al., ; Sellers et al., ). A New Zealand study of registered body donors also reported that knowing someone who had donated or registered to donate their body, was a significant factor in the likelihood of being registered as a body donor (Fennell & Jones, ). A study of 6861 potential organ donors in 2012 across 68 Australian hospitals (Pilcher et al., ) reported that discussion by health care professionals with family about donation occurred in 98% of donor cases compared to only 16% of non‐donors. It was apparent from this study and others, that successful organ donation was facilitated by knowledge of the wishes of the deceased to participate in organ donation prior to death, the presence of dedicated donation counseling and support staff, and a willingness of all those involved with the care of the patient to discuss donation and its implications (Irving et al., ; Neate et al., ; Pilcher et al., ; Marck et al., ). Health care workers have a critical role in facilitating discussion about body and organ donation, and in advocating for their importance in medical care, education, and research (Rikker & White, ; Ghorbani et al., ; Irving et al., ; Keel et al., ; Robert et al., ). Their capacity to undertake effective advocacy for donation, and to approach families with confidence and sensitivity, is in no small part dependent upon them having accurate, up‐to‐date, and nuanced knowledge about body and organ donation programs and processes (Hyde & Chambers, ; Marck et al., ; Keel et al., ). Understanding the ethnic, cultural, and religious context in which such attitudes are formed and held is also important. Despite continued debate about the need for gross anatomical dissection in medical education (Winkelman, ; Ghosh, ), international research consistently reports that exposure of students to gross anatomical dissection, whether through active dissection or instruction using prosected specimens, is beneficial (Pawlina & Lachman, ; Winkelman, ; Sugand et al., ; Estai & Bunt, ; Ghosh, , ). Students themselves value the experience of cadaveric dissection and consider it integral to their learning (Quince et al., ; Mwachaka, et al., ; Flack & Nicholson, ; Alamneh, ; Asante et al., ; Bahşi et al., ), despite their reservations or anxiety about their first encounter with the dead body, or other unpleasant aspects of dissection (Quince et al., ; Dissabandra et al., ; Allison et al., ). Exposure to cadavers and anatomical dissection is also regarded as an important professional development milestone for students, assisting them to develop broader professional competencies including skills in self‐reflection about death and grieving, managing challenging and distressing situations, teamwork, and communication (Pawlina et al., ; Drake et al., ; Böckers et al., ; Johnson et al., ; Ghosh, , ; Allison et al., ). It also provides an opportunity for students to consider the altruistic act of donation and to develop an understanding of the legal and ethical framework in which donation occurs, areas where medical student knowledge has been reported to be deficient (Bardell et al., ; Essman & Thornton, ; Goz et al., ; Ghahovac et al., ; Tontus et al., ; Ciliberti et al., ; Robert et al., ). 
These skills are integral to their capacity to treat patients with a terminal illness, communicate with grieving families, and initiate and manage conversations about body, organ, and other tissue donation. However, exposure to gross anatomical dissection has been suggested as a contributing factor in the development of negative attitudes to the use of the body after death, including for body donation, organ transplantation, and education and research using donated human tissue (Cahill & Ettarh, , ; Alexander et al., ; Galic et al., ; Viljoen & Stephens, ). Across many countries, medical and health sciences students exposed to gross anatomical dissection reported diminished support for body donation, with students differentiating between support for the public to donate (recognizing the importance of the use of donated bodies in medical education) and reluctance to support donation by family or themselves. Cahill and Ettarh ( ) reported a negative effect after completion of courses in anatomical dissection on a subgroup of students' willingness to donate their own organs or support their families to donate organs. Other international studies (Fennell & Jones, ; McClea & Stringer, ; Cornwall et al., ; Anyanwu & Obikili, ; Rokade & Gaikawad, ; Anyanwu et al., ; Green et al., ; Galic et al., ; Dagcioglu et al., ; Viljoen & Stephens, ) have also reported that medical students and health professionals, including doctors, express less willingness to donate their bodies, despite advocating for the public to do so, and that support for organ donation consistently exceeds support for body donation. Student attitudes to body donation have not been found to be related to education level (Viljoen & Stephens, ), but have been reported to be related to the length of exposure to anatomical dissection or gross anatomy (Alexander et al., ; Galic et al., ; Vijoen & Stephens, 2021). Students' and health professionals' personal or religious beliefs have also been identified as significant factors mediating their support for body and organ donation (Galic et al., ; De Gama et al., ; Dagcioglu et al., ; Naidoo et al., ) with those who identified as atheists or agnostic more likely to support self‐body donation compared to their peers who stated that they practiced a religion (Galic et al., ; Naidoo et al., ). Ignorance of body and organ donation processes has been cited as a significant barrier to donation both in Australia (Neate et al., ; Marck et al., ) and internationally (Hu & Hang, ; Mwachaka et al., ; Dagcioglu et al., ). Undertaking studies of anatomy is near‐universal for healthcare professionals in Australia and thus may provide an opportunity to introduce, inform and educate students and future healthcare workers about the legal and ethical framework in which both body and organ donation occurs. The multi‐cultural nature of Australian society and thus the health workforce, and the people whose health care they will deliver would benefit from broadening knowledge about, and support for, altruistic human tissue donation across all cultural, ethnic, and religious groups. However, it is critical that the attitudes and beliefs of students from all backgrounds about human tissue donation and its use are sensitively considered and accommodated in any programs of instruction aimed at improving knowledge and understanding of donation. This is an area where knowledge is still scant. 
This study, therefore, seeks to explore attitudes to body and organ donation of students who undertake courses in anatomical sciences at The University of Sydney. This student cohort encompasses local and international students from many different countries and cultural backgrounds. Drawing on the existing literature, the authors hypothesize that continued and more intense exposure to gross anatomy will be associated with more negative views about body donation than occurs with shorter and less intense exposure. It was also hypothesized that students will be more willing to support body and organ donation by the public than they are to support donation of their own bodies and organs or those of their family members, and that their ethnicity and cultural beliefs will be reflected in their reasons for, or against, supporting donation. The study was conducted at The University of Sydney with participants recruited from enrolled students. The research protocol was reviewed and received ethics approval from The University of Sydney Human Research Ethics Committee, protocol approval number 2017/917. Survey instrument A thirty‐one item questionnaire was developed. The questionnaire had three sections: (1) demographic information including age, identified gender (if any), language spoken at home, practice of religion, home country, previous anatomy study; (2) attitudes to body donation, for themselves, their family and the public; (3) attitudes to organ donation for themselves, their family and the public. All questions were optional. The questionnaire was piloted on a small group (26 participants) of undergraduate and postgraduate students and academics. Question wording and options were adjusted to reflect feedback. Cronbach's alpha was calculated and was found to be 0.849 indicating a good level of reliability. The questionnaire is available as Supporting Information. Support for body donation was examined via answers to three questions: (1) Would you be willing to donate your body to The University of Sydney for research and education?, (2) Would you support a member of your family to donate their body?, and (3) Would you support a member of the public to donate their body? Support for organ donation was examined via answers to three questions: (1) Would you donate your own organs for transplantation?, (2) Would you support a family member to donate their organs for transplantation?, and (3) Would you support a member of the public to donate their organs for transplantation? For both body and organ donation questions, participants were able to select either “Yes,” “No,” and “Have not thought about it” in response to all questions. Participants were asked to choose from a list of six possible reasons to support body donation and a list of 14 reasons against supporting it. There were eight reasons to select from to support organ donation, and nine reasons against supporting it. Participants were able to make multiple choices from each list, provide a free text response (other reason), and indicate reasons both for and against donation. A printed paper version of the questionnaire was administered to students recruited through the (then) Discipline of Anatomy and Histology at the University of Sydney. An online version of the questionnaire, hosted on the LimeSurvey (GmbH, v2 2006, Hamburg: Germany) platform, was administered to Mathematics students. 
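For readers unfamiliar with the reliability statistic reported above, Cronbach's alpha can be computed from a respondents-by-items matrix as in the brief sketch below. The response data shown are invented for illustration only and are not drawn from the study questionnaire.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array with one row per respondent and one column per item.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses: 6 respondents answering 4 items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(responses), 3))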
Participant recruitment Recruitment of participants to complete the paper‐based questionnaire occurred in 2018 and 2019, and the online survey in 2019. Participation was voluntary. Undergraduate students comprised four groups—Anatomy Experience; Health Sciences; Medical Sciences; Mathematics. Students who were completing a unit of study in the Discipline of Anatomy and Histology (the first three groups) were approached in a timetabled class. Mathematics students were first‐year students enrolled in a mathematics unit via the School of Mathematics and Statistics. The first‐year coordinator emailed them an explanatory invitation, written by the authors, to participate in the study. The participant information statement and survey link were attached. A reminder email was sent 2 weeks later and then a week before the survey link closed. The online version of the questionnaire remained open for 6 weeks. Postgraduate students/trainees were approached at the commencement of their first gross anatomy laboratory session. These students included Medical and Dentistry students, and Postgraduate trainees undertaking a Master's degree. Questionnaire responses were collated across all courses and both years. Cohort groups for analyses In total, 2056 participants were included in the analysis. These comprised 1923 students who completed paper questionnaires across 2018 (929) and 2019 (994), and 133 Mathematics students. The largest group of respondents were undergraduate students ( N = 1447; 70.4%). Response rates (calculated using class enrolment numbers rather than attendance numbers) averaged 58% for the paper survey, and 12.2% for the online survey. Students were grouped into five cohorts, on the basis of their exposure to gross anatomical dissection and previous anatomy study/experience: Mathematics ( n = 133) and Anatomy Experience ( n = 172) students (total N = 305), a minority of whom (14.8%) reported having studied anatomy previously, with more than half stating this occurred in high school (in biology and physical education subjects), and whose current exposure was limited at most to one elective, two‐hour laboratory Anatomy Experience class using plastic models and specimens of main organs. The Anatomy Experience course was offered to first‐year students in Nursing, Science, and Arts. The majority of the students electing to take the course were from Nursing. Health Sciences students ( N = 279), comprise those undertaking courses in Sports and Exercise Science; Physiotherapy; Speech Pathology; Occupational Therapy; and Osteology, of whom 37.3% reported having studied anatomy previously, predominantly at the tertiary level. Anatomy lectures and laboratory classes are core course requirements and include guided instruction in organ systems related to their degree specialty, for example, the limb musculature for physiotherapy students. Medical Sciences students ( N = 863), comprise those students who have chosen to complete courses in anatomy, and who are likely to be pursuing majors in anatomy and/or neuroscience, of whom 36.7% had studied anatomy previously, predominantly at tertiary level.
These students' courses include two years of compulsory system‐based anatomical studies in lectures and laboratories, with a strong focus on self‐guided learning of anatomical structures, their relationship to normal and disease states in humans, and clinical reasoning. Postgraduate trainees ( N = 54), 89.1% of whom had previously studied anatomy at university in Australia/overseas, and who had enrolled in intensive courses in gross anatomical dissection as a compulsory component of their professional qualifications in surgery, or in intensive courses using prosected cadaveric specimens as a compulsory component of their professional qualifications in critical care. The majority of these students were medical graduates. Statistical analysis Survey responses were compiled for analysis using SPSS statistical package, version 26 (IBM Corp., Armonk, NY). Descriptive statistics (percentages) were used to examine the proportions of participants who were supportive of body/organ donation for themselves, their family, and the public. As all variables investigated were categorical, except for age, cross‐tab chi‐squared statistics ( χ 2 ) were used to examine differences in demographic characteristics, exposure to anatomical dissection, and attitudes to body and organ donation. A significance level of α < 0.001 was adopted to account for the large sample correlations and to ensure significant relationships were identified conservatively.
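A cross-tab chi-squared comparison of the kind described above can be run as in the following sketch. The contingency table here is illustrative only and does not reproduce the study's counts; the row and column labels are assumptions for the example.

import numpy as np
from scipy.stats import chi2_contingency

# Illustrative counts only (rows: practices a religion yes/no;
# columns: would donate own body yes/no/have not thought about it).
table = np.array([
    [120, 310, 340],
    [230, 220, 360],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3g}")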
Cohort characteristics The demographic characteristics of the cohort are described in Table . The majority of the cohort comprised local students, and just over a quarter were international students. Approximately 13% of these students were from Asian countries, predominantly China, and 6.5% were North American (predominantly Canadian). Two of the local students identified as Indigenous. Of the students who spoke a language besides English at home, 30% spoke an Asian language (predominantly Chinese, Mandarin, and/or Cantonese), and 5% each spoke another European language or a Sub‐Continental language (predominantly Hindi). Nearly 20% of the local students spoke an Asian language at home, as did 17% of the North American students. The most commonly practiced religion was Christianity (65%), then Buddhism (12%), with 10% identifying as followers of Islam. Thirty percent of local students who reported practicing a religion identified as Christian, less than 4% as Muslims and less than 3% as Buddhists. The majority of Muslim students came from the Middle East (56%) and the Sub‐Continent (22%). Seventy percent of those students who practiced Buddhism came from Asian countries and 11% from the Sub‐Continent. Body donation Support for body donation and the percentage of participants who self‐reported registration as body donor is summarized in Table . Support for self‐donation of bodies, across all cohort groups, was low with higher support for a family member or member of the public electing to donate their body. Medical and dentistry students were the most likely to be unwilling to donate their own body.
Previous exposure to anatomy was associated with reduced support for self‐donation ( p < 0.0001) and with an increased likelihood that participants had thought about body donation for themselves, their family, and the public. This effect was particularly striking in relation to support for the public to donate their bodies: the majority of postgraduate students and trainees had thought about the question, and none of the postgraduate trainees responded negatively to supporting the public to donate their body, with only one postgraduate medical/dentistry student indicating they would not support such an election (all comparisons significant at p < 0.0001). There were no differences across any of the cohort groups in their willingness to support a family member to donate their body. Religious practice was found to have quite substantial effects on student support for body donation. Support for self‐donation of the body was lowest in those students who practiced a religion at home, with 38.5% of students who practiced a religion not supporting self‐donation compared to 26.0% of non‐practicing participants ( p < 0.0005). Just over 40% of both groups had not thought about this issue. More than three‐quarters (76%) of participants who did not practice a religion at home would support a family member to donate their body, whereas only two‐thirds (66.6%) of those who practiced a religion would. The religious‐practicing students were more likely to answer “No” to giving such support (13.1 compared to 8.4% for non‐practicing students) and more likely not to have thought about it ( p < 0.0005). Religious practice did not, however, affect support for a member of the public electing to donate their body. Local students (29%) were also more likely than international students (19.2%) to support self‐donation of the body ( p < 0.0005) and donation by a family member (77.5 local students versus 57.8% international students; p < 0.0005). Support for public donation of a body was much higher in local students (91.1%) when compared to international students (73%; p < 0.0001). Local students were approximately three times more likely to have thought about body donation for themselves, their family members, and the public ( p < 0.0001). Registration as a body donor Few participants reported being registered to donate their body—67 in total, 43 of whom identified as local students. The accuracy of self‐reported registration for these participants was questionable as more than half nominated registration via the National Organ Donor Registry (which is not possible) and a further quarter could not recall where they registered as a body donor. Only one local student nominated a body donor program. Of the 24 international students, approximately half were North American, predominantly Canadian, who nominated established province‐related programs. Of the remaining international students, Singaporean students nominated their local legislation ( Human Organ Transplant Act [HOTA], 2012 ), and its embedded capacity to make an election to donate your body rather than be included in the automatic organ donation program on death through the companion Medical (Therapy, Education and Research Act (MERTA) opt‐in scheme. The self‐reports of registration as a body donor by international students appear to be more reliable than those of local students. Reasons to support body donation Just over two‐thirds (67.8%) of the participants provided reasons for why they would donate their bodies (Figure ). 
Neither age nor identified gender was related to the reasons chosen to support body donation. Participants from English‐speaking backgrounds (46%) were more likely than their non‐English‐speaking peers (35.2%) to choose the reason "your body has no value once you are dead" ( p < 0.0005). Participants who stated that they practiced a religion at home were almost half as likely (25.3%) as their non‐practicing peers (48%) to choose this as a reason to donate their body ( p < 0.0005). Participants with previous anatomy exposure (80.5%) were more likely to select "education of future students" as a reason to support body donation than those without such exposure (71.1%; p < 0.0005) and this reason was also selected more often by postgraduate students and trainees ( p < 0.0005) when comparing responses across the cohorts. Reasons not to support body donation Fifty‐two percent of the study cohort (1076) chose reasons why they would not support body donation (Figure ). The most commonly cited "other" reason was the wish to donate their body for organ transplantation, nominated most often by postgraduate students and trainees, indicating their greater awareness that body donation generally precludes organ donation (the brain sometimes being an exception). Neither age nor identified gender was found to be associated with reasons for unwillingness to support body donation. Approximately three times as many participants who spoke another language at home (11.4%) chose "my religion doesn't permit" as a reason compared to those who were English‐speaking (4.3%; p < 0.0005), although overall this was not a common reason selected by participants. Thirty percent of non‐English‐speaking background participants chose "my family is not comfortable with it" as a reason for unwillingness to donate, compared to 10.1% of participants from an English‐speaking background ( p < 0.0005). English‐speaking participants were more likely to choose "discomfort with the concept" (62.5%) than non‐English background students (46.8%; p < 0.0005), and "I don't want students like me using my body" (41.8 English‐speaking, 25.5% non‐English background; p < 0.0005). Participants who stated that they practiced a religion at home were much more likely (17%) to choose "my religion does not permit it" as a reason for unwillingness to donate their bodies than those who did not (1.2%; p < 0.0005), and also to nominate "personal beliefs" (32.2 compared to 19.6%; p < 0.0005). These participants also nominated "discomfort with the concept" (59 compared to 48.3%; p < 0.0005), and "I don't want to contemplate my own death" (19.1 compared to 10.2%; p < 0.0005) more frequently than their non‐practicing peers. Twice as many local (16.7%) as international students (8.1%) nominated "I do not want to contemplate my own death" ( p < 0.0005). Participants with no previous anatomy study were more likely to choose "they did not know enough to make a decision" (38.8%) than participants with previous anatomy exposure (25.6%; p < 0.0005). More participants who had studied anatomy previously also nominated "other" and a wish to donate for organ transplantation as a reason not to donate their body ( p < 0.0005). Together these findings suggest that exposure to anatomical sciences provides opportunities for students to become more informed about donation. Organ donation Table presents the responses to each of the three questions about organ donation for the whole cohort and compares those with previous anatomy experience and those without.
It also presents the proportion of participants who self-reported registration as an organ donor. Students with previous anatomy experience were more likely to support organ donation for themselves, their family, and the public (p < 0.0001). In contrast, students with no previous anatomy experience were approximately twice as likely to indicate they "have not thought about it" in answer to all three questions. Older students were more likely to support the self-donation of their organs for transplantation and to have thought about the issue (p < 0.0005). English-speaking and local students were more likely to support organ donation for themselves, their family, and the public than their non-English-speaking and international counterparts. Their support for self-donation was almost 90%, and for family and public donation of organs for transplantation more than 95%. The difference in support for organ donation between English-speaking participants and their non-English-speaking peers was substantial: 15% greater for self-donation, 18% greater for family donation, and 12% greater for public donation of organs (p < 0.0005 for all three comparisons). Students who practiced a religion were less likely (76.0%) to support self-donation of organs for transplantation than non-practicing participants (86.3%; p < 0.0005). There were no differences between participants who practiced a religion and those who did not with regard to support for family or public donation of organs for transplantation.

Registration as an organ donor

Self-reported registration as an organ donor was higher across all groups than registration as a body donor. The postgraduate students and trainees were the most likely to report having registered as organ donors, with 58% of the postgraduate trainees and 43.8% of postgraduate medical and dentistry students reporting being registered, compared to 11%–15% of the undergraduate participants (p < 0.0001). Students with previous anatomy exposure were more than twice as likely to be registered as organ donors (31%) as those without such experience (15%; p < 0.0001).

Reasons to support organ donation

The majority of the study cohort (86.4%) selected one or more reasons why they would support organ donation (Figure ). A smaller proportion of the postgraduate trainees compared to the other cohort groups chose "medical research" and a greater proportion chose "a good idea" (p < 0.0001 in both cases), suggesting a greater awareness in this group of the value of organ transplantation. There were no age- or identified gender-related differences in the reasons nominated for supporting organ donation. Students who did not speak English at home were less likely (39.6%) to choose "your organs have no value once you are dead" than their English-speaking peers (56.9%; p < 0.0005). In contrast, English-speaking students were more likely to select "I hope if I need an organ transplant, someone will donate for me" and "it is a good idea" (77.6 and 57.6%, respectively) than those from non-English-speaking backgrounds (58.3 and 41.0%, respectively; p < 0.0005 for both comparisons), suggesting a distinct difference in how these two groups regarded organ transplantation. Local students were more likely to express a belief (50.9%) that "your organs have no value once you are dead" than international students (40.6%; p < 0.0005), and that it "was a good idea" (52.3 compared to 40.8% for international students; p < 0.0005).
Local students were also more likely (73%) to express the "hope that if I need an organ transplant someone will donate for me" than international students (53.6%; p < 0.0005), again suggesting an interplay between cultural and societal norms, including beliefs about the value of organs, altruistic donation, and reciprocity. Participants who identified as practicing a religion at home were less likely (35.2%) to nominate their "organs having no value once you are dead" as a reason to donate them than those who were non-religious (55.2%; p < 0.0005), and were also less likely to nominate organ donation as a good idea (41.8 compared to 53.4% for non-religious students; p < 0.0005).

Reasons not to support organ donation

Less than a quarter of the cohort (n = 439) answered this question, reflecting the generally higher support for organ donation, and a lower level of ambivalence about motivations to donate or not, in comparison to attitudes to body donation (Figure ). There were no statistically significant differences in the reasons selected for unwillingness to support organ donation associated with age, identified gender, previous anatomy exposure, or across the five cohort groups. Approximately one-fifth (19.1%) of the students who practiced a religion and who answered this question nominated their religious beliefs as a barrier to donation, compared to 2.0% of those who did not practice a religion (p < 0.0005). Half of this group also nominated "discomfort with the concept" (50%) compared to 29.4% of their non-religion-practicing peers (p < 0.0005).

Comparison of support for body and organ donation

Support for body donation was much lower than support for organ donation (26.5 versus 82.3%) across the whole cohort. However, there was a very high level of support for the public to donate their body (86.3%), especially in comparison to participants' overall unwillingness to donate their own body. While a similar proportion of the whole cohort supported the public to donate their organs (84.2%) as supported the public to donate their body, the disparity between support for own/family donation and support for public donation was much greater for body donation than for organ donation. The difference was also most marked for those students who had undertaken previous anatomy study, who were the least likely to indicate they would donate their own body and the most likely to donate their own organs. Students who supported self-, family-, and public donation of the body were also more strongly in favor of organ donation, for themselves (p < 0.0001), their family (p < 0.0001), and the public (p < 0.0001). Those students who answered affirmatively to both questions were also the most likely to have thought about their attitudes to donation, whether body or organ donation, and whether for themselves, their family, or the public. Support for self-donation of organs (p < 0.00001), and the likelihood of registration as a body donor (p < 0.0001) or as an organ donor (p < 0.00001), increased with age. Otherwise, age was not found to be associated with differences in attitudes to body and organ donation.
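The group differences reported throughout this section are comparisons of categorical response distributions (for example, "Yes"/"No"/"Have not thought about it") across groupings such as religious practice, language background, and prior anatomy exposure. As an illustration only, the sketch below shows how a comparison of this kind can be run as a chi-square test of independence in Python; the counts are hypothetical stand-ins rather than the study's cell counts (which are reported in the tables), and the use of scipy is an assumption about tooling, not a description of the analysis actually performed for this paper.

# Illustrative only: hypothetical counts for religious practice (rows)
# against responses to "Would you donate your own body?"
# (columns: Yes / No / Have not thought about it).
# These numbers are NOT the study's data.
from scipy.stats import chi2_contingency

observed = [
    [120, 300, 360],  # practices a religion at home (hypothetical)
    [330, 330, 510],  # does not practice a religion (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4g}")
# A very small p-value (e.g., p < 0.0005) would indicate that the response
# distribution differs between the two groups, which is how the group
# comparisons quoted in this section should be read; two-category
# comparisons (support versus non-support) reduce to a 2 x 2 table.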
This study reports a relationship between exposure to anatomical examination using cadaveric tissue and support for self-, family-, and public donation of bodies and organs. Previous anatomy exposure, and increasing exposure to gross anatomy, are associated with more decisive views (positive and negative) about supporting body and organ donation, strongly stimulate consideration of these issues, and increase the likelihood of registration as an organ donor. Previous anatomy exposure, however, appears to diminish support for self-body donation while increasing support for public donation, suggesting that students who have undertaken studies in anatomy value the opportunity, and thus the altruistic gift of donation, but also show reluctance to donate themselves.

Public good and personal decisions

The findings of this study suggest that participation in anatomical studies using donated human tissues prompts students to think about their own feelings and attitudes to donation, and invokes consideration of whether they would support the wishes of their family and the public to donate. In making these decisions, they are required to balance their own personal feelings, and imagined responses to the loss of a family member, against the clear benefit of the donation. This study suggests that students were capable of, and inclined to, delineate between the public good that donation represents and recognition of their own feelings in the situation where the possibility of donation arises. The dichotomy between supporting body donation overall but not being prepared to donate themselves, or to support donation for family members, suggests a separation of attitudes with regard to the value of anatomical education and the altruistic act of donation. Other studies have also reported this separation (Cahill & Ettarh, , ; Galic et al., ). Further, the lower level of support for family-member body donation compared to organ donation suggests that participants experienced ambivalence in balancing their support for (and benefit from) the use of donated bodies from the public with their own reluctance to donate or to have family members donate. For participants of diverse religious, cultural, and ethnic backgrounds, this ambivalence and tension may be particularly challenging, but that should be a driver rather than a barrier to engaging with these students (in anatomy class) to assist them in developing and defining their views about donation generally, and the use of donated tissue more specifically.
Participants who were inclined against donation or ambivalent about it, and who thus nominated both reasons for and against donation, also felt that they had insufficient information to make decisions about donation of their body or their organs. These participants were more likely to state that they also felt discomfort with the concept, and for some, discomfort occurred irrespective of their support for donation. This was more apparent in relation to body donation, but a small yet significant proportion of the cohort articulated reasons both for and against organ donation, demonstrating insight into the complexity of these issues at a personal level. These findings affirm the lack of knowledge and competence medical and health sciences students feel when asked to consider issues of human tissue use and donation, be it for education, research, or clinical treatment. The students also preferred the word "discomfort" over "distaste" in relation to both body and organ donation, suggesting that the emotional response is not so much the disgust or "ick" factor suggested in some studies as deterring donation (O'Carroll et al., ), but a more emotional response to the unknowns associated with having to make a decision about donation while grieving. That emotional response, engendered by exposure to a dead human and dissection of that human, manifested in consideration of a broad range of issues associated with donation. For some, the emotional response will dominate (Morgan et al., ; Miller et al., ), but the experience and information that accompany such exposure in anatomy class do provide a factual base on which to deal with the challenging issues that arise at the time of death. Thus, the lack of support for self- and family-body donation, and in some participants organ donation, reported here does not seem to reflect an aversive response. Those participants who had more exposure to gross anatomy were the most likely to have thought about donation. In addition, the reasons nominated by participants for not supporting body donation reflected anxiety about their own knowledge base, concerns about future students like themselves handling their bodies, and, particularly for a small group of postgraduate students/trainees, the inability to donate organs if they donated their body, rather than antipathy toward dissection. It seems apparent that the participants in the study who had the most exposure to gross anatomy (the postgraduate students/trainees) had been prompted to consider difficult and emotional issues associated with death and donation, and to inform themselves about donation, in a way that those with less exposure had not.

Body donation

The results of this study demonstrate that support for self-body donation is much lower than support for the donation of one's organs for transplantation, and that exposure to gross anatomy reduces support for self-donation. The concept of body donation was observed to be novel to many participants: more than 40% of the whole cohort had not thought about body donation for themselves, close to 20% had not thought about it in relation to a family member, and over 12% had not considered the issue of public donation. In contrast, only around 10% of the cohort had not considered organ donation for themselves, their family, or for a member of the public. Body donation is less publicly recognized than organ donation (Cornwall, ), and overall measures of public support for body donation have not been reported in Australia.
Likewise, little is known about public knowledge of body donation and the processes for effecting it. Most body donor programs in Australia do not advertise or promote their programs, and outreach is mostly limited to aged care, general practice, and other geriatric health-related services. Apart from occasional media reports or first-hand experience through having a family member donate, the general public in Australia is unlikely to have encountered information about body donation. However, the level of support found in this study is lower than that reported in some other cohorts or countries with similar practices for the procurement of bodies. For example, a 2009 Irish study of postgraduate medical students prior to their first dissection session reported that around 40% would support self-donation, although the proportion strongly against self-donation increased with subsequent dissection session attendance (Perry & Ettarh, ). Support for family donation was higher, but both were lower than the support for public donation. The findings reported here are consistent overall (in pattern, if not in proportion) with the low level of support by medical students for self-body donation, somewhat higher support for family-member body donation, and the highest support for the public to donate their body (Cahill & Ettarh, ; Cornwall et al., ; Anyanwu et al., ; Galic et al., ; Abbasi Asl et al., ; Kumar et al., ). Postgraduate students and trainees, who were the least willing to donate their own bodies, were more aware that body donation precludes organ donation and cited this as a reason not to donate their body. However, 10% of the whole cohort stated that they would not support a family member to donate, suggesting that the personal views of next of kin are likely to affect whether an election is honored. This is significant because none of the body donor programs in Australia will accept a body if any of the family or next of kin object. The use of bodies in education and research is a sensitive issue (Cornwall, ; Jones & Whitaker, ; Ghosh, ; Jones & King, ), not least because of the unsavory historic practices for body procurement (Richardson & Hurwitz, ; Jones & Fennell, ; Jones, ; Jones & Whitaker, ), as well as contemporary cultural and other beliefs (Richardson & Hurwitz, ; Larner et al., ; Ghosh, ). Practices aimed at addressing concerns about dignity and respect for the deceased, and at ensuring the respectful, compassionate handling of the body by students, are integrated into the pedagogy and policies of many gross anatomy courses (Ghosh, ), including those at The University of Sydney. However, for some, the use of bodies for dissection, notwithstanding their value in education and research, is distasteful and unacceptable (Ghosh, , ), a view that may be reflected in the low overall support for self- as opposed to public body donation reported here.

Religious practice and donation

There were clear differences in this study between the views toward donation of students from different cultural, ethnic, and religious backgrounds. Participants who practiced a religion were more likely to decline to support body and organ donation for themselves and their family. The association between religious beliefs and reluctance to donate has been reported previously in both Australian (Edwards et al., ; Wakefield et al., ; Alexander et al., ; Phillipson et al., ; Ralph et al., ) and international cohorts (Rumsey et al., ; Wong, ; Galic et al., ; Ciliberti et al., ; Zhang & Ma, ).
This study suggests that the way in which the deceased body is viewed, and the need to adhere to cultural and familial norms, are factors that engender reluctance to support both body and organ donation. The higher value placed by some cultures on rituals associated with death, and on the body being sacred and therefore needing to remain intact for funeral rites, may be particularly significant for these groups (Wong, ; Ralph et al., ; Sasi et al., ; Donate Life, ) and is reflected in the reasons they selected to explain their unwillingness to donate. Local and English-speaking participants in this study were more likely to regard their deceased body and organs as having no value after death, a view to which their religious and non-English-speaking peers were much less likely to subscribe. These same students, nevertheless, expressed some discomfort with the concept of body donation and concern about having others like them dissect their body, again suggesting a very personalized response to body donation. Most religions in Australia have issued public statements of support for donation by religious leaders confirming that donation does not violate religious codes, including those of followers of Buddhism, Hinduism, Islam, and various Christian religions (Donate Life, ). However, misconceptions about donation persist within some religious, cultural, and ethnic sub-groups (Cooper & Taylor, ; Shaheen, ; Wong, ; Phillipson et al., ; Ralph et al., ; Sasi et al., ; Dagcioglu et al., ) and have significant detrimental impacts on the capacity of transplantation services to provide organs for culturally diverse patients (Ralph et al., ). Australian data show that consent rates for organ donation are much lower in some ethnic and religious communities, including those with a greater clinical need for transplantation, and that there are many fewer registered donors (Donate Life, ). Although organs from ethnically diverse donors can be used in recipients of different ethnicity, the success rate for matching is lower because specific combinations of blood and tissue type are more common within particular ethnic groups (Khanal et al., ). Morgan et al. ( ) completed a systematic review of both qualitative and quantitative literature examining attitudes to organ donation and donor registration in ethnic minority groups across the United States and the United Kingdom. They reported five areas in which ethnic minority groups' attitudes or knowledge constituted barriers to positive attitudes to donation and effective registration: (1) low levels of factual knowledge about donation and registration as a donor; (2) familial factors, including reluctance to discuss donation with family members, taboos about death, and respect for parental authority; (3) religious and cultural beliefs, including that donation was prohibited by particular religions; (4) concerns about bodily integrity, including the need for rapid burial and an intact body (often in conjunction with cultural and religious beliefs); and (5) distrust in healthcare systems and doctors, including in relation to receiving optimal treatment and equitable distribution of donated organs. These barriers may exist for some of the participants in this study, particularly those from religious and cultural groups, whether local students of immigrant background or international students pursuing education in Australia, and may also apply to body donation.
The much greater level of support for organ donation among participants from English-speaking backgrounds also suggests that cultural and societal factors, such as the widespread promotion of, and support for, organ donation in Australia and other English-speaking countries (e.g., the United Kingdom and the United States), may also be influential in laying a foundation of positive attitudes to donation. The idea of reciprocity reflected in the selection of "I hope if I need one someone donates for me" as a reason to support organ donation by local and English-speaking participants may also reflect both this promotion and trust in their health systems to manage organ donation equitably and for those in most need.

Value of exposure to anatomical sciences

Exposure to anatomical study is likely to be a positive factor in enabling participants to become more competent in providing factual information and support to family members and the public about donation, and in taking steps to make their own election to donate organs effective by registering as a donor. However, it may dissuade some from self-body donation, or affirm an existing reluctance. These observations confirm the value of exposure to anatomical sciences over and above its educational value. It is well established that ignorance is a barrier to the use of human tissue: the community and donors have a limited understanding of the processes for donating a body and of how donated bodies are used (Fennell & Jones, ; Richardson & Hurwitz, ; Boulware et al., ; Ciliberti et al., ; Champney et al., ). Ignorance may also inhibit the capacity of health professionals to effectively counsel families about body and organ donation (Schaeffner et al., ; Zhang et al., ), and to inform potential body donors of the implications of donation, including the possibility of permanent retention of body parts and/or the likelihood that their body will be used for education, not research (Fennell & Jones, ; Chung & Lehmann, ; Larner et al., ; Champney et al., ; Farsides & Smith, ). Barriers to effective registration as an organ donor include failure to act upon a positive view of organ donation due to ignorance of the need to register prior to death, and failure to ensure the effectiveness of the election by not discussing the wish to donate with family and next-of-kin (Williams et al., ; Wakefield et al., ; Irving et al., ; Potter et al., ). Ignorance of organ donation processes, in both the community (Sander & Miller, ; Newton, ; Wakefield et al., ; Rokade & Gaikawad, ; Larner et al., ) and the health system (Irving et al., ; Potter et al., ; Keel et al., ), is also associated with lower rates of registration as an organ donor (Schaeffner et al., ; Figueroa et al., ). Research confirms the crucial role that friends, family, and colleagues have in informing discussion about donation (both body and organ) and registration as a donor (Fennell & Jones, ; Conesa et al., ; Bolt et al., ; Cornwall et al., ; Larner et al., ; Phillipson et al., ; Merola et al., ; Cornwall et al., ). Body donors have been found to be motivated to register as a donor through their experience of having a friend or family member donate their body (Fennell & Jones, ; Bolt et al., ; McClea & Stringer, ; Cornwall et al., , ).
Research has also found that potential donors, and indeed the general community, find initiating conversations about donation with their family and next-of-kin difficult, and sometimes unacceptable (Phillipson et al., ; Ralph et al., ); the specter of the loss of someone dear prompts emotional responses which may be overwhelming and prevent discussion. The lack of factual knowledge about donation, particularly the process of procurement, is thus a barrier to donation and inhibits the capacity of the community and healthcare workers to support donation. This research suggests that anatomy class holds significant potential as a forum for providing factual information about donation, giving students a knowledge base that may assist them to make an informed decision for themselves. Possessing such knowledge is also likely to improve their competency and confidence in providing accurate information about donation to their friends and family now, and to their patients and the community as they move into their future careers. The experience of encountering a donor body may be an additional factor promoting contemplation of issues about death, the handling of the deceased, and the role of donation in education, research, and training. It may also generate opportunities for further discussion about the potential use of donated organs and tissue, and about the role of healthcare professionals and medical researchers in supporting the transplantation system and enabling these life-saving procedures. While students of all backgrounds may find these discussions and thoughts initially confronting, they are possibly more receptive to thinking about them as a consequence of the exposure to the donor body and the inherent altruistic qualities of donation (Cornwall & Stringer, ; Flack & Nicholson, ). It would be important, however, that any program about donation included in anatomy class be constrained to the provision of information, not promotion; the issue of donation is very personal and, as shown here and elsewhere (Rumsey et al., ; Shaheen, ; Wong, ; Wakefield et al., ; Phillipson et al., ; Ralph et al., ; Naidoo et al., ), reflective of cultural and religious beliefs and other factors. Students should not feel pressured to support donation but could be better informed about it, with the provision of facts about the legal and ethical frameworks in which donation is enabled and about the implications of donation, equipping them with the knowledge to address misconceptions, including their own and those of their family, friends, and the community. For those participants who did not support either body or organ donation, their move into professional roles, where they will encounter patients and families who support donation, either as donors or as recipients of donated organs, may challenge their views and beliefs about donation. The experience of anatomical dissection and exposure to cadaveric tissue provides an opportunity to offer these students knowledge and personal insights that may assist them to manage these challenges. Such outcomes would be of considerable benefit to the students themselves, and to the community.

Limitations of the study

There are a number of limitations to this study, and thus to the conclusions that can be drawn. No existing validated scale was identified that assesses knowledge about body and organ donation and also measures whether respondents have considered these issues.
The new questionnaire developed for this study has not been validated against existing instruments measuring attitudes to organ donation, such as the Attitude Scale and Knowledge Scale, both developed by Sander and Miller ( ), or the Organ Donation Attitude Scale (Rumsey et al., ). The questionnaire showed good reliability (Cronbach's alpha = 0.849); however, validation in a different cohort would be desirable. Another limitation is the possible effect of sampling bias in the cohort selection. The cohort was not intended to be a general population sample; however, the enrolment of students and postgraduate trainees from health professional and biomedical sciences courses constitutes a potential bias in relation to the key question of the impact of exposure to gross anatomy on attitudes to donation. A recent study (Viljoen & Stephens, ) reported that students of biomedical sciences were more positive about body donation than arts students and that postgraduates were more positive than undergraduates. Possibly, participants were more positive about gross anatomy and the use of donated bodies for research and education because of their choice to pursue vocations requiring training in anatomy, rather than because of their exposure to anatomical examination per se. The value participants place on their anatomical sciences education may also be reflected in positive attitudes to donation, as reported by others (Cahill & Ettarh, ; Ciliberti et al., ; Kumar et al., ; Lee & Lee, ; Naidoo et al., ). The study did not examine participants' actual knowledge of donation processes, nor their understanding of how donated bodies and organs are used in Australia, or indeed in their home countries if they were not Australian. Some of the differences in attitudes to body and organ donation, particularly for students coming from non-European/Anglo-Saxon countries, may be attributable to their experiences of, or concerns about, the practices in their home countries and/or to ignorance of the donation practices in Australia. The anomalies between the available avenues for registration as a body donor and participants' self-reported registration details highlight the limitations of self-reported information; recollection of donor registration details may well be inaccurate. Participants may also have felt pressure to support organ donation, or to state that they are registered as organ donors, because organ donation is regarded as a social good, a phenomenon reported by others (Sehgal et al., ). However, the rate of self-reported registration as an organ donor in this cohort was not greater than might be expected given the registration level in the general population of individuals of a commensurate age. National data (Donate Life, ) suggest that Australians in this age group are highly supportive of organ donation, but that actual registration rates are very low (around 8%). Thus, even if there was some inaccuracy in self-reporting registration as an organ donor, it was probably not substantial. Another possible source of bias relates to the order of the choices available to participants when answering questions about their reasons for supporting donation, or not. Ideally, the order of the choices would be randomized to avoid primacy and recency bias. Randomization was not practical when using a paper questionnaire, and it was important that the online questionnaire replicated the paper one to avoid introducing other potential biases.
Although the order was fixed, responses were selected from all options suggesting that the participants were not unduly influenced by the presented order.
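As a purely illustrative aside on the reliability figure quoted in the limitations above (Cronbach's alpha = 0.849), the standard formula is alpha = k/(k - 1) x (1 - sum of item variances / variance of the total score). The sketch below applies that formula to a small, entirely hypothetical response matrix; the questionnaire's item-level data are not reproduced here, so this is an illustration of the calculation, not a re-analysis.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of numeric scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 4 items, for illustration only;
# these are not data from the study questionnaire.
demo = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(demo), 3))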
The study did not examine participants' actual knowledge of donation processes, nor their understanding of how donated bodies and organs were used in Australia, or indeed in their home countries if they were not Australian. Some of the differences in attitudes to body and organ donation, particularly for students coming from non‐European/Anglo‐Saxon countries, may be attributable to their experiences of, or concerns about, the practices in their home countries and/or ignorance of the donation practices in Australia. The anomalies between available avenues for registration as a body donor, and the participants' self‐reported details of registration as a donor highlight the limitation of self‐reported information. It is quite likely that recollection of donor registration details may be inaccurate. Participants may also have felt pressure to support organ donation or to state that they are registered as organ donors because organ donation is regarded as a social good, a phenomenon reported by others (Sehgal et al., ). However, the rate of self‐reported registration as an organ donor in this cohort was not greater than might be expected given the registration level in the general population of individuals of a commensurate age. National data (Donate Life, ) suggest that Australians in this age group are highly supportive of organ donation, but that actual registration rates are very low (around 8%). Thus, even if there was some inaccuracy in self‐reporting registration as an organ donor, it was probably not of a substantial level. Another possible source of bias relates to the order of the choices available to participants when answering questions about their reasons for supporting donation, or not. Ideally, the order of the choices should be randomly presented to avoid primacy and recency bias. Randomization was not practical when using a paper questionnaire, and it was important that the online questionnaire replicated the paper one to avoid introducing other potential biases. Although the order was fixed, responses were selected from all options suggesting that the participants were not unduly influenced by the presented order. Overall, this study suggests that exposure to gross dissection invokes complex and difficult thinking in students of anatomy with regard to donation, and their support for it. The findings affirm that there is value in this exposure that goes beyond imparting anatomical knowledge; these students grapple with questions and issues that assist them in forming opinions and views about donation that they may carry with them into their professional lives. The findings suggest that exposure of students who are likely to work in the health and biomedical sciences professions to anatomical studies using donated human tissue is an opportunity to develop their understanding of the practices, and value of, body and organ donation. Participation in gross anatomical instruction prompts students to think about these issues and thus to contemplate their own thoughts and attitudes about being a donor, and about donation by their family and the public. There is an inherent value in the exposure of students in health, biomedical, and medical sciences to gross anatomy as a means of opening up discussion about post‐mortem human tissue donation. 
These opportunities could be used to explore and develop student awareness, knowledge, and understanding of the value of body and organ donation in education, research, and health, and of the legal and ethical frameworks in which these occur in Australia. The authors declare no conflicts of interest, financial or otherwise.
A hybrid design for dose‐finding oncology clinical trials
cf891951-d458-4aff-b185-45110c09898d
10084431
Internal Medicine[mh]
INTRODUCTION The primary objective of a Phase I oncology dose‐finding clinical trial is to identify the maximum tolerated dose (MTD) of the investigational treatment and subsequently to recommend a dose for the dose‐expansion and Phase II trial. Typically, the MTD is defined as the highest dose with no more than a prespecified proportion of participants experiencing a dose‐limiting toxicity (DLT), and this proportion is typically denoted as the "target toxicity probability" (eg, 0.3). Finding this MTD correctly is crucial, as most responses occur at 80% to 120% of the MTD. Any dose below the true MTD can potentially lead to suboptimal efficacy and thus result in a negative Phase II or Phase III trial. Meanwhile, any dose above the true MTD can expose the participants to excessive toxicity. Multiple trial design approaches have been developed for optimal dose‐finding of oncology therapeutics that can be classified into three categories. The first category is the algorithm‐based designs using a prespecified algorithm to sequentially decide the next dose for the treatment, such as the 3 + 3 design. The 3 + 3 design is simple and transparent, has fixed rules and is easy to use, leading to it being the most commonly used design for dose‐escalation trials in early years. However, there are many drawbacks to the 3 + 3 design: regardless of the true DLT rate, this design has the same fixed rules and fixed cohort size, with no reescalation allowed, and the 3 + 3 design is also poor at targeting the true MTD, resulting in a biased MTD estimate with high variability. Numerous novel approaches, in particular, Bayesian dose‐finding designs, have been developed to improve the accuracy in MTD identification, and their comparative performances have been studied extensively. The second category is the model‐based designs, in which the design assumes a parametric model for the dose‐toxicity relationship and continuously updates the model parameters based on the accumulated data to guide dose escalation. The continual reassessment method (CRM) is the most well‐documented model‐based design, and multiple modifications have been proposed, including dose escalation with overdose control (EWOC) and the Bayesian logistic regression model (BLRM), among others. The model‐based designs have excellent operating characteristics, with full inference along with uncertainty assessments for true DLT rates. These designs allow flexible cohort sizes and a feasible selection of intermediate doses. However, executing these designs can be time‐consuming. Other limitations include the nontransparent rules for decision‐making, which sound like a "black box" to a nonstatistician, and robustness issues from model misspecification. The third category is the model‐assisted designs, which are relatively new designs with more transparent dose transition rules and similar operating characteristics compared to the model‐based designs. Examples of model‐assisted designs are the modified toxicity probability interval (mTPI) design and its variation, mTPI‐2, the Bayesian optimal interval (BOIN) design, and the keyboard design. These designs share the same rule transparency as the 3 + 3 design but improve the performance and provide certain rules regarding the cohort size. However, these designs could have unacceptable overdosing levels. The BOIN design uses an optimal interval, and deviation from this optimal interval can lead to a high variation in decisions.
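Since the fixed rules of the 3 + 3 design are referred to several times below but never written out, a minimal R sketch (R being the language of the tool released with this paper) may be helpful; the rules shown are the textbook version of the algorithm and the function name is ours, so this should be read as an illustration rather than part of the original text.

three_plus_three <- function(dlt, n) {
  # Classic 3 + 3 rules for a single dose level with 'dlt' DLTs among 'n' participants
  if (n == 3) {
    if (dlt == 0) return("escalate to the next dose")
    if (dlt == 1) return("treat 3 more participants at this dose")
    return("stop: the MTD is the next lower dose")    # 2 or 3 DLTs out of 3
  }
  if (n == 6) {
    if (dlt <= 1) return("escalate to the next dose")
    return("stop: the MTD is the next lower dose")    # 2 or more DLTs out of 6
  }
  stop("the 3 + 3 design only evaluates cohorts of 3 or 6 participants")
}
three_plus_three(1, 3)   # "treat 3 more participants at this dose"
three_plus_three(2, 6)   # "stop: the MTD is the next lower dose"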
Another drawback shared with the 3 + 3 design is that the information from each dose level in these designs is treated independently; that is, the safety information from previous doses is ignored when making dose transition decisions, and available firsthand knowledge about the experimental treatment is wasted, reducing efficiency. The mTPI design is a widely used model‐assisted design and has a robust performance regardless of the choice of the acceptable toxicity interval. However, the suboptimal overdose control of the mTPI design has been noticed by many practitioners; for example, if the target DLT probability is 30%, the mTPI design fails to de‐escalate the dose when three DLTs are observed out of six participants, which is a 50% DLT rate. A similar case occurs for two DLTs out of four participants. The keyboard design was intended to overcome this drawback, with some success. However, control of overdosing toxicity using the keyboard design remains inefficient. For example, using a beta(0.005, 0.005) prior and the target key or acceptable interval [0.27, 0.33] for a target DLT rate of 30%, the keyboard design also fails to de‐escalate the dose when three DLTs are observed out of six participants or two DLTs are observed out of four participants. Thus, to address the overdosing risk more efficiently, we first consider some modifications to the algorithm of the mTPI design to guarantee a dose de‐escalation when the observed toxicity rate is high. Also, we can impose stricter overdose control rules to eliminate overly toxic doses and ensure that participants are not treated at such dose levels if overly high toxicity rates are observed. It is common practice to impose overdose control rules in model‐based designs such as EWOC and the BLRM. Note that the mTPI design, rather than the keyboard design, is chosen for modification because the mTPI uses only three intervals. The keyboard design further divides the overdosing interval of the mTPI into small, equal‐length overdosing keys, differentiating between them. However, this distinction is not meaningful to clinicians: whether a dose falls in overdosing Key 1 or overdosing Key 2, it should be considered overdosing, and thus all these overdosing keys should be combined, as they are in the single overdosing interval of the mTPI. Another limitation of the model‐assisted designs is that they can only be implemented at predefined dose levels. However, sometimes it may be necessary to test an intermediate dose level if the lower dose level is too low but the next higher dose level is expected to be too toxic. This can be easily accomplished in model‐based designs because the toxicity probability can be estimated at any given dose level with the dose‐toxicity curve. In summary, the three types of existing designs have different merits and limitations. In this article, we propose a novel hybrid design to incorporate the desirable features of both model‐based and model‐assisted designs. The hybrid design was developed based on the infrastructure of the mTPI design, to keep the simplicity of dose transition rules at each individual dose level. An overdose control rule has been added to avoid treating too many participants at toxic dose levels.
Meanwhile, a dose‐toxicity model, such as a logistic regression model, is used to pool all the available information from previous doses and account for a dose‐toxicity relationship, as used in the BLRM design, to more accurately estimate the toxicity at the current dose level and to allow the flexibility of adding intermediate doses during trial conduct. The operating characteristics of the hybrid design have been demonstrated through simulations and a real trial dataset. METHODS The proposed hybrid design incorporates the features of the mTPI and BLRM designs, which are briefly discussed below. Suppose there are $J$ provisional dose levels of an investigational treatment, denoted by $d_1 < \cdots < d_J$, with toxicity probabilities $p_1 < \cdots < p_J$. Let $n_j$ and $y_j$ be the number of participants and number of observed DLTs at dose level $d_j$, respectively. We use $\phi$ to denote the target toxicity probability. 2.1 mTPI design The mTPI design uses a beta‐binomial model at each dose level as follows: $y_j \mid n_j, p_j \sim \mathrm{Binomial}(n_j, p_j)$, $p_j \sim \mathrm{Beta}(a, b)$. Thus, the posterior distribution of toxicity probability is $p_j \mid n_j, y_j \sim \mathrm{Beta}(y_j + a,\, n_j - y_j + b)$. Given a target toxicity probability $\phi$, the mTPI design prespecifies three intervals with parameters $\delta_1 = \phi - \varepsilon_1$ and $\delta_2 = \phi + \varepsilon_2$, that is, the underdosing interval $(0, \delta_1)$, the acceptable dosing interval $[\delta_1, \delta_2]$ and the overdosing interval $(\delta_2, 1)$, where $0 < \delta_1 < \phi < \delta_2 < 1$. Then, the mTPI design defines a quantity named unit probability mass (UPM) given the posterior distribution of $p_j$ for each of the three intervals as follows: $\mathrm{UPM}_1 = \Pr(p_j < \delta_1 \mid n_j, y_j)/\delta_1$, $\mathrm{UPM}_2 = \Pr(\delta_1 \le p_j \le \delta_2 \mid n_j, y_j)/(\delta_2 - \delta_1)$, $\mathrm{UPM}_3 = \Pr(p_j > \delta_2 \mid n_j, y_j)/(1 - \delta_2)$. That is, the UPM is the posterior probability that $p_j$ lies in the corresponding interval divided by the length of that interval. The mTPI design determines dose escalation/de‐escalation only based on the observed data at the current dose level $j$ as follows: if $\mathrm{UPM}_1 = \max(\mathrm{UPM}_1, \mathrm{UPM}_2, \mathrm{UPM}_3)$, escalate the dose to level $j + 1$; if $\mathrm{UPM}_2 = \max(\mathrm{UPM}_1, \mathrm{UPM}_2, \mathrm{UPM}_3)$, stay at the current dose level $j$; if $\mathrm{UPM}_3 = \max(\mathrm{UPM}_1, \mathrm{UPM}_2, \mathrm{UPM}_3)$, de‐escalate the dose to level $j - 1$. Because the three UPMs can be calculated for all the possible outcomes of $n_j$ and $y_j$, dose escalation and de‐escalation rules can be determined before the onset of the trial. To avoid treating excessive participants at extremely toxic dose levels, the mTPI design implements a dose‐exclusion rule: if $\Pr(p_j > \phi \mid n_j, y_j) > 0.95$, dose level $j$ and higher doses are excluded in the trial. If the lowest dose is excluded, the trial is stopped for safety. 2.2 BLRM design The BLRM design utilizes a 2‐parameter logistic model: $\mathrm{logit}(p_j) = \log\alpha + \beta \log(d_j/d^*)$, $\alpha, \beta > 0$, $j = 1, \ldots, J$, where $\alpha, \beta$ are the unknown parameters and $d^*$ the prespecified reference dose. Usually, a vague bivariate normal distribution is assigned for the prior of $(\log\alpha, \log\beta)$. During the trial conduct, BLRM updates the estimate of the dose‐toxicity curve based on the accumulated DLT data across all dose levels. Similar to the mTPI design, the BLRM defines the proper dosing interval $[\delta_1, \delta_2]$ as the range of toxicity probabilities regarded as acceptable. The BLRM imposes an overdose control rule as follows: if the observed data suggest that there is more than 25% posterior probability that the DLT rate of a dose is greater than $\delta_2$, that is, $\Pr(p_j > \delta_2 \mid n_j, y_j) \ge 0.25$, that dose is an overdose and cannot be used to treat participants.
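Before turning to how the BLRM chooses among the admissible doses, the mTPI calculations above can be made concrete with a short R sketch. The Beta(1, 1) prior and $\varepsilon_1 = \varepsilon_2 = 0.05$ are assumptions (they match the interval used later in the simulation study, not values fixed in this subsection), and the function name is ours.

mtpi_decision <- function(y, n, phi, eps1 = 0.05, eps2 = 0.05, a = 1, b = 1) {
  d1 <- phi - eps1
  d2 <- phi + eps2
  post <- function(q) pbeta(q, y + a, n - y + b)        # posterior CDF of p_j
  upm <- c(under  = post(d1) / d1,
           proper = (post(d2) - post(d1)) / (d2 - d1),
           over   = (1 - post(d2)) / (1 - d2))
  decision <- switch(names(which.max(upm)),
                     under = "escalate", proper = "stay", over = "de-escalate")
  exclude <- (1 - post(phi)) > 0.95                     # dose-exclusion rule
  list(UPM = round(upm, 3), decision = decision, exclude_dose_and_above = exclude)
}
mtpi_decision(y = 3, n = 6, phi = 0.30)

For 3 DLTs out of 6 participants with a 30% target, UPM_2 narrowly exceeds UPM_3, so the unmodified mTPI says "stay" even though the observed DLT rate is 50%; this is exactly the overdose-control weakness that motivates the modification in Step 1 of the hybrid design below.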
Then, among the dose levels satisfying the safety criterion, the BLRM assigns the next cohort of participants to the "optimal" dose, which is defined as the dose with the maximum posterior probability of the proper dosing interval. 2.3 The hybrid design The hybrid design is a hybrid of the mTPI design and a dose‐toxicity model in three steps. 2.3.1 Step 1 The mTPI design is modified to control the overdosing toxicity by requiring the posterior probability of the DLT rate lying in the overdosing interval $(\delta_2, 1)$ to be less than a value $\gamma$ (eg, less than 0.75). With this rule, if three DLTs are observed out of six participants, which is a 50% DLT rate, the modified mTPI design will guarantee a dose de‐escalation instead of staying at the current dose level when the observed toxicity rate is high. Table shows the decision rules based on the modified mTPI design with a 30% target toxicity rate, where the overdosing toxicity issue is removed. Thus, the modified mTPI is very efficient in controlling the overdosing toxicity. 2.3.2 Step 2 Since the mTPI design approach treats each dose level independently, the information from all previous doses is disregarded. In contrast, the second step of the hybrid design is to use a dose‐toxicity model by pooling all observed safety information from all previous doses to estimate the DLT rate for the current dose level and predict the DLT rate for the next dose level in the provisional dose list. For example, a frequentist logistic dose‐toxicity model could be used. The estimated DLT rate at the current dose level is used together with the decision rules from the earlier modified mTPI design (as shown in Table ) to make a decision about dose escalation. Note that if the dose‐toxicity model is not feasible (eg, no DLT is observed in all tested doses), then no action is needed at this step. 2.3.3 Step 3 If the decision following the modified mTPI design in Step 1 was to escalate to the next higher dose in the provisional dose list, then the predicted DLT rate using the dose‐toxicity model from Step 2 is used to judge whether the next dose level is feasible by comparing the predicted DLT rate at the next dose level with the prespecified‐targeted DLT rate (Table ). If the predicted DLT rate is over the targeted DLT rate, the next dose level in the provisional dose list cannot be used. Instead, an intermediate dose from the earlier dose‐toxicity model will be calibrated so that the DLT rate is closer to the targeted DLT rate. Similarly, if the decision was to de‐escalate to the next lower dose in the provisional dose list, an intermediate dose from the earlier dose‐toxicity model can be calibrated so that the DLT rate is closer to the targeted DLT rate if the toxicity at the next lower dose level is too low. Note that choosing an intermediate dose level should be clinically and operationally feasible (eg, based on tablet strength or expected pharmacokinetics [PK]). If the decision was to stay at the current dose, then the estimated DLT rate at the current dose using the dose‐toxicity model from Step 2 is used to make a decision. If the estimated DLT rate at the current dose is over the prespecified‐targeted DLT rate, then the decision is to de‐escalate the dose; otherwise, it is to stay. Note that the number of additional participants required to avoid overtoxicity can be guided using the rule from the modified mTPI design in Step 1, which is another advantage of the hybrid design over the BLRM approach.
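A compact sketch of the three steps is given below; it is our illustration rather than the authors' implementation. The value γ = 0.75 follows the "eg" in Step 1, the frequentist logistic model is the example named in Step 2, the feasibility guard and the "too low" check on the next lower dose are our own simplifications, and all function and argument names are hypothetical.

hybrid_decision <- function(dose, n, y, j, phi,
                            eps = 0.05, gamma = 0.75, a = 1, b = 1) {
  d1 <- phi - eps
  d2 <- phi + eps
  post <- function(q) pbeta(q, y[j] + a, n[j] - y[j] + b)
  # Step 1: modified mTPI at the current dose, with the extra overdose check
  upm <- c(post(d1) / d1, (post(d2) - post(d1)) / (d2 - d1),
           (1 - post(d2)) / (1 - d2))
  decision <- c("escalate", "stay", "de-escalate")[which.max(upm)]
  if ((1 - post(d2)) >= gamma) decision <- "de-escalate"        # forced de-escalation
  # Step 2: pool all tested doses in a frequentist logistic dose-toxicity model
  dat <- data.frame(dose = dose, n = n, y = y)[n > 0, ]
  if (sum(dat$y) == 0 || nrow(dat) < 2) {                       # model not feasible:
    return(list(decision = decision, next_dose = NA))           # rely on Step 1 alone
  }
  fit <- glm(cbind(y, n - y) ~ log(dose), family = binomial, data = dat)
  p_hat <- function(d) as.numeric(predict(fit, data.frame(dose = d), type = "response"))
  dose_at_phi <- exp((qlogis(phi) - coef(fit)[1]) / coef(fit)[2])   # assumes an increasing fitted curve
  # Step 3: combine the rule-based decision with the model-based estimates
  next_dose <- NA
  if (decision == "escalate" && j < length(dose)) {
    next_dose <- dose[j + 1]
    if (p_hat(next_dose) > phi) next_dose <- dose_at_phi        # calibrate an intermediate dose
  } else if (decision == "stay") {
    if (p_hat(dose[j]) > phi) decision <- "de-escalate" else next_dose <- dose[j]
  }
  if (decision == "de-escalate" && j > 1) {
    next_dose <- dose[j - 1]
    if (p_hat(next_dose) < d1) next_dose <- min(dose_at_phi, dose[j])   # next lower dose too low
  }
  list(decision = decision, next_dose = as.numeric(next_dose),
       est_dlt_current = p_hat(dose[j]))
}
# Example: 3/6 DLTs at the fourth of five provisional doses, target DLT rate 30%
hybrid_decision(dose = c(3, 6, 12, 18, 24), n = c(3, 3, 3, 6, 0),
                y = c(0, 0, 1, 3, 0), j = 4, phi = 0.30)

In the example call, the Step 1 check already forces de-escalation, since Pr(p_4 > 0.35 | 3/6 DLTs, Beta(1,1) prior) is roughly 0.80, which exceeds γ; the fitted curve is then used only to decide whether the next lower provisional dose or a calibrated intermediate dose should be taken.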
At the end of the dose‐escalation procedure, the DLT rates at all tested dose levels are estimated based on the dose‐toxicity model or the pool‐adjacent‐violators algorithm if the parametric dose‐toxicity model is not feasible. The dose with an estimated DLT rate closest to the prespecified‐targeted toxicity rate will be treated as a preliminary MTD. However, the totality of the available data, such as the emerging safety, PK, pharmacodynamic (PD) and other biomarker information, will be considered before deciding on the dose(s) to be carried forward to the next phase (eg, the cohort expansion phase or Phase II trial).
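The end-of-escalation summary just described might look like the following sketch. Base R's isoreg is used as a stand-in for the pool-adjacent-violators algorithm, the feasibility check is a crude placeholder for the paper's criterion, and the data in the example call are invented for illustration only.

estimate_mtd <- function(dose, n, y, phi) {
  dat <- data.frame(dose = dose, n = n, y = y)[n > 0, ]
  if (sum(dat$y) > 0 && nrow(dat) >= 2) {
    fit <- glm(cbind(y, n - y) ~ log(dose), family = binomial, data = dat)
    p_est <- as.numeric(predict(fit, data.frame(dose = dat$dose), type = "response"))
  } else {
    # isotonic regression (pool-adjacent-violators) on the observed DLT rates
    p_est <- isoreg(dat$dose, dat$y / dat$n)$yf
  }
  dat$dose[which.min(abs(p_est - phi))]   # tested dose with estimated DLT rate closest to phi
}
# Hypothetical escalation data, not taken from the paper's tables
estimate_mtd(dose = c(3, 6, 12, 18, 24), n = c(3, 3, 6, 9, 6),
             y = c(0, 0, 1, 2, 3), phi = 0.30)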
A TRIAL EXAMPLE We considered a dose‐escalation trial to determine the MTD and/or recommended dose for expansion of an antibody XYZ against a target expressed on immune cells, administered every 14 days in participants with selected tumor types. During dose escalation, cohorts of participants were treated with XYZ until the MTD was reached or a lower recommended dose(s) was established. The dose escalation was guided by an adaptive BLRM following the EWOC principle. During the dose escalation, additional cohorts of up to six participants could be enrolled at any planned or intermediate dose level below the next dose level or the MTD to better characterize safety, PK and/or PD activity. The MTD was defined as the highest dose not expected to cause DLT in ≥33% of the treated participants in the first 28 days of XYZ treatment during the escalation part of the trial. The provisional dose levels and the corresponding DLT numbers from the trial are listed in Table , where the third column shows the estimated DLT rate and predicted DLT rate using a logistic regression model. At dose level (DL) 6, the BLRM algorithm led to an intermediate dose level between DL5 and DL6. By way of comparison, the results from applying the hybrid design and other methods to determine the MTD are listed in Table . There were two DLTs out of six participants at DL6; thus, the 3 + 3 design gave an MTD at DL5. The dose‐toxicity model and the raw point estimate indicated that the DLT rate at DL6 was over the targeted 33%; however, both the mTPI design and the BOIN design led to a "stay at the current dose level" conclusion and continued to enroll additional participants at DL6. For the hybrid design, additional toxicity evaluation was added to the mTPI design rules. As shown in the third column in Table , the estimated DLT rate at the current DL6 and the predicted DLT rate at the next dose level (DL7) were over the 33% target toxicity; thus, the hybrid design led to a de‐escalation. However, the DLT rate at DL5 was too low; therefore, an intermediate dose level was recommended and the DLT rate at DL5.5 was estimated (Table ), which was closer to but still below the target toxicity of 33%. Screenshots of the hybrid design results using a developed R‐shiny tool and the corresponding R‐code can be viewed in the Supplementary material. Table indicates that the 3 + 3 design selected an undertoxic MTD, both the mTPI design and the BOIN design selected an overtoxic MTD, but the BLRM and the hybrid design selected a reasonably toxic MTD level between DL5 and DL6. Although the hybrid design reached a similar conclusion to the BLRM, the hybrid design implemented the logistic regression model in a frequentist setting, which did not require a Bayesian setting with priors and Markov chain Monte Carlo simulations.
Due to the unacceptable toxicity at the current DL6, an intermediate dose level, DL5.5, below but close to DL6, was likely feasible. NUMERICAL TRIAL We conducted a simulation trial to compare the operating characteristics of the proposed hybrid design with the mTPI, BOIN, Keyboard, CRM, BLRM and 3 + 3 designs at target toxicity levels of 0.20, 0.25 and 0.30. A total of 15 true toxicity scenarios are displayed in Table . In each scenario, there were five dose levels, with the true DLT in bold. It was assumed that the toxicity level monotonically increased with dose level. The true DLT was placed at different dose levels. A cohort size of three was used for all methods so that the methods were comparable to the 3 + 3 design. The maximum number of participants that could be dosed was 30. A total of 10 000 trials were simulated for each scenario. In the 3 + 3 design, the dose‐escalation procedure usually stops before the sample size reaches 30, given the nature of the design. To make the average number of participants treated at the MTD comparable, the remaining participants were treated at the selected MTD for the 3 + 3 design. For the mTPI and hybrid designs, the proper dosing interval was set to $[\delta_1, \delta_2] = [\phi - 0.05, \phi + 0.05]$, where $\phi$ was the target DLT. For the Keyboard design, the target key was set to $[\phi - 0.05, \phi + 0.05]$. For the BOIN design, an optimal interval was used, and for the BLRM design, a default dosing interval was used. As the BLRM and hybrid design require a dose‐toxicity relationship, dose levels of 3, 6, 12, 18, and 24 mg were assumed, corresponding to the toxicity levels for all scenarios shown in Table . In the BLRM, 24 mg was used as the reference dose. A noninformative prior of $(\log\alpha, \log\beta) \sim N\!\left(\begin{pmatrix} -0.847 \\ 0.381 \end{pmatrix}, \begin{pmatrix} 2.015^2 & 0 \\ 0 & 1.027^2 \end{pmatrix}\right)$ was applied according to Neuenschwander et al. The CRM utilized a 1‐parameter power model with a normal prior of $\alpha \sim N(0, 2)$. The following metrics were used to demonstrate the operating characteristics of the seven designs: (1) probability of correct selection: the number of trials with the target dose selected as the MTD/10 000; (2) average participants treated at the MTD: the average number of participants assigned to the MTD across the 10 000 trials; (3) probability of overdosing: the number of trials with the selected dose above the true MTD/10 000; (4) probability of underdosing: (the number of trials with the selected dose under the true MTD + the number of trials terminated early)/10 000. 4.1 RESULTS Figure shows the probability of correct selection and the average number of participants treated at the MTD for the seven designs, respectively. In general, the 3 + 3 design yielded a lower correct selection rate compared to the other model‐based and model‐assisted designs; thus, it treated fewer participants at the MTD. The correct selection rates and the average number of participants treated at the MTD for the hybrid, mTPI, Keyboard and BOIN designs were comparable across the 15 scenarios at the three different target DLTs. The hybrid design performed better than the mTPI design in all scenarios except scenario 5. This was expected because the hybrid design is based on the mTPI design, with a logistic regression model added for the improvement of accuracy. The hybrid design performed better than the BOIN design and the Keyboard design in all scenarios except scenario 15. The CRM had a better performance compared to other methods when the target DLT was 0.20 and for some of the scenarios when the target DLT was 0.25 or 0.30.
The correct selection rate for the BLRM was low, especially when the first dose was the target MTD. This may have been due to the conservative overdosing control rule $\Pr(p_1 > \delta_2 \mid \text{data}) > 0.25$. The dose escalation tended to stop early with the imposed overdose control rule. Overdosing control is the most relevant concern for protecting the safety of trial participants, and therefore it is strictly regulated by health authorities and ethics committees. Figure shows the probability of overdosing and underdosing, respectively. The hybrid design had a very robust performance and yielded a relatively lower probability of overdosing across all 15 scenarios. It had a lower overdosing toxicity compared to the mTPI, Keyboard and BOIN designs for all 15 scenarios. The overdosing rate became lower for the 3 + 3 design as the target DLT got higher, because the 3 + 3 design targets a fixed DLT range. The risk of overdosing using the mTPI design was high when the target DLT was 0.20 and 0.25, and the risk of overdosing for the CRM, Keyboard and BOIN designs was high when the target DLT was 0.25 and 0.30. The BLRM had a lower overdosing rate when the target DLT was 0.20. The BLRM was the most conservative method, meaning that it was more likely to treat participants at a suboptimal dose when the target DLT was 0.20. The hybrid design had a slightly higher underdosing risk compared to the other model‐assisted methods. The CRM had the lowest risk of underdosing in most of the scenarios. In terms of safety, the hybrid design had a uniformly lower chance of selecting a toxic dose as the MTD compared to the other model‐based and model‐assisted designs, with a reasonable underdosing percentage. Though the CRM outperformed other methods with regard to the correct selection rate at a low MTD rate and the risk of underdosing, it may aggressively select a toxic dose as the MTD. 4.2 Simulation for intermediate dose as MTD In a real trial, the true MTD is usually not in the preselected provisional dose list. One advantage of the hybrid design is that an intermediate dose can be calibrated and selected as the MTD when the predicted probability of toxicity of the next dose exceeds the target DLT or when the toxicity level at the current dose level is over the target DLT level. We conducted simulations to compare the performance of the hybrid design to other approaches. A typical model‐assisted design chooses a dose among the doses in the provisional dose list. If the true MTD is not exactly one of the provisional doses but lies between two adjacent doses, then a typical model‐assisted design has zero chance of selecting the true MTD. However, the hybrid design has a chance of selecting the true MTD because of its ability to use an intermediate dose. Since not all methods are able to calibrate an intermediate dose, it was not feasible to use overdosing and underdosing for comparison. We proposed a new measure, the mean dose level deviation from the target MTD, for which the optimal result is the lowest mean deviation, defined as the mean of $|\text{dose level selected from each simulation} - \text{target MTD}|$. In this new measure, the dose levels were ranked as 1, 2, 3, 4 or 5, from the lowest dose to the highest dose. If the target MTD was between dose Levels 2 and 3, the target MTD rank was 2.5. If an intermediate dose level between dose Levels 1 and 2 was selected by the hybrid design, the rank for the intermediate dose level would be 1.5.
If dose escalation was terminated early, meaning that the first dose level was too toxic, a rank of 0.5 was assigned. Table shows the four simulation scenarios with an intermediate dose as MTD. The target DLTs were 0.125, 0.20, 0.25 and 0.30, respectively. For each scenario, 10 000 trials were simulated. Note that only one intermediate dose between two adjacent doses was used in the simulation. The mean dose level deviation from the true MTD is displayed in Figure . The 3 + 3 design and the BLRM performed the worst among all the methods. The Keyboard and BOIN designs performed better than the mTPI design. The CRM and the hybrid design had a smaller deviation from the true MTD than other methods, but the hybrid design was better than the CRM in scenario 3. In summary, with all simulated scenarios, the hybrid design had the most robust operating characteristics and the best overall performance among the selected designs. The hybrid design controlled overdosing effectively and gave an MTD estimate closest to the true MTD.
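The new summary measure can be written in a couple of lines of R. Reading the definition above as a mean absolute difference is our interpretation of "deviation", and the selected ranks in the example are hypothetical (1.5 denotes an intermediate dose between levels 1 and 2, and 0.5 denotes early termination, as described in the text).

mean_dose_deviation <- function(selected_rank, target_rank) {
  mean(abs(selected_rank - target_rank))
}
selected <- c(2.5, 2, 3, 0.5, 2.5, 1.5)    # hypothetical outcomes from six simulated trials
mean_dose_deviation(selected, target_rank = 2.5)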
DISCUSSION In this article, a hybrid design for dose‐finding oncology clinical trials was proposed. The design is a hybrid of the modified mTPI design and a dose‐toxicity model, and it is a hybrid of the Bayesian approach for each individual dose and the frequentist approach combining the available information from all tested doses. This proposed hybrid design combines the merits of the existing designs and has a very robust performance.
At each individual dose level, it diligently focuses on the DLT severity by using an existing Bayesian approach, such as the mTPI design. By combining the observed information from all the tested dose levels at the next step, it utilizes a dose‐toxicity model to further characterize and quantify the DLT level. The DLT information from both the individual dose level and the model‐based quantification is used to recommend the dose‐escalation strategy. In principle, any model‐assisted design can be used in this hybrid design. Because of its greater effectiveness in controlling the overdosing toxicity, a modified mTPI was used in the hybrid design. The hybrid design facilitates close communication and discussion for optimal dose decision‐making among the study team members, particularly the clinicians and the statisticians, which results in a deep understanding of the data, leading to greater efficiency and more information for dose selection. For a dose‐escalation trial, the most relevant question asked by health agencies and ethics committees is, "How well does the trial design control overdosing?" Our simulation results indicate that the proposed hybrid design has the best overdosing toxicity control rate among the commonly used scenarios. Thus, the recommended dose from the hybrid design has fewer safety concerns. In practice, it is commonly challenging to have the MTD as exactly one of the prelisted provisional dose levels. The hybrid design is able to calibrate the MTD and recommend an intermediate dose level, if needed, using the dose‐toxicity model. The simulation results indicate that the recommended MTD from the hybrid design is much closer to the true MTD among the commonly used scenarios. Thus, this hybrid design has a high probability of more precisely selecting an optimal dose for Phase II and Phase III trials to reduce attrition rates in late‐phase oncology clinical development. In practice, a provisional dose level list is usually carefully selected before the trial begins. If, after the highest dose level in the provisional list is tested, there is no DLT observed and no clear efficacy‐related signal, then the dose‐escalation procedure may be continued, with additional participants enrolled at a higher dose level. At the end of the dose‐escalation procedure, the DLT rates at all tested dose levels are estimated based on the dose‐toxicity model. The dose with an estimated DLT rate closest to the targeted toxicity rate will be treated as the MTD. However, the totality of the available data, such as the emerging safety, PK, PD and other biomarker information, is considered before deciding on the recommended dose for the cohort expansion and Phase II trial. To help implement the proposed hybrid design, an R‐shiny tool ( https://fzh223.shinyapps.io/HybridModel/ ) has been developed and is freely available to guide clinicians in every step of the dose‐finding process. LIMITATIONS The hybrid design is built on top of a modified mTPI, and it requires pooling all available accumulated information and running a dose‐toxicity model to determine the next dose level. However, a web‐based, freely available R‐shiny app for the hybrid design was developed for simplicity of use. FUTURE DIRECTIONS The current hybrid design only deals with dose escalation for monotherapy. In recent years, more and more studies have focused on the combination of two novel agents. As such, we are working to expand the method for the dose escalation of two novel drugs in combination.
CONCLUSIONS The hybrid design is a hybrid of the modified mTPI design and a dose‐toxicity model, and it is a hybrid of the Bayesian approach for each individual dose and the frequentist approach for combining the available information from all tested doses. The hybrid design takes all the merits from the current existing designs and has a very robust performance. The hybrid design controls the overdosing toxicity well and leads to a recommended dose closer to the true MTD. With the integration of all available data and interpolation of information across dose groups on top of the modified mTPI, the hybrid design leads to more accuracy and efficiency for dose selection. The design procedure facilitates close communication for more robust dose decision‐making among the study team members, particularly between the clinician and the statistician, which results in a deep understanding of the data leading to greater efficiency and more information for dose selection. The work reported in the paper has been performed by the authors, unless clearly specified in the text. Jason J. Z. Liao contributed to the conceptualization, methodology, formal analysis, software and writing. Feng Zhou and Heng Zhou contributed to the methodology, software and writing. Lilli Petruzzelli, Kevin Hou and Ekaterine Asatiani contributed to the methodology and writing. The authors declare no potential conflicts of interest.
Caring for affective subjects produced in intimate healthcare examinations
1eb94b37-be2d-4d44-88b8-aeae60f0d75b
10084453
Gynaecology[mh]
The older man is sitting on the stretcher, his legs dangling over its edge incongruously. He is in his eighties, and at the urology clinic because he must urinate often and urgently, even though he claims he hardly drinks any fluid. The urologist, Hanna, comments to him that their patients often know where to find public toilets. This triggers a smile of mutual understanding in them both. The patient explains that he has an enlarged prostate and that he thinks it may cause these problems. Hanna states that his PSA is normal but that she will examine the prostate. She asks him to pull down his trousers and place himself on his left side on the stretcher with his back towards her. Hanna turns away to put on a plastic apron and gloves while the patient undresses, but the patient keeps on talking to her, which makes her turn towards him from time to time, even though she is trying to give him privacy as he undresses. At the start of the examination, they realize that the patient’s previous haemorrhoid surgery is making the digital rectal exam very painful, and Hanna applies generous amounts of topical anaesthesia crème. ‘This is completely normal,’ she says to the patient, intentionally reassuring him. But it is clear that he is uncomfortable. The above fieldwork note succinctly articulates aspects of care that we are going to discuss – care which is much more complex than the mere practice of providing an exam in the name of health care. This article is about the feelings – affect – induced by the digital rectal exam of the prostate and the gynaecological bimanual pelvic exam, and the care doctors are or are not instructed to give during courses that teach the exams to medical students in Sweden. Focusing on care allows us to ask questions about subject positions, assumptions about the patient experience and the teaching of medical professionalism that is influenced by norms and values often associated with intersectional aspects of identity like sexgender, sexuality, age, class, race and health. In our analysis we will be focusing on sexgender and sexuality. Social aspects of care appear to differing degrees around intimate exams. They, as analytically generative spaces, in turn, allow for us to think about the structures and norms that shape medical health care provision and produce sexgendered doctors and patients. Care, care work and emotional labour have been analysed in many different domains, especially as an essential element of certain professions , in particular health care . Much of this work has pointed to the tangled association of care with undercurrents of emotional attachment or domestic or familiar responsibilities and with its relationship to the body of the care giver and receiver , particularly in the realm of nursing. Care is also discursively employed in the description of tasks often awarded low prestige and low pay and actioned as a motivation in providing these tasks in the face of low pay and low status . The gendered aspect of care work is widely recognized, as is its relationship to class and race . Here one finds analysis of the frequent assignment of caring tasks to marginalized workers as well as analysis of how these tasks appear to be the target of labour process (re)organization . This connection to labour is often seen in rhetoric around the promise of technological developments for care provision, such as telemedicine and robotics , technology which would claim to complement or potentially replace care workers. 
Posthuman analysis of this care work articulates how care is assigned qualities such as ‘good’ and ‘human’ but then questions what these assignments do. These are all inspirational aspects of care that we take with us into our analysis. Yet, in this article, much of our analytical understanding of care is inspired by Science and Technology Studies (STS) and work on care practices that are situated and contextual and which can harm or reproduce vulnerabilities even as one attempts to care . The work we have observed involves being responsible for the provider’s and the patient’s affective response to the vulnerabilities produced by particular examinations and the situated materialities of where and how that care is provided. The materiality of the intimate exams (and discourse, always, forever, being entangled with it) also speaks to concern for how care is received and provided. The material aspects of the exams are part of our analysis (the table, the lighting, the speculum or ultrasound), but so are elements that structure the discursive practices of medicine – the binary understanding of biological sex that has historically produced gynaecology and urology as separate fields with different norms and values – as well as structuring discursive concepts of femininity, masculinity and sexuality. These human and non-human actors, cultural figurations, discursive and physical elements are all entangled into the knot that we, through this analysis, hope to loosen and discuss. Therefore, we attend to care as a medical practice with affective discursive and material (social and embodied) aspects, which is why we attend to affective subjectivities, see below. We will, at times, be using the term body to indicate the anatomical, biological, physical referent often assumed to be the object of medical knowledge construction. We see the term body as a categorizing tool, like sexgender, and in line with discussions that problematize the distinction between a physical body and a social subject . We employ body to reference an entangled understanding of patient subjectivities and medical anatomies that we are observing and note that it is often mediated by the material technologies of medical practice and their constellations, including the teaching methods and tools used to examine and represent it . ‘Affective subjectivity’ as a concept has found resonance in discussions about the relationship between psychiatrists and their patients, often in a therapeutic framework employing counter-transference and empathy. The term is used to connote, ‘the awareness of and reflection on our emotional responses and their influence on our work, and the development of a capacity for self-reflection and emotional attunement with our patients’ ( : 97; see also ). In these discussions, the affective subject is the provider of a health service, not the patient. The term recognizes the importance of a psychiatrist’s (or medical doctor’s) emotional response to their patients for the quality and type of care that the carer is able to provide. This is a conscious response and the term is used to draw attention to it. But the concept of the affective subject can be equally useful in analysing how medical care practices create specific patient subjects, or at least collective imaginings of them. Used in this sense, it draws on Judith Butler’s discussion of performativity, which emphasizes the interplay between repetitive acts of being (or being done to/on, as in our case) and the performance of acceptable subjects. 
She writes, We think of subjects as the kind of beings who ask for recognition in the law or in political life; but perhaps the more important issue is how the terms of recognition condition in advance who will count as a subject and who will not ( : iv). Using the concept affective subject brings attention to how patient bodies come into being through norms in medical practice, how bodies are encountered, observed and touched, through the material positionings of the patient and doctor bodies, as well as the feelings these examinations provoke. The concept also brings attention to how various pedagogical approaches consciously address how to handle one’s emotions in training situations, how to recognize and care for the feelings that particular patient encounters produce in a care provider’s body – or what they are expected to produce – and also how to display or hide them. This involves both acknowledging that one has been affected (in the way discussed by ; ) but also acknowledging the professional requirements of emotional control and learning the ‘proper’ feeling for the job and learning how to be a ‘doctor’ ( [1961]; ). Thus, we see affect and care as ‘tightly bound’ ( : 30) and intertwined with how patient subjectivities are produced in medical care practices. Our use of the concept of affective subjects can also be traced to , ) work on bodies and the way her ideas are engaged in Robert’s (2015) analysis of puberty. Hence, the concept affective subject shows how feelings are entangled with, and provoked by, bodies and how they come into being. Roberts uses the concept of an affective subject to explore how feelings (negative and positive) are often unreflectively entangled with a phenomenon. In her research, these are the practices and discourses of sexual development that produce affective subjects. In ours, they are the practices and discourses of intimate, invasive exams. Through its attention to affect and the subject, Roberts’ work, drawing on Haraway, inspires us to analytically explore those entanglements by trying to ‘loosen the knots’ of affect, care, materiality and medical practice ( : 129). In Robert’s work, these knots joined early onset puberty to female sexuality, youth and anxiety, exploring ‘how these three intertwined elements – findings, feelings and figurations – articulate sexually developing bodies as bio-psycho-social’ ( : 31). Roberts asserts that early onset puberty becomes a ‘gendered and gendering problem’, because of the attention given to female bodies and female sexuality (as problematic) in medicine, psychology and in public debates and compares this with the lack of research about male puberty. Reading along with her, we also loosen the knots that entangle the material-discursive practices in affective moments of care to articulate sexgender, sexuality and subject positions. Of note, however, is that most of these moments of affect are consciously reflected upon in our material; teaching the students the ‘proper way’ to deal with their emotions that are triggered by an exam, for example, but also teaching them to pay attention to the emotions that may be triggered in the patient. The affective subject of the care provider and the care receiver is conscientiously considered, but more so in one exam than in the other. This article is based on a research project about how intimate, internal examinations are taught and practiced by medical students, and how the body is produced in the doctor-patient relationship. 
We are comparing two exams where there are resemblances found in how they are performed and experienced: the digital rectal exam of the prostate and the bimanual pelvic exam, hereafter called the prostate exam and the pelvic exam. They are both invasive, intimate exams located at a part of the body often charged with norms and emotions related to sexgender and sexuality . They also involve trying to exam the internal body through an orifice, and they both are often used in association with ultrasound technology to ‘see’ inside the body. The prostate exam is conducted by inserting a doctor’s finger into the patient’s anus and feeling the approximate size, shape and firmness/texture of the prostate, to see if it is hard or lumpy, possibly indicating cancer. This exam can be part of a process of diagnosing cancer, benign prostate hypoplasia or prostatitis. The bimanual pelvic exam is conducted by placing the one hand on the patient’s abdomen and inserting two fingers of the other hand into the vagina to feel the firmness of the cervix, the uterus’ shape, placement and texture, checking for cysts on it and the ovaries. Both examinations are taught to all medical students and carried out at times by general practitioners, even though both are also standard exams for their respective specialties, urology and gynaecology. There are material-discursive differences, of course. For example, the prostate exam is often carried out in a regular examination room on a standard examination bed rather than in a specifically designed chair, which we will discuss later in this paper. Our aim in studying these two exams together is to show that the pelvic and the prostate exam do much more than determine the health (or not) of an anatomical body part. They also combine with cultural understandings to produce and reproduce very different affective subjectivities for the doctors and the patients in practices of care. At the particular medical education program in Sweden which we used as our case, students are taught the pelvic exam in learning sessions where they practice it on so called professional patients; volunteer cis females (someone who identifies as female and was assigned a female sex at birth) who use their own bodies to teach medical students doing the examination. This is offered at two occasions, during the second and fifth year. The first of these sessions focuses on teaching the exam step-by-step. Together, the teaching gynaecologist and the professional patients talked the students through the whole patient encounter, from calling in the patient from the waiting room, to talking with her, to how to position oneself as an examiner and how to move one’s hands, and even to when and where to look (both at the genitals, and when and how to make eye-contact with the patient). This is repeated during the final semester when the students again practice the exam on professional patients. At this second occasion, students also practice how to use a speculum, an instrument that opens the vaginal walls, used to facilitate many different procedures such as pap smear tests or insertion of an intrauterine device. This way of teaching the pelvic exam and to introduce it early in students’ education has been recognized as successful as it eases the students’ anxiety . In contrast, at this medical education program, prostate exams are taught during clinical practice, especially during the fourth year when the students are placed in surgical clinics and wards. 
They may have observed and/or performed rectal examinations in other clinics as well, since rectal exams are not conducted only to examine the prostate. But most rectal exams scheduled in the curriculum are done when practicing prostate exams. The students do the exam on actual patients at the clinic with meagre theoretical preparation. However, doctor-patient encounters at the urology clinic followed a well-established script: the patient initially being encouraged to narrate why he is there, followed by the doctor presenting a medical point of view, then conducting the medical examination or procedure, and ending with the doctor explaining possible findings and how to proceed. If a medical student is present, s/he will also examine the patient, contingent upon patient approval. Hence, these two intimate examinations are taught and conceptualized in different ways at this medical program. Our primary material consists of interviews with medical students, observations in hospital settings focusing on prostate examinations, and observations of students practicing pelvic examinations with professional patients. Gleisner, who is trained in anthropology, spent three evenings at a gynaecological clinic where, altogether, 16 students practiced pelvic examinations on professional patients. Gleisner followed a urologist for 2 days of meetings with prostate patients to gain insight into what a day at the urology clinic could look like and how a patient encounter could be carried out. During one of these days the urologist also supervised a medical student. Two urologists were interviewed about doing prostate examinations and supervising students, providing context and insight into their profession as well as into teaching their profession to students. Gleisner also conducted in-depth, qualitative interviews with nine medical students, lasting approximately 1 hour each. The students were in their second, fourth or final year of medical school in Sweden (medical education in Sweden spans 5.5 years). Interview questions addressed learning to perform these two examinations and, in relation to that, how to approach patients, touching and examining patient bodies, and the process of becoming a doctor. The limited empirical material is mitigated by the earlier research both authors have conducted on medical training, gynaecology and urology. Johnson, a medical sociologist at a department of gender studies, has integrated this study into a wider, interdisciplinary programme of research on the prostate and has collaboratively shaped the direction of the analysis. The analysis is also contextualized against ethnographic work with midwifery training done for a previous study and previous research involving observations of the gynaecological professional patient training compared with simulator training. We conducted our analysis together, through close readings of the fieldnotes, attention to moments in the observations and interviews that indicated a disruption or affective response, and reflection on these notes against other studies of care and affective subjectivities. In our material, we consider feelings and emotions as findings (c.f. ). We are writing about the feelings that doctors and patients express, and the feelings they are expected to have, drawing directly from ethnographic fieldwork (observations) of these exams in hospital settings and from interviews with students and doctors.
Bringing attention to feelings and emotions in students’ training elucidates how the prostate exam and the pelvic exam also become ‘gendered and gendering problems’ because, as we will show below, normative understandings about a patient and the body are entangled with how these exams are taught and carried out in medical care practice. Thus, this study primarily relies on ethnographic observation and interviews with Swedish medical doctors and students, which limits the types of generalizations which can be drawn from a qualitative study of this sort, with what could be considered a small body of material. Additionally, our material is bounded by language (citations have been translated by the authors from Swedish) and must be contextualized as occurring in a tax-funded healthcare structure that provides universal healthcare for nominal cost at point-of-delivery. However, and in line with the ethnographic traditions of sociology, STS and anthropology which we work in, we feel the empirics presented here generate a space to reflect upon some situated, sexgendered aspects of how affective care is taught by and to health care professionals, how care for the patient’s understanding of what is normal is imagined, and how care for an individual patient’s integrity and modesty during the two intimate exams is discussed. The study has followed the ethical guidelines of the Swedish Research Council for research with professionals about their profession or the learning of that profession. In line with the practices of our institution, ethics board approval was secured for the umbrella study within which this was placed. All participants granted informed consent, and written approval from the director of the medical program was procured. Even though the focus of this study is students’ learning and educational practices, both professional patients and clinical patients were present during observations. They were all informed about the study prior to observations and granted informed consent. No identifying information about patients was collected. The pelvic examination The professional patient Linda is positioned in the gynaecological chair, partially reclined on her back and with her legs in stirrups. Fluorescent lamps light up the room and an additional lamp is directed towards her genitals. She is dressed in a hospital gown, blue with buttons down the front, and with long white socks to warm her. She has an extra pillow behind her head so that she can maintain eye contact with the students without straining her neck. The socks and the extra pillow are only used for these special occasions and not during regular patient examinations, the teaching gynaecologist explains. Two second-year students stand on either side of Linda and a third one is between her legs. The students seem nervous, eyes flickering, hands constantly moving as they have not yet learned how to position their bodies or where to look when meeting patients. In a firm voice, Linda tells the student in front of her to place his hand on her knee before adjusting the chair. ‘It makes me feel safe and not as if you are sending me through the ceiling’, she says. She tells him to raise the chair up a bit further and explains: ‘This is your working environment and you have to think about that. If you’re standing in a crooked position all day, it will cause pain in your back and shoulders’. She turns to the whole group when saying this and the students begin to laugh. 
This scene is taken from observations of a training session where second-year students practiced doing the pelvic exam on professional patients. The medical doctor’s (and in particular, the medical students’) affective responses to the patient and the examination were fundamental to the development of professional patients in gynaecology examinations at the university hospital where our work was conducted as well as for gynaecology training programmes internationally . While mention may be made of how unpleasant it is for a female patient to be the body upon which a new student is learning the exam, literature about the use of professional patients also underlines that students probably think that their first pelvic exam is difficult, both physically and emotionally . This is partly because the examination is largely based on the sense of touch, as one is examining interior parts of the body not visible without an ultrasound. It is hard to know if one is touching the right things and in the right way. But it is also imagined to be because one is examining the female genitals, and the medical student is positioned between the legs of a half-naked woman. This is presented as a normal cause of emotional stress. One does not normally find one’s self in that position, and that emotional stress is not only ‘normalized’ by the professional patient training, it is also recognized, legitimated and then the student is given tools by the professional patient and the instructor to deal with their emotions (c.f. ). During observations of learning sessions with professional patients and in the interviews, students mentioned and expressed feelings of gratitude, of awe, fascination, a sense of pride and wonder at having done the exam ‘the first time’. ‘This is so cool!’, one of the second-year students said when watching the teaching gynaecologist point out the ovaries made visible through the ultrasound device. But she immediately corrected herself by saying to the students next to her ‘Right, don’t be so enthusiastic’. The students are taught to think of the female genitals as amazing, but also taught to not show this wonder, as doctors, during an examination; to consciously control their affective response as professionals. Yet, acknowledging the feelings evoked through the learning session was encouraged by the teaching gynaecologist. At the end of the learning session she asked how they (the students) felt. There was an unquestionably positive fascination of the female genitals and the conversation during the post-exam teaching session included remarks about everyone now wanting to become gynaecologists. This intentional affective training produces positive feelings evoked through the training on the professional patients. Through the professional patient exams, the students are allowed to be ‘affective subjects’ and are taught how to professionalize that affect in a caring way which entangled respect, wonder and awe for the female genitals together with the stress and embarrassment of finding one’s self between the legs of a half-naked woman. Thus, through the learning sessions, there is both learning how to be affected as a professional subject and the expression of a collective responsibility for those being affected and cared for (c.f. ). The prostate examination The urologist Hanna, the fourth-year medical student David, and I (Gleisner) are sitting in Hanna’s office. There are a few minutes left before we need to go see the next patient. 
Hanna tells us about him: a man in his sixties experiencing urination problems. PSA-tests have shown increased levels lately. And his father had recently undergone surgery due to prostate cancer. Just like with the previous patients, Hanna walks into the examination room first to ask the patient if David and I could participate. We are let in, just like with all patient meetings that day. Hanna pulls out chairs for us and we sit down close to the patient. Hanna begins by asking him to narrate why he is there, at the urology clinic. He tells us about his worries, that he was afraid that he also has cancer, just like his father. Hanna listens to him, talks about his PSA-tests and then explains that she is going to examine the prostate, do an ultrasound and a biopsy. She instructs him how to position himself on the examination table. While the patient pulls down his pants and underwear, and lays down on his side on the stretcher, with his back turned toward us, Hanna and David put on plastic aprons and gloves. Hanna sits down on a stool, puts lubricant onto her right index finger, rests her left hand on the patient’s hip and tells him that she is going to start examining. She gently inserts her finger into the patient’s rectum and then goes down on one knee trying to reach further in. Hanna later explains to us that she needs to do that sometimes because her fingers are so short. When David examines the patient after Hanna, she mumbles to him if he had felt ‘that’ and David hums in response. They change places again so that Hanna can do the ultrasound examination and the biopsy, assisted by a nurse who has entered the room. Hanna slowly inserts the lubricated ultrasound probe. She points at the screen and explains to David and me the contours of the prostate visualized through the device. Without saying anything, she taps her finger at a dark spot that she apparently wants us to notice. Throughout the patient encounters observed at the urology clinic, the urologist turned around while the patient undressed or dressed himself. This mirrors a strategy that was explicitly taught to the students when practicing pelvic exams on professional patients, the production of care for the patient’s personal integrity. However, affect in the doctor and the patient encounter are taught and expected differently in the learning situations of these two exams. For example, rather than eliciting a ‘wow’ response in the medical students, like the female body did, examining the prostate rather provoked other kinds of feelings in describing the exam and the body. When asked during interviews if they remember doing the first pelvic exam, the students vividly described the shocking, fascinating or amazing experience it was, as described earlier. They could, in detail, recall the time and place for it as well as their own experiences, of course being reinforced by the specific learning sessions and that this was the first bodily exam they practice (except listening to hearts and lungs). When the fourth and final-year students were asked if they remembered their first prostate exams, their responses differed from the gynaecological exam on two points. Either that they did not recall it at all (‘It must have been during the placement in surgery/urology. . .’) or it was described in relation to teaching aspects, or rather the lack of teaching aspects, as in: ‘The supervisor just told me to insert my finger’ or ‘Now it’s your turn’ he said, ‘without any kind of instructions of how or what to search for’. 
One of the final-year students, however, described the experience of doing a prostate exam for the first time in relation to the patient’s body. She said, I remember that I thought to myself ‘I have my finger in another person’s butt. This is so weird’. But it was over so quickly. And after you’ve done a couple of them, eventually it’s not a big thing. When asked about describing the prostate examination compared to other examinations they learn, one of the fourth-year students said, ‘It’s not an exam that you choose to do’. When prompted to elaborate, he laughed and said, ‘Well, there are more enjoyable things’. He continued, It’s a taboo thing. . . nothing you would do outside of the examination room. You do it in the role of being a doctor and then it’s okay, the normal boundaries disappear, and it feels relevant to do it. So yeah, it’s a special kind of examination but the more you do the less stigmatizing it gets. But now I also know what to look for when examining. Another student also mentioned that there was a distinction in the frequency, or at least the perceived specialness of doing pelvic examinations compared to rectal examinations. Pelvic exams are usually done at special clinics, while prostate exams are often done more ‘off the cuff’.’ The student laughed a little uncomfortably . That sounded horrible. Not ‘off the cuff’, really. . . but digital rectal exams are used for many different indications, and it isn’t just prostate palpations that can indicate them. While the pelvic exam is almost exclusively discussed as a sensitive exam in literature (see ; ; , ) as well as in the empirical material, the prostate exam is more ambiguous. It is given little attention in the students’ course literature, even though it is also described as ‘. . . one of the more intrusive examinations’ and that ‘men experience it as unpleasant’ ( [2006]). But the students are not advised during training on how to care for the patient’s experience as carefully as during the gynaecology courses. Engaging with the concept of affective subjects employed in psychology, we can consider how during this moment of teaching a basic element of healthcare, the doctor and student were being affected by the patient – but not the patient-as-prostate, rather the patient as an entanglement of potential disease, age, fear, angst and stoic masculinity . The bodies they are examining are there because of a problem and the fear of disease, cancer and death is present in the expectations of those in the room, which, of course, makes them present and capable of producing affect. Given that the patient is already at the doctor’s office because of a health concern and not generally for a standard check-up (rare in Sweden) or as a professional patient, the student may feel it is more imperative to maintain a disaffected demeanour in front of the patient than they would in a purely teaching situation. However, these sources of affect are not articulated during the teaching of the exam, they are merely ignored verbally and addressed through the choice of examination method and position. This non-recognition of the affect is (re)producing the stoic patient and the unaffected urologist – two subjectivities which have been reflected upon in sociological studies of prostate cancer practices . And apart from the affective state of the already worrying patient, the prostate exam, itself, does create affect, which needs care directed at it, in both the students and the imagined patients. 
As the example of what position a prostate patient is examined in, below, will show, there is a good deal of affective response to imagined sexual norms and taboos that are considered and cared for when dealing with prostate exams. Looking more closely at the material positionings of the patient and doctor bodies elucidates the emotional responses entangled with these exams. For example, material difference is found in the table/chair that the patient's body is placed upon. Hereon we will explore the context-specific sites of care and the material affordances of the patient and doctor bodies in the affective work of the care that is provided. When entering an examination room at the urology clinic, one could not immediately see what kind of examinations were carried out in there, or even which clinic it was. In comparison, an examination room at the gynaecological clinic was unmistakably recognizable by the gynaecological chair dominating the room. The gynaecological chair dictates the patient's position and is designed for the gynaecologist's needs. It provokes certain feelings attached to the patient's exposed position. Some minor adjustments can be made, for example to the height of the stirrups. But it is primarily the patient who needs to adjust to the examination chair, rather than the other way around, an adjustment that the gynaecologist initiates by asking the patient to move further down into a position where the patient's pelvis is tilted into a favourable position for the examination. Patients and doctors describe the position as if one is almost falling out of the chair. The height of the gynaecological chair could be altered depending on how tall the examiner is and whether they are standing or sitting on a stool. Reflecting back over the scene from the pelvic exam, we would like to analyse something peculiar that brought the students to laughter during the training session.
The professional patient who took on the instructor’s role not only taught and guided the students in doing the examination in a proper way but also cared for their experience of doing the examination as well as their role of becoming an examiner. The latter included very practical aspects of examining such as the working position. These inexperienced second-year medical students, who had so far only listened to hearts and lungs, saw and did their first bimanual pelvic examination and they had just been told how sensitive it is and the importance of caring for the patient’s feelings. Meanwhile, the woman in the gynaecological chair, half-naked and in a position that is often described as exposed and vulnerable, encouraged them to think about the ergonomics of their own working position. The irony of this redirection of care practices produced a discursive rupture and induced laughter. In contrast to the gynaecological chair, the examination table/bed is not exclusive for urology, it is part of the standard equipment in many kinds of examination rooms. Nor is there one specific position for having one’s prostate examined; doctors may prefer different positioning of the patients. The patient could be in a forward-bending position, standing on the floor with hands on the examination table, or on hands and knees on the examination table, or, as depicted in the excerpt presented earlier, laying on his side. In all these different positions the examining doctor is positioned behind the patient, which inhibits eye contact. One of the urologists said, You can’t see if the patients are in pain since they lay with their backs towards you. That makes this situation special. Otherwise you could just look up, but in this case, you have no idea what he is feeling. Though this comment indicates that the urologist is aware of and trying to be responsive to what the patient is feeling, the constellation of material artefacts and bodies in the room prevents a visual indication of the patient’s experience. As the patient’s position is not predetermined by the examination table, and doctors may have different preferences for how to examine the patient, the doctors must explain to the patient how they should position themselves on the table each time the patient enters the examination room. A different urologist, who preferred the patients to be standing up and leaning over the table, explained, I usually prefer them standing as it gives you a better notion of the prostate. But it is a bit tricky. It is a very. . . well, it is a difficult situation. That position is difficult for a man. The position the doctor in this quote is referring to, when the patient has pulled down his trousers and bends over while the doctor is standing behind inserting a finger into his rectum, shows that it is not only what is examined that is sensitive but also how. This positioning of bodies brings with it associations of receiving anal penetration, which adds to the sensitivity of the situation. By being behind the patient, eye contact is prevented, or at least made difficult. This makes ‘easy’ a practice of avoiding eye contact, of looking away metaphorically and literally as the patient endures a procedure which is possibly both painful and uneasy, with sexual undercurrents that could be embarrassing . The students in the fourth and in the final year who had done the prostate examination also spoke of this difficult patient position and the benefits of having the patient laying on his side due to this. 
This was the position they had most often seen in practice. The most preferable position was thus negotiated in relation to the patients’ feelings of discomfort, rather than the doctor’s best option. And, as the scene of the urologist kneeling beside the examination table to get her finger further into the patient’s anus demonstrates, there is little concern for the ergonomically planned work environment in the urology clinic. Hence, the urology exam positions the patient as the important factor in determining position – the urologist needs to contort their body into odd positions to feel the prostate when the man is laying on his side – while the gynaecological chair is used to position the patient into a way that allows for comfortable examination for the gynaecologist. The material-discursive practices of the two exams do care through privacy or trust to varying degrees. They both employ understandings of discretion and the desire not to expose the patient, but also to different degrees. The gynaecological examination room has an additional lamp that could be adjusted as a spotlight directed at the patient’s genitals and a curtained-off changing room behind which the patient can undress, creating privacy and also articulating when and where the patient is observed by the examining doctor. The students were told not to look at the patient undressing or when climbing into the chair, but to wait for the patient to be in the chair; that is when they should look. When the teaching gynaecologist and the professional patient initially showed how to perform the exam, they both encouraged the students to look. ‘Come closer so you can see’, the gynaecologist said. The students slowly approached, but their unease seemed to switch into amazement when the gynaecologist used the speculums to open the vaginal walls so that the students could see the cervix. In contrast, during the prostate exam, the lights were dimmed, as it helps to see the ultrasound image more clearly. When asked about this, the urologist said of the body on the examination table, ‘there is nothing to see’. When doing a prostate examination, there is nothing to observe on the outside of the body. What is observed is first felt by the inserted fingers and then digitally visualized through the ultrasound device that shows the contours, size and shape of the prostate and possible abnormal findings. There is a materialized irony of this in the examinations, given that the gynaecological chair produces an exposed and vulnerable body even as that body is protected in the actual care practices of the pelvic exam. Quite the opposite occurs in the prostate examinations, where the darkened room and choice of examination position reduces the patient’s exposure, even as the practices and people examining him do not discursively label this as ‘care’ for his feeling of modesty and integrity. That ‘there is nothing to see’ when doing a prostate exam does not mean that there is nothing to be exposed. We suggest that dimming the lights can be read as a way of caring about the patient and his exposed and vulnerable position, in a way that is not considered (necessary) for the gynaecology patients because care for their integrity is done in other practices. The material aspects in the context-specific sites of urology and gynaecology contribute to discussions on different affective subjects produced in medical care practice, what is cared for and what is dismissed. 
As studies of care in practice show, care is continuously done between practitioners, the ones being cared for and the material, but also its history and context. The material world of gynaecology creates and exposes a tenuous subject position for the female body. The material world of urology allows for a protected and respected male body to be examined. We will discuss historical trajectories of these two fields and their impact on affective subjectivities, below. We began this article by introducing aspects of care that appear to differing degrees around intimate exams and which we want to continue discussing, with a concern for care as a medical practice; that care is much more complex than the mere practice of providing an exam in the name of health care; care allows us to ask questions about subject positions; care is entangled with assumptions about patient experiences and influenced by values attributed to sexgender, sexuality, age and health. Through a perspective on care with a concern for the material-discursive production of intimate exams, the article has detailed medical care practices that create specific patient and doctor affective subjectivities. It has done so by looking into how medical students are taught and reflect upon such an intimate examination as the digital rectal examination of the prostate, comparing and contrasting this with another intimate examination, the bimanual pelvic examination, asking who is being cared for and reading bodies as material-semiotic actors with attention to affect and the subject. The students were aware that both of the exams were producing emotional responses (affective subjectivities) in themselves as care givers and potentially in the patients. Yet, as we have shown, there are differences in how students are taught to handle the meeting between patient and doctor in the prostate exam and in the pelvic exam. We have been struck by the production of caring and respectful gynaecologists who speak to the patient, look her in the eye, prepare her for the exam, explain what will happen and what the doctor is about to do, all the time working with an understanding of the patient as a sensitive subject who may be experiencing this as an unpleasant exam and trying to make it slightly less unpleasant. This became even more striking when we saw how it contrasts with the scene in the prostate exam, and the production of a less affective subject, one which is less concerned with emotions, or with the affect of the patient. Hence, the evening courses for the students with professional patients were meant to provide insight into the patient's experience. The way the pelvic exam is embedded in clinical work practices engages discretion to make the patient feel comfortable and secure. In contrast, even though the urologist would turn her back to let the prostate patient undress, and the students reflected on the affective nature of the prostate exam to some degree, the structural elements that produced the affective gynaecology patient were missing in the prostate exam. We suggest that there are historical and structural explanations for this. The gynaecology teaching programme we observed was initiated and run by an older, well-established gynaecologist at the university hospital who had, as a young medical student, been shocked by the use of unknowing, anaesthetized female patients as bodies to train the gynaecological exam upon. The professional patient programme was developed as a way of improving the care women can receive.
This resonates with the reasons for using professional gynaecology patients in many other teaching hospitals and medical schools, and reflects a concern for the patient experience that may be traced to the impact the women's health movement has had on medical training and practice. The teaching and practicing of the urology exam does not seem to address the potential for embarrassment or discomfort with the same articulated practices or structural/clinical routines, even though established urologists interviewed for a different part of this study indicated that they have, in the course of their career, adapted their patient-doctor demeanour to address patient discomfort and embarrassment. Men's health care has not experienced the same political pressure to adapt to patient demands for changed care practices or epistemological critique, which, as Epstein notes, may reflect the political untenability of men organizing around their identity as, specifically, men. Though, we realize that this has not stopped some men from trying, and that patient/disease-specific activism would appear to be coalescing around particular diseases associated with male bodies, like prostate cancer. The different historical developments of the fields of urology and gynaecology, the various political pressures they have felt from activists inside and outside the medical establishment, and the way the patients they treat are imagined, produce different affective encounters. We also note that the structures around teaching and providing care are reproducing gendered understandings of the patient. We have observed variations of what, in their extreme, is a sensitive, sexually vulnerable woman easily embarrassed or triggered by an exam of her reproductive tract (c.f. ) and a strong, insensitive man who just gets the job done (c.f. ). These are also reproducing gendered understandings of the professional performing the care provision. Gynaecology is still, in Sweden, a very female-dominated field (67%) and urology is male-dominated (only 16% women). While the students we followed were a mix of male and female, the exams they were learning were connected to fields dominated by a single sex. Given the observations about the female dominance of care and emotional work, especially in health care, it is perhaps no surprise that the field that is populated by female bodies (gynaecology) is the one that discursively embedded the exam in articulated work to produce sensitive, caring practice. This reproduces a sexgendered understanding of the future gynaecologist. Likewise, even though the doing of care and discretion were observed in the urology exam (as mentioned above), this field is not reproducing female sexgendered practitioners who are expected to be 'caring' in the same way, and is, rather, reproducing a male urologist ideal (even if the body actually doing the care happens to be female), one which the literature on care would suggest may be less interested in vocalizing the emotional work of care. And we emphasize: this is not to say that urologists do not care or do emotional work; they do. However, articulating this aspect of their professional identity by talking about it and discursively reinforcing it in the teaching was not something we observed to the same extent. The teaching of these two examinations in the medical education of doctors contributes to these different subject positions for the patients and doctors.
Gynaecology students are taught by professional patients and are actively encouraged to think about their own and the patient's emotions, background and past experiences, and also how to express themselves during the patient encounter. The urology students are taught by practicing urologists during actual examinations. Looking into teaching situations also draws out the collective aspects of medical students learning a profession and learning the 'proper' emotional response in the examining situations. As we watched students learning the prostate and pelvic exams, we heard them expressing affect in very different ways and observed very specific types of subjects being performed. The prostate exams produced affected medical professionals who were displaying affective neutrality. In the gynaecological exams, we saw the production of affected subjects who expressed awe and respect for the body they were confronted with. The bodies being examined were expected to perform the patient in normative ways which reproduced their sexgendered status and norms and values related to that (often narrowly and heteronormatively) sexgendered subject. Using a professional patient will produce a very different environment for teaching and learning than practicing an intimate exam on a patient who is there because of concerns and worry about their health. And we wonder whether the overwhelming presence of cancer fear is relevant to consider in the urology exam, but also if the average age and status of a urology patient are useful in explaining these differences. This is something which Gleisner has analysed elsewhere with her interlocutors in the field. While it is not the focus of this article, we suggest that these different circumstances are also related to the different imaginaries of the patient's needs and expectations, and are impacted by the historical developments of the fields and the sexgendered subjectivities imagined by the doctors about the patients (c.f. ; ). And, as Gleisner notes, it may be beneficial to think about how the use of professional patients for prostate exams – something which does occur at some urology departments in the USA – could be engaged more. To conclude, we want to return to the knots of material-discursive entanglements in practices of care that we have tried to loosen and discuss. The affective patient and doctors are produced in material-discursive constellations of human and non-human actors, of tables and tools as well as naked bodies and those wearing hospital scrubs. The body of a subject becomes particularly relevant in medical examinations, as its entanglements with social characteristics, affective responses and relational intra-actions with other people and things in an examination room produce the object of examination, the particular anatomical body which the subject becomes during the exam. But, as our examples show, that anatomical body is much less flesh and blood and much more affective responses and relational intra-actions than a medical textbook would suggest. And the ways medical students are taught to approach the patient's body in particular exams are an important part of producing affective subjects. Finally, the 'affective subjects' that are being cared for demonstrate the entangled knots of affect, sexuality, masculinity, femininity, patienthood and care in the practices of pelvic and prostate exams. 
Affective subjects are done in relation to each other throughout the care elements of the exam – producing the doctor's authority as neutral and emotionally unaffected in one case, and as sensitive and respectful in the other.
Fat embolism after intraosseous catheters in pediatric forensic autopsies
f62e28cc-963c-41f9-bf29-dc6aad5ec000
10085886
Forensic Medicine[mh]
Fat embolism is a well-known phenomenon in orthopedic surgery and in forensic medicine, most often in cases of blunt trauma with bone fractures, involving mainly the lungs and more rarely the brain or the kidneys. In forensic medicine, in some cases of traumatic death, the occurrence of pulmonary fat embolism (PFE) can be considered a vital reaction. Some studies have shown that PFE can also be found without bone fractures, in cases of corticosteroid treatment, fatty liver, diabetes, osteomyelitis, burns, liposuction, cardiopulmonary bypass surgery, decompression sickness, parenteral lipid infusion, hemorrhagic pancreatitis, carbon tetrachloride poisoning, massive hepatic necrosis with fatty liver, heat exposure, and sickle-cell disease, or in cases of diffuse soft tissue contusions [ – ]. Finally, it is known that PFE can be detected after cardiopulmonary resuscitation (CPR) by external cardiac massage with rib cage fractures. In our center, we performed the autopsy of a child who died from drowning and presented, at histological post-mortem examination of the lungs, major pulmonary fat embolism, with a score of 2 according to the Falzi scoring system. The child had been resuscitated for approximately 1 h by external cardiac massage, airway intubation, and infusion through two intraosseous catheters in the tibial region, without return of spontaneous circulation. No traumatic lesions and no natural diseases were observed after multiple post-mortem investigations (PMCT, MRI, autopsy, histological examination, and biochemical analysis). In the absence of traumatic lesions or preexisting pathologies classically related to a risk of PFE, we considered infusion by intraosseous catheter (IIC) the only possible explanation for the PFE. PFE after IIC has been described in some animal models [ – ], but is only poorly described in humans, especially in the pediatric population. For this reason, we carried out the present study to verify the occurrence of PFE after IIC in a resuscitated pediatric population. The pediatric population is ideal for this type of study, as thoracic fractures after CPR are less frequent than in adults, thus limiting this potential confounding factor. Pediatric forensic autopsy cases of the University Center of Legal Medicine of Lausanne and Geneva (Switzerland) were reviewed from 2014 to 2020. We selected pediatric death cases (age < 15 years) in which CPR had been performed. The cases were divided into two groups, group A with IIC and group B without IIC. In all cases, an autopsy had been performed, preceded by PMCT and followed by toxicological, biochemical, and histological analyses. To avoid any alternative cause of PFE, we excluded cases with bone fractures, large soft tissue contusions, preexisting diseases associated with a risk of PFE (for example, fatty liver, diabetes, sickle-cell disease), and corticosteroid treatment. As exclusion criteria, we also used a post-mortem interval > 72 h and a survival time interval > 72 h. For each case, we recovered samples of the five lung lobes that had been taken during the autopsy and stored in formaldehyde in our archives. To perform a histological examination specifically aimed at detecting PFE, frozen histological slides were stained with Oil Red O (ORO), a stain specific for fat droplets. The ORO-stained slides were examined by two examiners, with a final consensus evaluation. 
To quantify the degree of PFE, a score was assigned to each case by using the Falzi scoring system, adapted by Janssen (Table ) . The results of the two groups were compared to search for a statistically significant difference by using a Student t -test. We selected a total of 20 cases of pediatric deaths in which a CPR was performed: 13 cases with IIC (group A) and 7 cases without IIC (group B). In group A, 7 cases received one tibial intraosseous catheter and 6 cases two tibial intraosseous catheters. The median age was 2.3 years for the group A and 7.3 years for the group B. The causes of deaths in group A were drowning ( n = 4), hanging ( n = 1), sudden infant death ( n = 4), infection ( n = 2), food inhalation ( n = 1), and febrile seizures ( n = 1). The causes of deaths in group B were drowning ( n = 2), hanging ( n = 2), sudden infant death ( n = 2), and infection ( n = 1). The demographic data and causes of death are reported in Table . In group A, 8 cases showed PFE (61%), 4 with a score 1 of Falzi and 4 with a score 2 of Falzi (Fig. ). In group B, no cases showed PFE. The difference between the two groups was statistically significant ( p = 0.0119). The results are shown in Table for group A and Table for group B. In group A, we did not find any correlation between PFE and other factors such as the age or the number of intraosseous catheters. The use of intraosseous devices in critical patients is known since 1922 and is currently the standard alternative to intravenous access, especially in the pediatric population. Many techniques can be used for intraosseous infusions, the three main types being the manual, the impact-driven, and the power-driven needles . The needle passes through the bone cortex into the marrow cavity, where the infusion, usually of an electrolytic solution, takes place. Due to a smaller medullary cavity in young patients, intraosseous catheter placement is more difficult than in adults, and complications are more frequent. The main complications that have been described in the literature are extravasation of fluid, necrosis, limb ischemia, infection, fracture, or compartment syndrome [ , , ]. In the literature, PFE following IIC has been investigated in some studies on animal models (dogs, piglets, and swines), sometimes with contradictory results. In two studies on piglets and swines, the authors did not find PFE after IIC . On the contrary, in another study on piglets, the authors found the presence of PFE after IIC in about 30% of the cases . Furthermore, in this study, the authors tested different methods of IIC and concluded that the volume and pressure of infusion do not influence the incidence of PFE. In another study on swines, the authors concluded that PFE is a common consequence of IIC and that its magnitude is influenced by the site of cannulation and the infusion forces . Concerning humans, in a study on an animal model (dogs), the authors mentioned two cases of children who had received IIC and in which they have found PFE, but the link between PFE and IIC in these cases was not further discussed . To the best of our knowledge, this phenomenon has never been well investigated until now. In a 2010 review about intraosseous infusions for pediatric use, the authors conclude that despite these animal studies, there have been no documented cases of fat embolism after IIC in infants and children . Even in a more recent review about use of intraosseous needles in neonates, there is no mention of human studies . 
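As a quick illustration of the group comparison described above, the sketch below re-runs the Student t-test in Python; the paper does not state which software was used, and the per-case Falzi scores here are reconstructed from the reported counts (group A: 13 cases, of which 4 scored 1 and 4 scored 2; group B: 7 cases with no PFE), so this is an approximation rather than the authors' own analysis.

```python
# Minimal sketch, assuming per-case Falzi scores reconstructed from the reported counts.
from scipy import stats

group_a = [0] * 5 + [1] * 4 + [2] * 4   # IIC group: 8/13 cases with PFE (4 x score 1, 4 x score 2)
group_b = [0] * 7                       # no-IIC group: 0/7 cases with PFE

# Student t-test with equal variances, as stated in the text
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # lands close to the reported p = 0.0119
```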
To the best of our knowledge, our study is the first specifically aimed at verifying the occurrence of PFE after IIC using autopsy data from a pediatric population. The results of our study seem to confirm that IIC can lead to PFE in a pediatric population and, above all, show that PFE after IIC can be substantial (up to a Falzi score of 2). Concerning the pathophysiology of PFE, the studies on animal models suggest a mechanical origin, related to the fact that intraosseous infusion increases the intramedullary pressure and can lead to microscopic fractures of the metaphysis of the bone and vascular damage, with possible penetration of fat droplets into the blood circulation. This mechanism seems plausible to explain the presence of PFE in our pediatric cases, especially in relation to the smaller medullary cavity of children compared to adults. According to the literature, PFE after CPR related to rib fractures is usually mild (Falzi score 1) and not considered a factor that can limit the effectiveness of the resuscitation. However, in our study, we detected substantial PFE (Falzi score 2) in about 30% of the children who received an intraosseous infusion. Although this score is usually not considered a potential cause of death in itself, it is questionable whether it can reduce the effectiveness of CPR, for example, by hindering tissue oxygenation and/or increasing right ventricular afterload. In a 2001 study, Hasan et al. had already concluded that the clinical relevance of PFE after IIC was unclear and that more studies were necessary to describe this phenomenon more precisely. Our study suggests that IIC during CPR can lead to PFE in a pediatric population, possibly through an increase in intramedullary pressure with microscopic bone fractures and penetration of fat droplets into the blood circulation. Furthermore, about 30% of our cases that received IIC showed substantial PFE with a Falzi score of 2. This result raises the question of whether PFE after IIC can reduce the effectiveness of resuscitation in a pediatric population. Given the need for multiple exclusion criteria, the low number of pediatric forensic autopsies, and the fact that cases without IIC are few, we were not able to review a large number of cases. However, considering the potential clinical implications of our results, we believe that further research is needed.
Validation and optimization of the diatom L/D ratio as a diagnostic marker for drowning
2b01357b-e717-4ac3-815b-68787ea6ea30
10085902
Pathology[mh]
The diagnosis of drowning is a very difficult, and yet crucial aspect in forensic routine. Although there are numerous known macroscopic and microscopic indicators of drowning, most of them are insufficiently specific, only transiently applicable, and/or suffer from significant limitations depending upon injury, drowning medium, and/or post-mortem interval (PMI) . Especially in the case of advanced decomposition, conventional methods have limited scope. In search of a method that can provide reliable and unambiguous evidence, one repeatedly comes across the examination of organs for diatoms. These microscopic algae, present in almost every natural waterbody, are assumed to be incorporated by inhalation of the drowning medium during the drowning process and to (at least partially) pass the alveolo-capillary membrane, thus reaching distinct organs through distribution via the bloodstream . If the examination of organs distant to the lungs, like the liver, kidney, or bone marrow, exposes a certain amount of diatoms, this can be regarded as supportive evidence for drowning. The previous methodology of the diatom test included the digestion of biological tissues with strong acids, followed by purification and deacidification steps, and lastly the assessment of the acid-stable silica diatom frustules via light microscopy . Over the years, various studies repeatedly confirmed or criticized the supportive evidence of the diatom test, not least due to its vulnerability for contamination effects and false positive results in non-drowning cases . Especially the examination of diatoms in lung tissue leaves this subject in controversial discussion, as diatoms are presumed to infiltrate the lungs post-mortem during the submersion period . A recently suggested method, the microwave digestion–vacuum filtration–automated scanning electron microscopy technique (MD-VF-Auto SEM), appears to be a promising new development in the field of diatom examination to minimize erroneous results from contamination and diatom loss during centrifugation steps. Combining diatom-sensitive microwave digestion and membrane filtration with automated scanning electron microscopy, this method achieves a remarkable quality of diatom recovery and ensures qualitative and quantitative examination at high resolution . The additional establishment of a diagnostic marker (L/D ratio), which represents the proportion between the diatom concentration in 1g of lung tissue and 1ml of drowning medium, allows clearer distinction between drowning and post-mortal immersion, as an active aspiration of fluid in the case of drowning results in a relatively higher concentration of diatoms in the lung tissue than in the drowning medium (L/D ratio >1), whereas in the case of post-mortal immersion, the diatom concentration in lung tissue can at most reach equality with the concentration of the drowning medium (L/D ratio ≤ 1) . Despite these advantages, this highly elaborated technique impedes routine application by requiring expensive “high-tech” devices including automated SEM for reliable diatom identification and counting, which are frequently unavailable. In order to enable routine application on existing equipment, process steps as digestion, filtration, and image acquisition were thoroughly broken down, optimized, and ultimately validated in confirmed drowning cases, without detracting the method’s reliability and precision. 
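To make the decision rule behind the L/D ratio explicit, here is a minimal sketch; the function names and the example values are mine and purely illustrative, while the thresholds simply restate the rule described above (active aspiration drives the lung concentration above that of the medium, whereas post-mortem immersion can at most reach equality).

```python
def ld_ratio(diatoms_per_g_lung: float, diatoms_per_ml_medium: float) -> float:
    """L/D ratio: diatom concentration in 1 g of lung tissue relative to 1 ml of drowning medium."""
    if diatoms_per_ml_medium == 0:
        raise ValueError("Reference medium contains no diatoms; the L/D ratio is not applicable.")
    return diatoms_per_g_lung / diatoms_per_ml_medium

def interpret(ld: float) -> str:
    """Decision rule as described in the text."""
    if ld > 1:
        return "L/D > 1: supports drowning (ante-mortem aspiration of the medium)"
    return "L/D <= 1: compatible with post-mortem immersion; drowning not supported by the ratio"

print(interpret(ld_ratio(2400.0, 220.0)))  # hypothetical concentrations, for illustration only
```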
Sample collection All tissue samples analyzed in the present study were collected with thoroughly cleaned instruments during routine autopsies at the Department of Legal Medicine of the University of Salzburg. For each case, approximately 10 g of lung tissue (left superior lobe) was removed and preserved at −20°C until further investigations. Additionally, 10 g of liver and kidney tissue was collected and stored under the same conditions to test diatom presence in peripheral tissues. Water samples and putative drowning media (required for comparison, respectively L/D calculations) were collected in clean plastic bottles and sampled with caution, to avoid extraction close to the surface or close to benthic layers (minimum distance 20 cm, if possible). Control samples for protocol optimization Lung tissue samples of three confirmed drowning cases served as controls (A, B, C) to evaluate, modify, and optimize the digestion procedure with nitric acid and hydrogen peroxide. In addition, these samples were tested for regular diatom dispersion and the correlation between diatom quantity and investigated tissue mass by assessing the number of diatoms from lung tissue samples of different weight. This was particularly important to enable adjustment of the tissue mass (dilution) in the case of membrane clogging or diatom-overload on the membrane, or to enhance digestive capability. SEM image acquisition procedures were tested and validated at three dilutions (1:1, 1:2, 1:5) of diatom-rich water samples from a local pond. Study cases Five autopsy cases (three males, two females, all found during spring/summer) were selected to validate the adapted processing conditions and to perform L/D ratio calculations. Corresponding drowning media was collected by the police upon discovery of the bodies and stored in dark environment at 4°C until further analysis. Four of the cases presented distinct classical drowning signs, such as emphysema aquosum, foamy liquid in the airways, splenic anemia, and/or liquid in the sphenoid sinuses (Svechnikov’s sign), and thus were diagnosed as drowning cases. By contrast, the autopsy of case 4, which was in a state of advanced decomposition, solely exhibited liquid in the sphenoid sinuses and drowning was only presumed. For each case, 0.5–1.0 g lung tissue and 10 ml putative drowning medium were processed and digested under previously established conditions (see the “ ” section) and analyzed via SEM (see the “ ” section) to enable calculation of respective L/D ratios. Notably, the putative drowning medium of case 5 contained a significant amount of debris (plant particles), but was also processed as described. Study case data are presented in Table . Contamination tests To rule out false positive tests due to diatom-containing chemicals and contamination effects during sample processing, digestion reagents (nitric acid, hydrogen peroxide) and all cleaning and rinsing components (ultrapure water, tap water, ethanol) were filtrated onto acid-stable membranes and separately investigated for diatom content via SEM. In addition, a diatom-rich water sample was subjected to an evaporation test to rule out possible diatom loss and/or cross-contamination of samples during the digestion process. For this purpose, the digestion tube was fitted with a membrane underneath its cap. All reagents and other liquids, just as the evaporation membrane, were found entirely free of diatoms (see Supplements ). 
Acid digestion and filtration Overall preparation was conducted under sterile conditions and high safety precautions. All samples (i.e., study case tissues, control tissues, drowning media, and water controls) were transferred into 50-ml screw cap plastic tubes (Greiner) and as an essential safety measure, tube caps were provided with small perforation holes to enable gas emission. Samples were then treated with a digestive solution of nitric acid (HNO 3 65%, CarlRoth) and hydrogen peroxide (H 2 O 2 30%, CarlRoth) while being heated in a water bath of 100°C until the solutions turned clear (at least for 2h) and afterwards left at room temperature for cooling. Instead of conventional deacidification by repetitive centrifugation and replacement of the supernatant, samples were then directly filtrated through acid-stable polyvinylidenfluoride (PVDF) membrane filters (Ø = 1.0 cm, pore size 0.45 μm) with a custom-built syringe pump system comprising NEMA 17 stepper motors linked to A4983 Big Easy driver chips and a PurrData-software-operated Teensy 3.2 microcontroller (Fig. ). Membranes were subsequently deacidified with ultrapure water, desiccated with pure ethanol, and air dried at 40°C. As the digestive solution of lung tissue can contain relevant amounts of organic residues which potentially clog the membrane filters, provision was made to ascertain best conditions of organic matter digestion. Therefore, three different volumes of the nitric acid–hydrogen peroxide medium (5 ml, 10 ml, 15 ml) were each applied to 0.5 g, 1.0 g, and 1.5 g lung tissue control samples to test their digestive capability. In more detail, three samples of each weight were respectively digested with 4 ml HNO 3 + 1 ml H 2 O 2 , 8 ml HNO 3 + 2 ml H 2 O 2 , and 12 ml HNO 3 + 3 ml H 2 O 2 , and further processed as described above. Membranes were then qualitatively examined for the presence of diatom fragments or incompletely digested tissue remnants via SEM. Based on the related results (see the “ ” section), study case tissue samples of 0.5–1.0 g weight and 10 ml of reference liquids (drowning media/water controls) were equally treated with a 10 ml batch of the HNO 3 /H 2 O 2 digestion medium. SEM analysis and image acquisition After being attached to aluminum pin stubs with double-sided adhesive carbon tabs, membranes were sputter-coated with gold and analyzed in a Philips/FEI XL30 ESEM scanning electron microscope at 15 kV. Peripheral tissues (liver and kidney) were qualitatively examined for the presence of diatoms by classification into one of the following categories: − (0 diatoms), + (1–4 diatoms), ++ (5–9 diatoms), or +++ (10 or more diatoms). To quantitatively assess lung and water samples, the membranes were manually scanned at a magnification of 1000× to cover areas of appropriate size ensuring representative quantification while also allowing for easy identification of small diatoms. Due to the limitation of non-automated (manual) SEM imaging, full coverage of the membranes would hardly be feasible as requiring more than 1700 images. Therefore, two alternative strategies with reduced imaging demand—transectial acquisition and scatter acquisition—were applied on diatom-rich water samples in three dilutions (1:1, 1:2, 1:5) and tested for their time requirement and capability to assess diatom abundance of ≥ 95 % accuracy. 
Transectial image acquisition was performed in line of the filter’s diameter as quarter, half, full, and double transect, whereas scatter image acquisition was performed as eighth, quarter, half, and full scatter at uniformly distributed coordinates (Fig. ). Both strategies were evaluated concerning their efficiency (number of required images) to reach an extrapolation-accuracy of 95% to the total diatom count (i.e., double transects plus full scatter). Data processing and statistical analysis Images of transectial and scatter image acquisition were quantitatively analyzed for diatoms by manual counting, using the cell counter tool of the ImageJ 1.53p software to keep track of the process and document the results. Individual counts were compiled to a total number of diatoms per filter, allowing to calculate diatoms per g values for tissue samples and diatoms per ml values for drowning media, which were then used to determine L/D ratios. Statistical analyses, including correlation analysis between diatom number and tissue weight, diatom count projection, and L/D calculations, were performed using Microsoft Excel and SPSS 27.0 software, regarding p < 0.05 as statistically significant. 
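A minimal sketch of the bookkeeping described in the data-processing paragraph above: manual counts from a subset of SEM images are scaled up to the whole membrane, normalized per gram of tissue or per millilitre of medium, and combined into an L/D ratio. The scale-up below uses the roughly 1700-image full-coverage figure quoted earlier as a stand-in for the ratio of membrane area to image field area; an actual implementation would use measured areas, and the helper names and example counts are mine, not the authors'.

```python
# Minimal sketch, assuming ~1700 fields at 1000x cover the whole membrane (see text).
FULL_COVERAGE_IMAGES = 1700

def diatoms_on_filter(counted: int, images_analyzed: int,
                      full_coverage_images: int = FULL_COVERAGE_IMAGES) -> float:
    """Extrapolate a partial manual count to the whole membrane."""
    return counted * full_coverage_images / images_analyzed

def concentration(counted: int, images_analyzed: int, amount: float) -> float:
    """Diatoms per g (tissue mass in grams) or per ml (medium volume in millilitres)."""
    return diatoms_on_filter(counted, images_analyzed) / amount

# Hypothetical case: 30 diatoms in 50 scatter images of a filter from 1.0 g of lung,
# 80 diatoms in 50 images of a filter from 10 ml of drowning medium.
lung_per_g = concentration(30, 50, 1.0)
medium_per_ml = concentration(80, 50, 10.0)
print(f"L/D = {lung_per_g / medium_per_ml:.1f}")
```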
Digestive capability Digestive capability tests, performed with different volumes of the HNO 3 /H 2 O 2 digestion reactant on lung tissue samples of variable weight, showed the following results: The 5-ml batch (4 ml HNO 3 + 1 ml H 2 O 2 ) reached optimal digestion results with samples of 0.5 g (Fig. a) but resulted in partial (incomplete) digestion with samples of ≥1.0 g (Fig. b–c). By contrast, 10 ml of the reactant (8 ml HNO 3 + 2 ml H 2 O 2 ) was capable to completely dissolve samples of 0.5 g and 1.0 g (Fig. d–e) with slight diatom-dissolution at lower tissue weight (Fig. d), while tissue residues persisted with 1.5 g (Fig. f). Fifteen milliliters of the reagent (12 ml HNO 3 + 3 ml H 2 O 2 ) had a clearly higher digestive potential with 1.5 g samples (Fig. i) but promoted diatom disintegration in samples of lower weight (Fig. g–h). In addition, higher total volumes required longer filtration times, also potentially affecting diatom integrity. 
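The volume dependence just described can be summarized in a small helper. The cut-offs below are only an illustrative reading of the reported observations (5 ml for 0.5 g, 10 ml for up to about 1.0 g, larger volumes only for heavier samples) and not a validated dosing protocol; the 4:1 HNO3:H2O2 ratio mirrors the batches tested above.

```python
def digestion_volumes(tissue_mass_g: float) -> tuple[float, float]:
    """Return (ml HNO3 65%, ml H2O2 30%) for a lung sample, keeping the 4:1 ratio used above.

    Illustrative cut-offs derived from the reported observations; not a validated protocol.
    """
    if tissue_mass_g <= 0.5:
        total = 5.0    # 4 + 1 ml fully digested 0.5 g without attacking the frustules
    elif tissue_mass_g <= 1.0:
        total = 10.0   # 8 + 2 ml dissolved 0.5-1.0 g completely
    else:
        total = 15.0   # 12 + 3 ml needed for ~1.5 g, at the cost of some diatom disintegration
    return 0.8 * total, 0.2 * total

print(digestion_volumes(1.0))  # (8.0, 2.0)
```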
SEM imaging Diatom abundances of each imaging strategy (transectial acquisition and scatter acquisition), initially evaluated in three dilutions (1:1, 1:2, 1:5) of diatom-rich control water samples and eventually combined for coherent accuracy assessment, yielded the following results: diatom count extrapolations from quarter transects (10 images), half transects (20 images), and full transects (40 images) achieved accuracy levels of 79.8%, 87.1%, and 92.2%, thus remaining below the required confidence limit of 95%. By contrast, the double transect method reached a value above the limit (95.6%) but required 80 images. Results from scatter acquisition proved distinctly different from those of transect acquisition. While the eighth scatter (8 images), quarter scatter (16 images) and half scatter (32 images) strategies (86.1%, 91.0%, and 93.8%) remained below the confidence limit as their counterparts in transect acquisition, the full scatter method achieved a value of 97% based on the analysis of only 69 images. Calculations to reach the 95% accuracy limit using the obtained logarithmic regression formulae resulted in theoretical thresholds of 78 images for transectial acquisition and 50 images for scatter acquisition (Fig. ). Correlation between diatom quantity and tissue weight Correlation analyses results between diatom quantity and tissue mass (weight) of three control drowning cases (A, B, C) are summarized in Table . The Kruskal-Wallis comparison did not indicate significant differences between corresponding lung tissue samples of each case. Spearman's correlation analysis showed a positive correlation between diatom number and tissue weight at high significance levels for all samples. Diatoms per gram calculations showed almost identical values for lung tissue samples within each respective case at a maximal deviation of 2.5% (Fig. ). Case analysis applying L/D ratios All investigated drowning media and lung tissue samples contained diatoms in quantities sufficient for analytical application. Peripheral tissue analysis (qualitative SEM analysis) revealed diatom presence in all kidney samples, as well as in liver tissue samples from cases 1, 4, and 5 (Supplements ). All cases showed species conformance between tissues and drowning media. Except for case 5, all lung tissue samples displayed higher diatom numbers than the corresponding drowning media, consequently resulting in L/D ratios above 1, even exceeding a value of 2 (Fig. ), thus showing that the diatom concentration in lung tissue was at least twice as high as in the corresponding drowning medium. By contrast, case 5—although displaying a relatively high diatom number per gram of lung tissue—reached an L/D ratio of 0.1, as the concentration of diatoms per ml drowning medium reached a value almost ten times higher than in the lung (Table ). 
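The "95% accuracy" image thresholds quoted above come from logarithmic regression. The sketch below repeats that derivation on the pooled accuracy levels reported in this paragraph; because the published fit was based on the full per-dilution data, the numbers it prints land near, but not exactly on, the published thresholds of 78 and 50 images, and it is meant only to show the shape of the calculation.

```python
# Illustrative sketch: fit accuracy = a*ln(n) + b and solve for the image count at the target accuracy.
import numpy as np

def images_for_accuracy(n_images, accuracies, target=95.0):
    a, b = np.polyfit(np.log(n_images), accuracies, 1)  # logarithmic regression
    return float(np.exp((target - b) / a))

transect = images_for_accuracy([10, 20, 40, 80], [79.8, 87.1, 92.2, 95.6])
scatter = images_for_accuracy([8, 16, 32, 69], [86.1, 91.0, 93.8, 97.0])
print(f"transect: ~{transect:.0f} images, scatter: ~{scatter:.0f} images")
```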
The diatom test has long been a controversial technique in forensic drowning diagnosis, most prominently due to its sensitivity to contamination and risk of incorrect results even in qualitative analysis . In recent years, an increasing interest of quantitative diatom testing has promoted re-evaluation of the approach . Especially the comparison of diatom content in lung tissue and drowning medium (L/D value) has changed the significance of the method . 
Rather than utilizing timesaving and less expensive light microscopy, the preference of SEM application empowered diatom-based forensic drowning diagnosis providing much higher resolution and the options for automated scanning and diatom counting , as well as species identification by artificial intelligence . However, to date the method is only applied in a limited number of institutes, not least because of its technical requirements. Results of the present work enabled the establishment of a protocol for quantitative diatom analysis using/requiring comparably limited resources. Notes on methodology To warrant the accuracy of the modified method for SEM-based forensic diatom testing, some critical aspects have to be considered: (i) All reagents and supportive materials need to be validated for diatom absence. This is of particular relevance as diatomaceous earth is a commonly used filtration aid, traces of which may remain in chemicals . Compared to qualitative investigation, these traces may be deemed marginal in quantitative approaches , but are of special importance for bodies of water with low diatom concentrations. (ii) Diatom treatment (protocols) should have the capability to assure complete dissolution of all organic matter while leaving diatom frustule integrity intact. It appears optimal to digest tissue samples at around 1.0 g with 10 ml of a digestion medium composed of 8 ml HNO 3 and 2 ml H 2 O 2 , whereas samples of 0.5 g or less may also be sufficiently digested at smaller volume of the digestion reagent. Analogously, tissue samples over 1.0 g in weight should be treated with a digestive volume larger than 10 ml to reach satisfactory results, which clearly confirmed that the volume of digestion media applied to a certain amount of tissue can substantially influence the quality of the performance, regardless of whether quantification is carried out automated or manually. (iii) In this context, the present work specified the necessity to assess the replicability of the methods outcome. We could show that, regardless of the applied tissue mass, calculated diatom concentrations per gram are highly stable. This seems especially important when membranes clog during filtration and/or when the initial analysis results in very high (diatoms in clusters or layers) or very low diatom numbers on the filter. In such cases, it may be necessary to modify (enlarge or reduce) tissue sample volumes in order to ensure a reliable analysis of the diatom number. (iv) Another crucial aspect for a wider practical application is to optimize the method`s cost and effort without negatively influencing the diagnostic quality. In this respect, a major task was to determine the minimum number of required images to obtain representative data for the entire membrane, irrespective of whether dissolved tissue or diatom-containing water (drowning media) are examined. Experimental conditions revealed that regular diatom distribution on the membrane is not granted. An evaluation of 10 equidistantly scattered images over the filter resulted in a mean error of 12%. This error accumulated to 19% when the same number of images were taken equidistantly along a transect, clearly suggesting that scatter imaging should be preferred over transectial acquisition. Our data indicate that a total of 50 equidistant scatter images distributed over the entire filter should be sufficient to produce reliable results beyond the 95% probability limit. 
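Point (iv) above recommends roughly 50 uniformly distributed scatter images over the Ø 1.0 cm membrane. The paper does not specify how the coordinates were generated, so the sketch below uses one common way of spreading points approximately evenly over a disk (a sunflower, golden-angle layout); any other even layout would serve the same purpose, and the function is illustrative rather than part of the published workflow.

```python
import math

def scatter_coordinates(n_images: int = 50, diameter_mm: float = 10.0):
    """Approximately evenly spread stage coordinates (x, y in mm) over a circular membrane."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    radius = diameter_mm / 2.0
    points = []
    for k in range(n_images):
        r = radius * math.sqrt((k + 0.5) / n_images)  # equal-area radial spacing
        theta = k * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

for x, y in scatter_coordinates()[:3]:
    print(f"{x:+.2f} mm, {y:+.2f} mm")
```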
Significance in practical application Despite the small sample size, the present trial conducted on five autopsy cases indicates that the modified method of SEM-based diatom testing indeed has true application value in forensic drowning diagnosis. Previous studies reported drowning probabilities of 96% for L/D values >1 and even 100% for L/D values >2, while showing limited ability to distinguish drowning from post-mortal immersion at values ≤1. Investigation of cases 1–3, which all presented distinct classical drowning signs, achieved L/D values between 2.4 and 10.9, which can therefore be considered strong evidence for drowning. Case 4 showed insufficient evidence to be diagnosed as a drowning case based on classical drowning signs, most likely due to advanced decomposition. However, the determined L/D value of 6.7 confirmed drowning as the cause of death, as a 6.7-fold diatom concentration in the lung compared to the drowning medium is very difficult to explain by mechanisms other than active aspiration and pressure filtration of the drowning medium in the lung. This case underlines the diagnostic potential of the modified SEM-based diatom test in cases of advanced decomposition, which frequently lack other drowning signs. Case 5, although exhibiting several classical drowning signs, only reached an L/D value of 0.1, indicating post-mortal immersion rather than drowning. However, specific circumstances of this case leave some caveats. The collected drowning medium contained plenty of debris from aquatic plants, which is most likely the cause of the very high number of diatoms detected (>24,000 diatoms per ml), as diatoms are known to colonize aquatic plants epiphytically. It must be questioned whether the secured material represents the actual diatom concentration of the drowning medium at the time the body entered the water. Perhaps the actively aspirated water contained fewer diatoms than suggested by the present analysis; consequently, the debris would have had a major effect on the L/D value and the interpretation of the result. This illustrates the limitation that methods of SEM-based diatom testing are inherently dependent on the reliability of the secured drowning medium. Consequently, sampling requires special caution so as not to distort the representativeness of the water body's diatom content. Other factors potentially affecting this reliability include sampling from the wrong depth or location and delayed sampling, allowing for diatom concentration changes in the waterbody, e.g., after heavy weather conditions. In this relation, additional research providing reference data on species and abundances from spatial, temporal, and seasonal diatom mapping of local natural waters, as already implemented in some countries, could greatly improve the validation of drowning media. 
With some adaptations in sample processing and SEM imaging, we were able to apply a new setup of quantitative diatom assessment at our institute. Our data show that the modified method of SEM-based diatom testing has high potential to become a standard technique in forensic drowning investigation, particularly in cases of advanced decomposition, despite the necessity to critically consider the limitations of the application and outcome interpretation. ESM 1: Supplements (PDF 1287 kb)
Assessing retinal hemorrhages with non-invasive post-mortem fundus photographs in sudden unexpected death in infancy
7bdc0dc7-eea1-4735-b12f-717f8f4db919
10085933
Forensic Medicine[mh]
Each year in Europe, 35 infants per 100,000 live births die suddenly and unexpectedly before the age of one: sudden death in infancy (SUDI) is the first cause of death after the neonatal period in France . Sudden infant death syndrome (SIDS) is defined as the sudden death of an infant under one year of age, which remains unexplained after a thorough case investigation, including performance of a complete autopsy, examination of the death scene, and review of the clinical history . When pediatricians and forensic pathologists are confronted to SUDI, they must thoroughly research a cause of death before concluding SIDS. Abusive head trauma (AHT) is defined as an injury to the skull or intracranial contents of an infant under 5 years of age, due to inflicted blunt force impact and/or shaking . Infanticide by fatal AHT is a special cause of SUDI firstly because it is difficult to suspect and to confirm, and secondly because its recognition will lead to a judicial enquiry . As retinal hemorrhages (RH) are a crucial hallmark for AHT (sensitivity=75% and specificity=94%) , the thorough case investigation must include systematic eye examination. To date, there is no consensus on the best approach to detect RH. The American Academy of Pediatrics recommends post-mortem eye removal in case of SUDI under 5 years of age that have not clearly died of witnessed severe accidental head trauma or readily diagnosed systemic medical conditions . However, the French Haute Autorité de la Santé (HAS) recommends systematic post-mortem fundus examination . The relevance and the protocol of post-mortem eye removal as part of the autopsy have been well described . Conversely, reports of post-mortem fundus examination are very rare and no protocol has been yet validated between endoscopy or indirect ophthalmoscopy . Wide field fundus camera such as RetCam (Clarity Medical Systems USA) is the gold standard for the acquisition of retinal images in suspected cases of AHT in living children , but this method has never been described in deceased children. Assessing the capacity of post-mortem fundus photographs to detect RH is a major issue because it is non-invasive, it does not need eye removal, it allows screening of a wider range of children without problems of acceptability and availability, and the response is immediate. The aim of our study was to assess the capacity of post-mortem fundus photographs (PMFP) by Retcam to detect RH. We hypothesized that RetCam PMFP can detect RH and may become a valuable screening test complementary to pathological examination. This bicentric retrospective study was conducted in two French University Hospitals. Inclusion criteria were: SUDI under 2 years of age, PMFP realized by RetCam and available for reinterpretation. The definition of the cases of SUDI followed the international definition, namely the death of an apparently healthy child under the age of 1 year, with an extension to the children up to 2 years of age, as recommended by the French recommendations for the management of SUDI [ , , ]. The following clinical data were collected from medical records and from the French SUDI registry (Observatoire National des Morts Inattendues du Nourrisson registry; OMIN): age at death, sex, post-mortem interval between death and PMFP, final diagnosis after complete case investigation. 
The post-mortem interval (PMI) used in our analysis was the mean of the minimum PMI (interval between the finding of the deceased child and the PMFP) and the maximum PMI (interval between the last observation of the living child and the PMFP). Complete post-mortem investigations included: a complete external examination of the skin and biometrical measurements, biological samples (virological and bacteriological analyses of blood, cerebrospinal fluid, urine, feces; complete blood count; biochemical markers on blood and cerebrospinal fluid), genetic samples (with the agreement of both parents), whole body post-mortem imaging (X-ray of the skeleton and post-mortem computed tomography and/or post-mortem MRI), fundus, toxicological analysis, chromatography of organic acids in urine, and forensic or scientific autopsy with pathological examination of the organs, requested by the prosecutor or performed after acceptance by the parents. A detailed interview of the parents was systematically carried out by the medical staff and/or the police officer, to collect all necessary data on the medical background of the family and the child and the context of the death. The prosecutor was notified in order to check the criminal records of the parents and/or family. The cause of death was determined by a multidisciplinary staff, which summarized the opinions of the forensic pathologist, the pediatrician of the referral center, the nurse of the referral center, the ophthalmologist, and the radiologist. RetCam (Clarity Medical Systems, USA) is a digital wide field camera developed to assess pediatric eye diseases. It allows a dynamic examination on a large screen and the acquisition of videos and photographs. The examination lasted 5–10 min. The eyelids were kept open with an eyelid speculum. We did not use dilating eye drops as the pupils cannot react after death and are already slightly dilated. An ophthalmic gel was instilled on the eye and added if needed during the examination to maintain the optical contact between the cornea and the camera. If needed, the superficial corneal surface was gently rubbed with a microsponge to remove edematous and cloudy corneal epithelium. For each eye, numerous photographs were acquired to evaluate the posterior pole (macula and optic nerve) and the peripheral retina. The examination was performed either in a hospital room, an autopsy room, or in the emergency department, as soon as the child arrived at the hospital. All PMFP in center 2 and many PMFP in center 1 were performed by forensic pathologists after a short training period. For each eye, the PMFP series were collected in an anonymous PDF file. The PMFP series were randomly and independently reviewed by three senior ophthalmologists blinded to all clinical data. For each eye, the following data were assessed: image quality to assert presence of RH, presence of a macular retinal fold, horizontal and vertical dimension of the macular fold (in optic disc diameters), presence of a peripheral retinal fold, and papillary vessels enlargement. To assess the image quality, the examiners answered "yes" or "no" to the question "Was the retina sufficiently visible on the images to determine the presence or absence of RH?". To help them, they were given a large panel of fundus photographs showing retinal hemorrhages, extracted from reference articles in the literature [ , , ]. If RH were visible, they were classified according to the "traumatic hemorrhagic retinopathy (THR) grading system". 
The term “papillary vessels enlargement” referred to the presence of blood in the prepapillary vessels, which appear enlarged compared to the other retinal vessels. A second review was completed by the same ophthalmologists at least 1 month later, with different anonymization numbers, in a different random order. Patients’ characteristics were presented as the median and interquartile range for continuous variables and as counts and percentages for categorical variables. For univariable comparisons, we used the Kruskal–Wallis one-way analysis of variance for the former and Fisher’s exact test for the latter. Missing data were systematically reported. For descriptive purposes, we chose to retain the majority decision to summarize the evaluators’ assessments. Each PMFP was evaluated 6 times (3 evaluators, 2 series). The quality of the fundus pictures to assess the presence of RH was regarded as sufficient if it was positively evaluated more than 3 times by the assessors, insufficient if it was positively evaluated less than 3 times, or discordant if it was positively evaluated exactly 3 times. There was complete agreement if it was positively evaluated 0 or 6 times. Concordance was measured using Cohen’s Kappa for intra-rater reliability and for pairwise (2 by 2) inter-rater reliability, with 95% confidence interval estimates. Analysis was performed using R version 4.0.4 . The study followed the tenets of the Declaration of Helsinki. Following the French rules on medical research, no institutional review board approval was required because of our study’s non-interventional and retrospective design and the anonymization of the cases. The parents or people holding parental authority gave informed and written consent before inclusion in the OMIN registry. Sixty eyes from 30 cases of SUDI between March 2017 and July 2021 were included, 17 girls and 13 boys (Table ). PMFP were performed either by resident or senior ophthalmologists, or by forensic pathologists. Median age was 3.5 months (interquartile range (IQR) [1.6; 6.0]). Regarding the causes of death, the main causes retained after all post-mortem investigations were asphyxiation ( n = 13) and SIDS ( n = 5). The cause of death was considered undetermined in eight cases due to the lack of some post-mortem results at the time of the study. No child died from AHT in our series. Of 60 eyes, image quality was sufficient to assert the presence or absence of RH suggestive of AHT in 50 cases (83%) (Fig. ) and insufficient in 6 cases (10%). The assessment of image quality was completely identical between the six examinations for 45 eyes (75%) but was conflicting in 15 eyes (25%) (Table ). Of these 15 eyes, 11 were classified as “sufficient” or “insufficient” by the majority and 4 were classified as “discordant” in the absence of a majority. Intra- and inter-raters’ Cohen’s Kappa showed moderate to excellent concordance when assessing image quality for RH observation ( κ = 0.41 [0.12–0.70] to κ = 0.84 [0.66–1.00], Table – supplemental data). The classification was identical between the two eyes for 27 children (90%). The three groups were similar with respect to age, center, cause of death, and year of death (Table ). Median PMI was significantly lower in “sufficient quality” cases (10.2 h [6.3, 13.8]) than in “insufficient quality” cases (19.0 h [12.0, 27.6], p = 0.04985). Additionally, the sufficient-quality rate was significantly higher when PMI was below 18 h (91%, 42/46) than when PMI was above 18 h (57%, 8/14, p = 0.0096) (Table ). 
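The majority rule over the six quality ratings and the comparison of sufficient-quality rates by PMI can be illustrated with a few lines of base R. The sketch below is only an illustration: it reuses the counts reported above (42/46 versus 8/14) and a made-up rating vector, and the function name is ours rather than part of the study's analysis code.

```r
# Majority rule applied to each eye: 6 evaluations (3 raters x 2 reading series).
# "sufficient" if more than 3 positive ratings, "insufficient" if fewer than 3,
# "discordant" if exactly 3.
classify_quality <- function(ratings) {   # ratings: vector of six 0/1 values
  n_pos <- sum(ratings)
  if (n_pos > 3) "sufficient" else if (n_pos < 3) "insufficient" else "discordant"
}
classify_quality(c(1, 1, 1, 1, 0, 1))     # "sufficient"
classify_quality(c(1, 0, 1, 0, 1, 0))     # "discordant"

# Fisher's exact test on the reported 2x2 counts: 42/46 sufficient-quality eyes
# when PMI <= 18 h versus 8/14 when PMI > 18 h (the paper reports p = 0.0096).
quality_by_pmi <- matrix(c(42, 46 - 42,
                           8, 14 - 8),
                         nrow = 2, byrow = TRUE,
                         dimnames = list(PMI = c("<=18 h", ">18 h"),
                                         quality = c("sufficient", "not sufficient")))
fisher.test(quality_by_pmi)
```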
This difference was similar in all centers. RH were found in six eyes (10%) of four children (13%) (Figs. and ). The assessment of the presence or absence of RH was completely identical between the six examinations for 58 eyes (97%) but was conflicting in two eyes (3%). For these two eyes, the six examinations of the contralateral eye were completely identical. Moreover, the majority was able to categorize the first eye as “not having RH” and the second as “having RH”. Intra- and inter-raters’ Cohen’s Kappa showed excellent to perfect concordance when assessing the presence of RH ( κ = 0.91 [0.74–1.00] to κ = 1.00 [1.00–1.00]). Regarding the children with RH, the examination of the death scene and the post-mortem investigations on the children found no evidence of child abuse. In particular, none of the children presented intracranial hemorrhages or traumatic bone or soft tissue injuries on post-mortem imaging and/or autopsy. Toxicological tests were negative, except for drugs related to resuscitation. None of them had a medical history that could interfere with the fundus findings; in particular, none presented coagulation disorders. They all received prolonged specialized cardiopulmonary resuscitation, with orotracheal intubation, central venous catheter placement, and chest compressions. Three of the children were admitted to the intensive care unit for between 4 and 96 h after a transient recovery of cardiac activity. Furthermore, two children were 1 week of age, and therefore RH could be explained by birth. The two other children were 3 and 4 months old and had few hemorrhages, compatible with but not specific for AHT, classified as 1Ai or 1Bi according to the “traumatic hemorrhagic retinopathy (THR) grading system”. The retina was sufficiently visible on the images to affirm that hemorrhages were mostly confined to the posterior pole, only intraretinal, without retinoschisis and few in number (less than 15). Table presents an overview of these different results. On PMFP, two post-mortem artifacts were found: macular and peripheral retinal folds and papillary vessels enlargement (Fig. , Table ). Of 52 eyes with image quality sufficient to assess the macular fold, median PMI was significantly higher in cases with a macular fold (11.3 h [8.6–17.8], min = 3.8 h, max = 27.3 h, n = 41) than without (4.5 h [4.0–7.5], min = 3.9 h, max = 8.8 h, n = 11, p = 0.0003). Macular fold was significantly correlated with PMI and was always present beyond 9 h after death (Table ). In this pilot study, image quality of RetCam PMFP in cases of SUDI was sufficient to assert the presence or absence of RH suggestive of AHT in 83% of all cases, in 91% when the fundus examination was performed within 18 h after death and in 57% ( p = 0.0096) when it was performed more than 18 h after death. To our knowledge, this is the first study to document the ability of RetCam to detect RH after death. This result can have a major impact on daily practice because PMFP represents a relevant systematic screening test complementary to pathological examination, which remains the gold standard. Eye examination is recommended in cases of SUDI to detect RH suggestive of AHT. The American Academy of Pediatrics recommends pathological examination of the eyes and orbital tissues in children under 5 years of age if the child has not “clearly died of witnessed severe accidental head trauma or readily diagnosed systemic medical conditions”. But this approach has limitations, the main one being the necessity of eye and orbital tissue removal. 
If no judicial inquiry is open, parental consent is required, and the acceptability of this invasive sampling may be low for both clinicians and families, even if everything is done to make the change in facial appearance minimal. The technique of eye and orbital tissue removal without disfigurement has been well described , but it requires training and regular practice by the forensic pathologist to be performed correctly: this may be an important limitation, compounded by the fact that SUDI is a rare condition requiring rapid management in order not to aggravate the parental trauma. Moreover, eye enucleation, fixation, and dissection result in artifacts that may alter interpretation: tissue disarrangement, dislocation, retinal detachment, separation of retinal layers, and mechanical tissue damage [ , , ]. Another limitation is the need for a trained ocular pathologist to examine the tissues, so the sample may need to be transported and the response time may be long. Considering these limitations, pediatricians and forensic pathologists can be tempted to perform eye examination only in some cases, thus depriving SUDI cases of systematic screening for RH. Post-mortem endoscopy has been described in three publications [ , , ]. It provides images of good quality because it circumvents the opacification of the cornea and lens. It does not require eye removal, but it is still invasive: it can cause retinal damage leading to artifacts and can alter the presentation of the body at the funeral. Of the 150 cases of post-mortem endoscopy described, few were children and only one was a case of SUDI . No systematic screening in cases of SUDI has been described or even proposed. And for good reason: this invasive method has been used by few operators and is limited by the need for an endoscope, which is barely used in daily practice in forensic pathology and ophthalmology. Indirect ophthalmoscopy associated with smartphone image acquisition has been previously described, but the “do it yourself” combination of smartphone and lens is not ergonomic and requires training and practice to be performed correctly . Moreover, no case series has been published, and data regarding the image quality with indirect ophthalmoscopy are lacking: in our personal experience, post-mortem indirect ophthalmoscopy is difficult and frequently limited by opacification of the cornea and lens, whereas visibility is better with a contact camera such as the RetCam. The use of a fundus camera requires a very short learning curve without a specific ophthalmologic background: fundus photographs with RetCam can efficiently be performed by nurses or other non-ophthalmologists . The retinal area evaluated with RetCam in children is significantly larger than with indirect ophthalmoscopy . The quality of the images is a crucial issue; however, it may be difficult to affirm whether the quality is sufficient to assert the absence of RH suggestive of AHT. That notwithstanding, the risk of missing RH suggestive of AHT is low, as they are typically bilateral, extend over the whole fundus, are too numerous to count, are both intra- and preretinal, and are sometimes associated with retinoschisis . In our series, even when image quality seemed low, non-specific and relatively few RH were well visible without any doubt (Fig. ); incidentally, intra- and inter-raters’ reliability to assert RH, assessed by Cohen’s Kappa, was almost perfect in our series (0.91 to 1.00), with complete agreement between the six interpretations of the PMFP of each eye in 97% of eyes. 
To aid interpretation of PMFP, it is recommended to compare them with available RetCam photographs of living children suffering from AHT and with our normal RetCam PMFP, which are, to our knowledge, available for the first time in the literature. When PMFP show RH or when any doubt persists about the quality of the images, it is recommended to remove the eye to perform pathological examination. In our series, the quality was not sufficient in only 17% of cases, mainly when PMI was above 18 h. The main limitations of our study were the lack of AHT cases and the lack of systematic pathological examination, which remains the gold standard to detect post-mortem RH. The number of cases may seem small but is relatively high considering the low incidence of SUDI. The lack of AHT cases is compensated by the accurate visualization of non-specific RH in three eyes (two children) and RH explained by birth in three other eyes (two children). The correct documentation of these few RH strengthens the hypothesis of the capacity of RetCam to detect RH suggestive of AHT. However, a longitudinal cohort study of PMFP in French cases of SUDI, including more cases and AHT cases, with the support of the French SUDI registry OMIN, is necessary. The comparison with pathological examination could be addressed by systematic eye removal for pathological examination when a doubt persists about the quality of PMFP. To prevent any misuse of these data in court, we wish to stipulate that the purpose of this research was not to discuss the specificity of RH for the diagnosis of AHT. We recommend the reading of well-established literature for the interpretation of RH . In our current practice, the presence of RH was always considered a sign of possible AHT: this diagnosis was finally excluded after all forensic investigations, i.e., medical history, post-mortem imaging, autopsy, and pathological examination. Overall, this retrospective pilot study has shown that RetCam PMFP offers many advantages that make it a relevant screening test to be carried out as soon as the deceased child arrives at the hospital. PMFP should be done urgently because image quality decreases rapidly after death. This non-invasive examination can be performed by ophthalmologists, forensic pathologists, pediatricians, or nurses, thanks to its short learning curve and its broad availability in hospitals with a neonatal service. It does not require eye removal and may become a relevant systematic screening test complementary to pathological examination, which remains the gold standard. Further studies including more cases, AHT cases, and pathological examinations are needed to define the best decision algorithm to detect RH in cases of SUDI. ESM 1 Table 5 Intra- and inter-raters’ concordance to assess the quality of the image and the presence of retinal hemorrhages. Concordance levels according to Cohen’s Kappa: no agreement (Kappa < 0), slight agreement (0–0.20), fair agreement (0.21–0.40), moderate (0.41–0.60), substantial (0.61–0.80) or almost perfect (0.81–1). (DOCX 15 kb)
DNA methylation-based age estimation for adults and minors: considering sex-specific differences and non-linear correlations
da91dd18-d727-4a8b-9487-5cdbbbad32c6
10085938
Forensic Medicine[mh]
If the source of a biological trace cannot be identified by conventional DNA comparison, forensic DNA phenotyping (FDP) of biological traces might provide further investigative leads. Information on phenotypical aspects of the donor of a trace, such as skin, eye and hair color, height, or even male baldness patterns [ – ], might help narrowing down the group of potential trace donors. While such characteristics are mainly determined by single nucleotide polymorphisms (SNPs), a trace donor’s age can be estimated based on epigenetic modifications such as age correlated DNA methylation . Beside analyzing DNA traces, further potential fields of applying molecular age estimation comprise the identification of unknown bodies and the objective confirmation of age in potentially underaged individuals: in many countries, unaccompanied underaged refugees are entitled to special protection, and objective age estimation can support such claims. Recent studies revealed high estimation accuracy of age estimation models for blood with a MAD ranging from 3.16 to 10.33 years [ – ]. Best correlations and the lowest estimation errors were found for blood, buccal epithelium, and saliva among other tissues . Currently, most forensic studies on age correlated methylation patterns and model validation are based on blood, while for saliva and buccal swabs, fewer models have been described. Estimation accuracies for saliva and buccal swabs are comparable to those for blood with MAD ranging from 3.13 to 5.8 years and 3.22 to 5.33 years, respectively [ , – ]. Several recent studies focused on methylation patterns of young individuals . While there is a significant overlap between age-associated methylation loci between adults and children, DNA methylation in children changes with up to fourfold higher rates compared to adults . Furthermore, correlation between methylation and age is not always linear but might be logarithmic . Several studies made similar conclusions describing that DNA methylation alters at a more rapid pace between childbirth and adolescence compared to adulthood . The aim of this study was to develop an age estimation model and analyze whether or not considering varying methylation change rates in young versus older individuals improves the prediction model. Human oral mucosa samples were analyzed by minisequencing multiplex PCR. The eight markers used in the study (PDE4C, EDARADD, SST, KLF14, ELOVL2, FHL2, C1orf132, and TRIM59) have been previously reported separately to show a correlation with age [ , , , , , , ]. Especially ELOVL2, KLF14, and TRIM59 have been described as highly accurate markers for age estimation . Sampling, DNA extraction, and quantification Oral mucosa samples from 230 donors (102 male and 128 female) aged 1 to 88 years (mean 38 years) were collected using sterile swabs. The Ethics Committee of the Hamburg Medical Association (Ethikkommission bei der Bundesärztekammer) approved the study protocol (PV6098) and all participants or their legal representatives provided written informed consent. DNA was extracted using the Casework Extraction Kit and Maxwell 16 (Promega) following manufacturer’s recommendations. DNA was quantified using the PowerQuant System (Promega) following manufacturer’s recommendations. Purified DNA samples were stored at 6 °C until further use. Bisulfite conversion DNA samples were bisulfite converted and purified following the instructions of the EpiTect Fast DNA Bisulfite Kit (Qiagen) for high concentration samples. 
Depending on the determined concentration of each sample, up to 400 ng DNA was used for the treatment. Carrier RNA was not added to Buffer BL. Unmethylated cytosines were converted to uracils by bisulfite treatment, whereas methylated cytosines remained unconverted. To prove a successful bisulfite conversion, a second PowerQuant reaction was performed. The PCR primers should not bind to the converted DNA, meaning that a negative result would prove a complete conversion . Bisulfite converted samples were stored at 6 °C. PCR and minisequencing multiplex The bisulfite converted DNA was amplified by PCR using the PyroMark PCR Kit (Qiagen). Each sample was set in three reactions with primers for eight different markers (for primer sequences see Supplementary Table ); first reaction contained primers for PDE4C, EDARADD, SST, and KLF14, second reaction contained primers for ELOVL2 and C1orf132, and third reaction contained primers for FHL2 and TRIM59. After PCR, 1.25 μl rAPid Alkaline Phosphatase (1 U/μl, Roche) and 0.025 μl Exonuclease I (20 U/μl, Thermo Fisher Scientific) were added to each sample for enzymatic digestion. The samples were incubated for 1 h 35 min at 37 °C followed by denaturation for 15 min at 78 °C. For differentiation between cytosines (methylated) and thymines (originally unmethylated cytosines), a minisequencing reaction was conducted using SNaPshot Multiplex Kit (Thermo Fisher Scientific). Following the sequencing reaction, another 1 μl rAPid Alkaline Phosphatase (1 U/μl, Roche) was added to each sample for enzymatic digestion. The samples were incubated for 1 h 15 min at 37 °C and 15 min at 78 °C and afterwards stored at 6 °C. Capillary electrophoresis and analysis Samples were analyzed by capillary electrophoresis on a 3130 Genetic Analyzer (Applied Biosystems). Size standard 120 LIZ (Applied Biosystems); diluted 1:100 in HiDi formamide (Applied Biosystems) was used and results were evaluated using Gene Mapper ID (v3.2). The proportion of methylated cytosines of the samples was determined by calculating the relative peak heights for adenine and guanine or thymine and cytosine, respectively. Statistics Correlations between chronological age and methylation status of each CpG site was assessed calculating Pearson correlation coefficient ( r ) and corresponding p values. Data was split into a training and validation set, comprising 161 and 69 samples, respectively. Model accuracy was tested for both, training and validation set using the coefficient of determination R 2 , adjusted R 2 value. Mean average deviation (MAD) and the root-mean-square error (RMSE) were computed on the training set (via cross-validation) as well as on the validation set. Statistical analyses were performed using R (version R-4.1.2) including the packages ggplot2 and gridExtra for the creation of figures, Microsoft Office Excel 2016, and IBM SPSS Statistics 25 (IBM Corporation, Somers, NY, USA). To make it easier to follow, a detailed description of model development and validation regarding non-linear dependences and influence of the sex can be found in the “ ” section. 
Examples of the multiplex results are shown in Supplementary Figure . 
The correlation between chronological age and methylation status of each CpG site was assessed using R 2 (Fig. ), Pearson correlation coefficient ( r ), and corresponding p values (Supplementary Table ). There were statistically noticeable correlations between chronological age and the methylation status at seven of the eight CpG sites (PDE4C, EDARADD, SST, KLF14, ELOVL2, FHL2, and TRIM59). The strongest correlation was detected for the CpG site in TRIM59 ( r = 0.86). In this study, SST, ELOVL2, and TRIM59 revealed the strongest correlations with age, matching the results of previous studies [ , , ]. In contrast, PDE4C (cg17861230 +36 bp) showed weaker correlations with age compared to a previous study . EDARADD and C1orf132 were the only markers in this study showing negative correlations with age. In previous studies , SST cg00481951 was a promising marker for age estimation and was incorporated into the model of Hong et al. . Although SST showed moderate to good correlation and R 2 values in our study, it had to be removed from further analysis due to missing values from two thirds of all samples. Model construction and validation Due to missing values in the data set of SST, this marker was excluded. The rest of the markers showed moderate to strong correlations with chronological age; therefore, the seven CpG sites of the markers PDE4C, EDARADD, KLF14, ELOVL2, FHL2, C1orf132, and TRIM59 of 161 individuals (training set) were included in the regression analysis. It is well known that the relationship between methylation and chronological age is not necessarily linear . Our methylation data shown in Fig. also raises this suspicion. In particular, the epigenetic age advances faster during adolescence (age ≤ 20) and slower for elderly people (age ≥ 80) compared to chronological age. In between, epigenetic and chronological age are assumed to have a linear relationship (see Figure c in ). As the amount of elderly people with an age over 80 is rather small in our data set (3 subjects in the training data set and 1 subject in the validation data set), we focus on the non-linear relationship for adolescents. The following transformation has been suggested to model the described behavior by connecting chronological age $y_c$ and epigenetic age $y_e$ and improve the performance of subsequent regression analyses :

$$f(y_c;\, y_{c,adult}) := \begin{cases} \log(y_c + 1) - \log(y_{c,adult} + 1) & \text{if } y_c \le y_{c,adult} \\ \dfrac{y_c - y_{c,adult}}{y_{c,adult} + 1} & \text{else} \end{cases}$$

This transformation is also displayed for $y_{c,adult} = 20$ on the right hand side of Fig. . Larger values of $y_{c,adult}$ lead to even steeper curves at the origin. For a fixed value of $y_{c,adult}$, one may then conduct a regression analysis to estimate regression parameters $\beta$ for the linear model:

$$f(y_c;\, y_{c,adult}) =: y_e = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + \varepsilon$$

where $x_1, \ldots, x_k$ denote methylation values from k different CpG sites. 
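A direct R rendering of this transformation makes its behaviour easy to inspect. The function below uses our own naming and is written as a sketch of the formula above rather than code taken from the study.

```r
# Piecewise age transformation: logarithmic below the cut-off (default 20 years),
# linear above it, continuous at the cut-off where it equals 0.
transform_age <- function(y_c, y_adult = 20) {
  ifelse(y_c <= y_adult,
         log(y_c + 1) - log(y_adult + 1),
         (y_c - y_adult) / (y_adult + 1))
}

# Childhood ages are spread out, adult ages are compressed onto a linear scale:
round(transform_age(c(0, 1, 5, 10, 20, 40, 60, 88)), 3)
```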
The estimated chronological age can be identified afterwards by application of the inverse of f for a fixed value of $y_{c,adult}$, where $\hat{\beta}$ denotes the estimated regression parameters:

$$\hat{y}_c := f^{-1}(\hat{y}_e;\, y_{c,adult}) = f^{-1}(\hat{\beta}_0 + \hat{\beta}_1 x_1 + \cdots + \hat{\beta}_k x_k;\, y_{c,adult})$$

Upon first-time application of this transformation , the value $y_{c,adult}$ was set to 20. Since a study on further choices of this value has not been described in the original work by Horvath, we investigated whether the choice of this cut-off value can be optimized. That is, we regard the cut-off $y_{c,adult}$ from Horvath’s transformation as a hyperparameter whose value shall be determined via a repeated 10-fold cross-validation on the training data. We repeated this cross-validation 100 times to exclude a dependence of the results from the choice of the folds. Furthermore, we investigated potential sex-specific differences in epigenetic development. As sex-specific influences on age estimation models were discussed in previous studies [ , , ], we also examined these effects in our data. Therefore, we repeated the procedure described above for the subsets of data consisting only of women resp. men to find sex-specific differences. That resulted in different cut-off values $y_{c,adult}$ for women and men. The results of the cross-validation procedure on the training data set can be found in Fig. . While the cross-validated RMSE indicates that the choice of $y_{c,adult} = 20$ is a good choice for a unisex model and a separate model for men, the RMSE for a separate model for women can be improved by choosing a larger cut-off value. This suggests that there are differences in the epigenetic aging pattern between men and women which give rise to establishing sex-specific models to improve prediction, especially for adolescents and young adults. The exact values for the minimal RMSE and the corresponding cut-off value of the cross-validation can be found in Table . Please note that a cut-off value of 0 corresponds to the standard linear model. Finally, we fitted the different models on the training data resp. its sex-specific subsets and computed the RMSE on the validation set. While the observations from the cross-validation could be replicated for women, this was not possible for the unisex model (where the optimized model performed worse than the default choice) and men (where both transformations performed worse than the standard linear model). However, this may be due to the small sample sizes, especially for men with only 29 individuals in the validation set. As shown in Fig. and Table , our results suggest a faster epigenetic aging in men compared to women. These findings are concordant with the results of Hannum et al. . Table shows higher RMSE values for men on the validation set, which supports findings of previous studies [ , , ]. As we are dealing with quite small sample sizes, the sex-specific models have a clear disadvantage in that they can only be fitted with half of the observations. This disadvantage should diminish with an increasing overall sample size . In this light, based on the results obtained here, it appears reasonable to consider sex-specific models and respective transformations for estimating chronological age in future studies with larger sample sizes. 
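The hyperparameter search described here can be sketched in a few lines of R. The code below is illustrative only: it simulates a small data set with three placeholder CpG columns, uses far fewer repeats than the 100 used in the study, and all function and column names are ours.

```r
set.seed(1)

# Transformation and its inverse, as defined above (cut-off passed explicitly).
transform_age <- function(y_c, y_adult) {
  ifelse(y_c <= y_adult, log(y_c + 1) - log(y_adult + 1), (y_c - y_adult) / (y_adult + 1))
}
inverse_transform_age <- function(y_e, y_adult) {
  ifelse(y_e <= 0, exp(y_e + log(y_adult + 1)) - 1, y_e * (y_adult + 1) + y_adult)
}

# Simulated stand-in for the training set (161 donors, placeholder CpG values).
train <- data.frame(age = runif(161, 1, 88),
                    cpg1 = runif(161), cpg2 = runif(161), cpg3 = runif(161))

# Repeated k-fold cross-validation returning the mean RMSE on the age scale.
cv_rmse <- function(data, y_adult, k = 10, repeats = 5) {
  mean(replicate(repeats, {
    folds <- sample(rep(1:k, length.out = nrow(data)))
    err <- unlist(lapply(1:k, function(i) {
      fit  <- lm(transform_age(age, y_adult) ~ cpg1 + cpg2 + cpg3,
                 data = data[folds != i, ])
      pred <- inverse_transform_age(predict(fit, newdata = data[folds == i, ]), y_adult)
      pred - data$age[folds == i]
    }))
    sqrt(mean(err^2))
  }))
}

# Grid over candidate cut-offs; 0 corresponds to the standard linear model.
cutoffs <- c(0, 10, 15, 20, 25, 30, 40)
rmse_by_cutoff <- sapply(cutoffs, function(a) cv_rmse(train, a))
cutoffs[which.min(rmse_by_cutoff)]   # cut-off with the lowest cross-validated RMSE
```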
Age estimation of the training set revealed a strong correlation with age ( r = 0.942) and a MAD and RMSE of 4.680 and 6.436 years, respectively. Within the training set, the seven CpG site model could explain 88.8% of the age variance ( R 2 = 0.888, adj. R 2 = 0.883). Application of the model In the present situation, it appears most appropriate to apply the unisex model with the default cut-off value of $y_{c,adult} = 20$ when the chronological age of new subjects shall be estimated. We give a brief outline of how the age estimation for an individual of unknown age can be performed by applying this model with methylation values $x_{PDE4C}$, $x_{EDARADD}$, $x_{KLF14}$, $x_{ELOVL2}$, $x_{FHL2}$, $x_{C1orf132}$, and $x_{TRIM59}$ (each between 0 and 1). Firstly, the linear prediction of the epigenetic age $\hat{y}_e$ can be computed via:

$$\hat{y}_e = -1.5880 - 0.0400\, x_{PDE4C} - 1.9120\, x_{EDARADD} + 5.0157\, x_{KLF14} + 0.5961\, x_{ELOVL2} + 1.7463\, x_{FHL2} - 0.0108\, x_{C1orf132} + 3.5634\, x_{TRIM59}$$

If this value is positive, it can be transformed back to the chronological scale with the linear transformation:

$$\hat{y}_c = f^{-1}(\hat{y}_e;\, 20) = \hat{y}_e \cdot (20 + 1) + 20$$

Otherwise (if the value is negative), one obtains the estimated chronological age via:

$$\hat{y}_c = f^{-1}(\hat{y}_e;\, 20) = \exp(\hat{y}_e + \log(20 + 1)) - 1$$

Prediction intervals for the chronological age can be computed from the same backtransformation procedure after computing prediction intervals on the epigenetic scale with the model’s residual standard error of 0.2905. One should note that the shape of the backtransformation function leads to shorter prediction intervals for subjects that are estimated to be young. Model performance on the validation set The resulting model comprised seven CpG sites of the markers PDE4C, EDARADD, KLF14, ELOVL2, FHL2, C1orf132, and TRIM59, explaining 87.8% of age variance in the validation set ( R 2 = 0.878, adj. R 2 = 0.864). The age predictions of the resulting model have a strong correlation between chronological and predicted age ( r = 0.937) with a MAD of 4.695 years and a RMSE of 6.602 years (Fig. ). As seen in Fig. , age predictions of training and validation sets showed a similarly strong correlation between predicted and chronological age. The high comparability of the training and validation set is also shown in Fig. , which compares the estimation errors of both data sets. Further visualization of the difference between chronological and estimated age is shown in the Bland-Altman plot in Fig. . A mean difference of −1.718 (SD 6.417) years indicates a slight underestimation of age. The 95% limit of agreement ranges from 10.86 to −14.295 years. The plot shows a tendency to overestimate younger individuals, especially from 0 to 20 years, and to underestimate older individuals (50+ years). Studies from Naue et al. and Schwender et al. made similar observations. The largest positive deviation from chronological to estimated age within the validation set (20.904 years) was found for a 30-year-old individual with an estimated age of 50.904 years. The largest negative deviation (−20.685 years) was found for an 86-year-old individual with an estimated age of 65.315 years. 
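To make the arithmetic concrete, the published coefficients and the back-transformation can be wrapped into a small R function. The methylation values in the example call are hypothetical and serve only to illustrate the calculation; they are not measurements from the study.

```r
# Unisex model with cut-off 20: linear predictor on the transformed scale,
# then back-transformation to years depending on the sign of the prediction.
estimate_age <- function(x) {
  y_e <- -1.5880 -
          0.0400 * x[["PDE4C"]]  - 1.9120 * x[["EDARADD"]] +
          5.0157 * x[["KLF14"]]  + 0.5961 * x[["ELOVL2"]]  +
          1.7463 * x[["FHL2"]]   - 0.0108 * x[["C1orf132"]] +
          3.5634 * x[["TRIM59"]]
  if (y_e > 0) y_e * (20 + 1) + 20 else exp(y_e + log(20 + 1)) - 1
}

# Hypothetical methylation proportions (each between 0 and 1):
x_example <- c(PDE4C = 0.45, EDARADD = 0.55, KLF14 = 0.10, ELOVL2 = 0.60,
               FHL2 = 0.30, C1orf132 = 0.40, TRIM59 = 0.35)
estimate_age(x_example)   # estimated chronological age in years
```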
To further assess model performance, we followed the recommendations of Schwender et al. . The validation set was subdivided into age categories and the absolute deviation between estimated and chronological age was split into four categories (up to ±3 years deviation, up to ±4 years, up to ±5 years, and up to ±6 years deviation) as shown in Table . In general, prediction accuracy in younger individuals was higher compared to older individuals. Similar tendencies were observed regarding the MAD, confirming results of previous studies [ , , , , ]. The study presented here comprises two obvious shortcomings: the number of individuals ages 60+ was rather small within our validation set. Consequently, prediction accuracy of the model cannot be reliably assessed within this age group. Secondly, environmental influences have not been taken into account in this study, even though they might play a role in the changing of DNA methylation patterns . 
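The age-band evaluation described above boils down to tabulating absolute deviations against tolerance thresholds and age categories; a toy illustration in base R, with made-up ages and predictions, is given below.

```r
# Made-up chronological ages and model estimates, for illustration only.
chron <- c(4, 9, 15, 23, 31, 42, 55, 63, 71)
pred  <- c(6, 7, 18, 25, 36, 38, 49, 70, 64)
dev   <- abs(pred - chron)

# Share of predictions within +/-3, 4, 5 and 6 years:
sapply(c(3, 4, 5, 6), function(tol) mean(dev <= tol))

# Mean absolute deviation per age category:
age_cat <- cut(chron, breaks = c(0, 20, 40, 60, 100))
tapply(dev, age_cat, mean)
```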
The main aim of this study was to evaluate a set of CpG sites as reliable DNA methylation predictors of chronological age in minors as well as in adult individuals and different sexes by performing a minisequencing multiplex assay. Seven CpG sites (cg17861230 (+36 bp), cg09809672 (−12 bp), cg14361627, cg16867657 (−16 bp), cg06639320, cg10501210 (+6 bp), and cg07553761) in PDE4C, EDARADD, KLF14, ELOVL2, FHL2, C1orf132, and TRIM59 were included in this study. Validation of the final model revealed a cross-validated MAD and RMSE of 4.680 and 6.436 years in the training set and 4.695 and 6.602 years in the validation set, respectively, making this model likely to be useful in forensic investigations in the future. Regarding RMSE, sex-specific models did not outperform the unisex models in our limited data set. In larger sample sets, however, sex-specific modeling might increase prediction accuracy. DNA methylation analysis by minisequencing has the potential to become a tool in criminal investigation. Compared to massively parallel sequencing approaches, minisequencing has the benefit of being more flexible, less time consuming when analyzing small sample numbers, and easy to implement into forensic laboratories without the need for specialized sequencing equipment. ESM 1 (PDF 209 kb) ESM 2 (XLSX 11 kb) ESM 3 (XLSX 41 kb)
Impact of (forensic) expert opinions according to the Istanbul Protocol in Germany—results and insights of the in:Fo-project
82751ddd-e258-4dcc-b525-037e0b7e8dec
10085958
Forensic Medicine[mh]
Torture remains a widespread practice employed by many (para-)governmental actors to subjugate, terrorize and/or dehumanize other persons . The European Union has addressed this issue with the so-called Reception Directive stating that asylum applicants, belonging to vulnerable groups such as “persons who have been subjected to torture”, must be considered to have special needs and to require specific support . EU Member States are obliged to assess whether asylum applicants are indeed such vulnerable persons within a reasonable timespan after an application for international protection is made. EU directives are binding for EU Member States, requiring all EU Member States to establish the necessary structures for such an assessment. Suggestions for a comprehensive and routinely performed assessment have been made , but most often, if at all, countries tackle this challenge via temporary projects . The in:Fo-project (short for German “interdisziplinär: Folterfolgen erkennen und versorgen”) was launched to counteract a lack of structures in Germany and to optimize the medical and psychosocial support of persons with a history of torture, including the assessment of the experienced violence according to the Istanbul Protocol (IP) . The project was funded by the AMIF, the European Asylum, Migration and Integration Fund, and extended from July 1, 2018, to June 30, 2020. By building up a dedicated network of professionals to enable such assessments on a regional level, it was meant to provide insights and serve as a best practice model. In:Fo included a multi-professional approach that aimed on improving the identification of persons with torture experience by training medical staff in shelters for asylum seekers, the clarification of their individual needs by establishing a case management system, the assessment of the alleged torture experience by following the guidelines of the IP and the access to medical and psychosocial support-institutions. The IP is the international guideline for the investigation and documentation of torture. It advocates an interdisciplinary approach comprising a (forensic) medical examination as well as a psychological appraisal. As the forensic evaluation of traumatological findings is a key duty of Institutes of Legal Medicine in Germany, it is only reasonable to include such expertise in these assessments. However, although the IP has already been published in 1999, it is still rather unknown [ – ]. In Germany, forensic medical expert reports in the context of claimed torture have only rarely been written in the past. Even more, the aspect of interdisciplinarity and collaboration with psychiatric/psychological experts has been neglected. In this respect, the in:Fo-project represented an absolute novelty with regard to the assessment of alleged torture. The forensic physical evaluation was covered by expert physicians at the Institute of Legal Medicine in Düsseldorf, whereas the psychological appraisals were organized by three participating psychosocial/psychiatric facilities: the Psychosocial Centre for Refugees in Düsseldorf, the Medical Refugee Help Centre in Bochum and the Transcultural Day Care Unit at the Düsseldorf Clinic for Psychiatry and Psychosomatic Medicine (for easier reading, all three will be referred to as PSCs from here on). As part of the case management process, each study participant was first assessed at one of the participating PSCs. 
Based on the information gathered during an initial interview and—depending on availability and need—a consultation with a physician at said PSC for a general medical appraisal, a decision was made whether a forensic medical examination was likely to further the clarification of facts. Study participants presenting for a forensic medical examination mostly received a comprehensive forensic medical expert opinion; only in single cases merely a forensic medical report of findings was prepared. Language and culture mediators were provided whenever necessary. While there have been comparable programs in other countries [ – ], the in:Fo-project is Germany’s first large-scale attempt to compile expert opinions following the standards set by the IP. The focus of this publication lies primarily on the forensic medical aspects, especially their significance in the context of an interdisciplinary clarification of facts as proposed by the IP guidelines. We set out to examine whether this unprecedented degree of “forensic medical input” had measurable effects on asylum proceedings. In order to do so, we reviewed the project cases with regard to a possible “connection” between certain characteristics and the progress of the individual asylum proceeding. The methodological approach was based on a master’s thesis of one of the co-authors, submitted to the Department of Psychology at the University of Cologne . However, the presented results are completely original since new statistical calculations, based on a larger data set, had been performed. The study was approved by the local ethics committee (study number 2022–1869). Different variables that might have influenced the study participants’ asylum procedures were drawn from (a) the forensic medical documentation, (b) a questionnaire for PSC counsellors and (c) a query on the asylum status of the study participants. In the following, all variables used are explained in detail and displayed in italic characters. An overview is presented in Fig. . Study participants The participants of the in:Fo-project that are included in the study can be categorized as shown in Fig. . When entering the project, they underwent an extensive assessment of their individual needs. Forensic medical examinations were offered whenever deemed necessary to clarify stated torture experiences. Thus, expert reports were never commissioned by official authorities but were treated as private commissions and handed over to the study participants’ legal counsellors. Most of the examinations resulted in a full forensic medical expert opinion. Whenever feasible, they included an IP grading at the discretion of the forensic medical experts involved. In six cases, a forensic medical expert opinion was deemed irrelevant to the proceedings by the study participants’ legal counsellor after the examination had already been performed. Therefore, the forensic medical documentation was reduced to a simple report of findings (including only the stated history and the examination findings). In one case, even such a shortened documentation was rejected. In 43 cases, a medical appraisal was performed by physicians in the PSCs. These examinations aimed at a rather general physical and psychosocial assessment and were not equivalent to the more specialized forensic medical expert opinions and the psychological appraisals. 
Evaluation of forensic medical expert opinions/reports of findings
All forensic medical expert opinions and reports of findings were retrospectively scrutinized, regarding:
- Types of violence (see Table for definitions) specified in each case
- Sum of different violence types
- Use of IP grading: Was the IP grading system applied? If yes: which was the highest (highest IP grade) and which was the lowest (lowest IP grade) grade given?
- Inclusion of external medical findings: Was there explicit mention of other medical findings (e.g. radiological, dental, orthopaedic)?
- Extent of trauma: Number of separate entries regarding injuries; Number of pages
Certain specifics concerning the examination situation, the anamnesis and the study participant were also covered:
- Language mediation: Language and culture mediator present during the forensic examination?
- Minimal age of injuries: Number of months between the most recent injury (as reported by the participants) and the forensic medical examination
- Participant’s age: Age at the time of the forensic examination
- Time since arrival: Number of months between arrival in the European Union and the forensic examination
The forensic medical expert opinions were also reviewed concerning easily recognizable injuries that might possibly influence the decision-making process on the part of German authorities:
- Visibility of trauma: Visible trauma residues in the facial area? Visible disfigurements, amputations, etc.? Obvious and easily noticeable loss of a body function (e.g. pronounced walking impairment)?
Questionnaires for PSC counsellors
Professional opinions of the responsible PSC counsellors were gathered to operationalize the proceedings for statistical analysis. Following the end of the project duration, questionnaires were sent out for all 130 cases. PSC counsellors had to rate the impact of the participants’ inclusion into the in:Fo-project with its assessment process. Counsellors were first asked to indicate both how distressful and how helpful the inclusion had been—from their client’s point of view. Secondly, they were asked to do so from their own professional perspective, with further emphasis on whether inclusion had been helpful regarding diagnostic and therapeutic aspects. They were given the option to elaborate on this via free-text answers. They also stated whether any expert opinion had been prepared as part of the assessment process (compilation of forensic medical expert opinion, … of psychological appraisal, … of general medical appraisal) and whether any of them had actually been introduced into the asylum procedure (introduction of forensic medical expert opinion, … of psychological appraisal, … of general medical appraisal). Counsellors were then asked to judge from their subjective point of view whether these expert opinions influenced the asylum procedure upon their introduction: besides a positive or negative influence on the protection status, this also included, for example, the assignment of an appropriate decider, specifically trained for cases of alleged torture. The answers represent the dependent variable (DV) PSC-rated influence on asylum procedure. Finally, they were asked to specify whether a legal counsel and/or an asylum procedure advisor had been involved in each case.
Query on asylum status
As a follow-up, independent of the above-mentioned questionnaire, the responsible PSC counsellors were also asked to review IP assessment cases to determine, if there had been an objective gain (i.e. higher status) or not (i.e. 
This constituted the dichotomous DV rise in asylum status. Asylum status was differentiated as follows, from “highest” to “lowest”:

Settlement permit
Refugee protection
Subsidiary protection
Deportation ban
Residence authorisation
Short-term permit (permission to remain until deported) or undocumented

Statistical analyses

All calculations were performed using the “jamovi” software ( www.jamovi.org ), including logistic/linear regression and bivariate analysis. Two different operationalizations were chosen:

Analysis of DV PSC-rated influence on asylum procedure: We tested whether the introduction of expert opinions had a measurable influence on the asylum proceedings. Via linear regression, we calculated whether their introduction predicted a perceived influence on the proceedings from the responsible PSC counsellors’ point of view. Three dummy-coded independent variables (introduction of forensic medical expert opinion/psychological appraisal/general medical appraisal) were modelled as predictors. We separately controlled for these independent variables. More complex calculations regarding a possible influence of the introduction of more than one report/appraisal were not feasible due to small numbers.

Analysis of DV rise in asylum status: We calculated via logistic regression whether one or more independent variables could be used to predict a heightened asylum status. The independent variables were used as listed in Fig. , the only addition being that introduction of a forensic medical expert opinion was controlled for with highest IP grade as a covariate.

Bivariate analysis: We also performed discovery-driven bivariate analysis regarding the abovementioned DVs. For the interval-scaled DV PSC-rated influence on asylum procedure, this included a correlation matrix with the following variables: highest IP grade, lowest IP grade, minimal age of presented injuries, participant’s age, extent of trauma, and total sum of different violence types. For the dichotomous DV rise in asylum status, this included chi-square tests with the following variables (which could not be included in the abovementioned regression analysis due to small numbers): legal counsel, asylum procedure advisor, and stage of the asylum procedure.
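The calculations above were run in jamovi; for readers who prefer a scripted workflow, the following is a minimal sketch of an equivalent analysis in Python. The table layout and all column names (psc_rated_influence, intro_forensic_opinion, highest_ip_grade, rise_in_status, legal_counsel) are hypothetical placeholders and not the project's actual coding scheme.

```python
# Minimal sketch of the two operationalizations described above, assuming a
# case-level table with hypothetical column names; the study itself used jamovi.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

df = pd.read_csv("info_project_cases.csv")  # hypothetical export, one row per case

# (1) Interval-scaled DV "PSC-rated influence on asylum procedure":
# linear regression with a dummy-coded predictor, controlling for the highest IP grade.
linear_model = smf.ols(
    "psc_rated_influence ~ intro_forensic_opinion + highest_ip_grade", data=df
).fit()
print(linear_model.summary())

# (2) Dichotomous DV "rise in asylum status": logistic regression on the same predictors.
logit_model = smf.logit(
    "rise_in_status ~ intro_forensic_opinion + highest_ip_grade", data=df
).fit()
print(logit_model.summary())

# (3) Bivariate check for variables too sparse for the regression models,
# e.g. involvement of a legal counsel vs. a rise in asylum status.
table = pd.crosstab(df["legal_counsel"], df["rise_in_status"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```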
Demographic and chronological information as derived from the forensic documents

Eighty-seven participants identified themselves as male and 11 participants as female. Ages ranged from 16 to 63 years (mean age: 31.0 years). Countries of origin varied between 27 countries; most frequent were Guinea (19.4%), Sri Lanka (12.2%) and Iraq (7.1%). Language mediation was necessary in 82.7% of the cases, most often for Arabic (11.2%), Tamil (11.2%) and Farsi/Dari (10.2%). Based upon the stated history, the minimal age of the presented scars ranged between 10 and 240 months (mean ≈ 65 months). The participants had arrived in the European Union between 6 and 141 months (mean ≈ 38.5 months) prior to the examination. Most of the participants had a residence authorisation when entering the project.

Types of violence, medical information and IP grading as derived from forensic medical expert opinions

Blunt force trauma was by far the most frequently named form of violence and was mentioned in a total of 96 cases (≙ 98.0%), with objects (88.8%) and hands/fists (59.2%) as most frequent vectors (Table ). Other common types of violence included thermal violence (38.8%) and sharp force trauma (35.7%). The sum of different types of violence varied markedly between cases. While 17.3% of individuals reported only one type of violence, other cases included up to eight different types of violence (Table ). As for the trauma extent, the length of forensic expert opinions varied between 4 and 14 pages (mean ≈ 7.91), while the number of separate entries varied between 1 and 76 (mean ≈ 22.12). Visible trauma residues in the facial area could be discerned in 50 cases, visible deformities in 12; an obvious loss of function was present in 17 cases. External medical documentation was included in 33 of the forensic medical expert opinions. IP grading was applied in 60 of the 92 forensic medical expert opinions. Grades were given as follows:

39 cases were graded as “consistent with”
12 cases were graded as “highly consistent”
4 cases were graded with varying IP grades and “highly consistent” as highest grade
5 cases were graded with varying IP grades and “typical of” as highest grade

The remaining 32 forensic medical expert opinions abstained from an IP grading and included an individually worded plausibility check instead. The proportion of expert opinions without IP grade increased notably over the course of the project: while the first 6-month period saw only 7.1% of expert opinions without IP grading, the fourth and final 6-month period included 58.3% of such expert opinions.
Questionnaire results

Final questionnaires were filled out by the responsible PSC counsellors at least partly in 119 cases (≙ 91.5% of all IP assessment cases), out of which 81 cases had also received a forensic medical expert opinion. Table shows how the counsellors rated the impact of inclusion into the in:Fo-project and its subsequent assessment process on the study participants. The process was mostly deemed distressful but also diagnostically/therapeutically helpful. Counsellors elaborated on this via free-text items with statements such as “being seen”, “being taken seriously”, “normalisation of the experience” and “acknowledgement of suffering” from their clients’ perspective, as well as “ways out of speechlessness”, “detabooisation” and “insight” regarding therapeutic helpfulness. Table also shows how the responsible PSC counsellors rated the DV PSC-rated influence on asylum procedure. Results were somewhat balanced, with a marked majority stating they were unclear on this topic.

Results of queries on asylum status

A total of 62 project cases had been classified by the responsible PSC counsellors regarding a possible rise in asylum status. In 34 cases, the asylum status had undergone an improvement; in the remaining 28 cases, there had been no change or a downgrade in the asylum status. In 50 cases, a classification was not possible because the asylum procedures were not yet completed.

Results of statistical analysis

Results regarding DV PSC-rated influence on asylum procedure: The DV PSC-rated influence on asylum procedure was predicted by the introduction of a forensic medical expert opinion into the asylum procedure when controlling for highest IP grade (p = 0.016) (Table ). Moreover, the introduction of a forensic medical expert opinion into the asylum procedure was considered influential when including the second predictor use of IP grading (p = 0.020) (also Table ). Notably, the beta weight of the second predictor was negative. The other calculations (linear regressions, bivariate analyses) did not yield any significant results regarding this DV.

Results regarding DV rise in asylum status: All analyses—both logistic regressions as well as chi-square tests—referring to the DV rise in asylum status did not yield statistically significant relations. Table presents examples of the logistic regression calculations.
Despite inclusion of numerous items and extensive statistical analysis, our study did not reveal any significant correlations between the asylum status (DV rise in asylum status) and any of the other variables we examined; neither the given IP grade nor any other factor could be linked to a (positive) change in the asylum status. However, at least from the subjective perspective of the PSC professionals involved, our results suggest that the introduction of a forensic medical expert opinion into an asylum procedure had an impact when a higher IP grade was applied. The opposite was the case when including the predictor use of IP grading. These results imply that forensic medical expert opinions were more likely to be perceived as having a favourable influence on the asylum procedure if they contained a higher IP grade or no IP grade at all, i.e. presenting only an individually worded plausibility check. This finding somewhat contradicts other similar studies. In Italy, Franceschetti et al. found a correlation between a favourable outcome of the asylum procedure and higher IP grades, the number of individual lesions/scars, as well as certain types of violence (gunshot, sharp force). Aarts et al. reported results from a Dutch study, demonstrating a significant correlation between the presence of physical symptoms and their consistency with the given story and the refugee status decision. The differences from our study may have various reasons. We categorized some forms of violence differently compared to Franceschetti et al. ; also, the extent of injuries was quantified based on the number of entries in the expert opinion. And while Aarts et al. conducted statistical analysis based upon the final judicial outcome, we used the last known status of proceedings in comparison to the status at project entry. This was at least in part necessary because asylum procedures in Germany are often very lengthy, making a full follow-up very difficult and causing gaps in the data collection. In some cases, contact between the PSC counsellor and the study participant had been lost. Also, since we were not able to evaluate court files or any official documents, we have no knowledge about other factors apart from our reports/appraisals that might have been decisive for the final verdict. Comparing the Italian and Dutch collectives with ours, we also found relevant differences regarding the rate at which certain IP grades were applied (Fig. ). Like Franceschetti et al. , we applied the “consistent” grade most often (49.1% and 41.2%).
Strong deviations can be seen when it comes to the IP grades representing higher levels of consistency—even more so when comparing to Aarts et al. (Fig. ). This might be in part attributable to differences in the mean injury age. Due to its geographic position, Italy is a major point of entry to EU territory. Persons applying for asylum in Italy might present relatively shorter time spans between injury and (forensic medical) examination—which could result in a better detectability of at least some wounds. As far as the scar age could be retraced, none of the residues presented to us were younger than 10 months. This may have impacted the applicability of the IP grades, as the ongoing healing process reduces the informative value of a wound . Furthermore, the participants typically entered the in:Fo-project in an advanced state of their asylum procedure. Very “obvious” cases—which would have received a high IP grade—may already have been “filtered out” and had attained full asylum status or the like before even contacting a PSC, thereby bypassing the in:Fo-project completely. Also, some persons who would have been eligible for a forensic medical examination declined the offer. Conceivably, this concerned severe/obvious cases at a disproportionate rate. It is quite possible that such external factors could have contributed to a selection bias in our sample population. Even so, the findings in our study raise the question of whether international standards such as the IP can be used uniformly under different conditions and whether the assigned IP grades are internationally comparable. Nonetheless, since a general positive effect of the introduction of forensic medical expert reports in asylum procedures can be derived from our results, this study supports the call for a wider application of the IP standards in cases of alleged torture. Besides objective impacts, the PSC counsellors also reported subjective gains on a diagnostic and therapeutic level—although the evaluation process was described as stressful for the study participants. However, (forensic) medical experts must be careful with the simplified, schematic evaluation that is proposed by the IP grades. An informal query among forensic physicians involved in the project identified a “feeling” that the IP grading system did not always adequately reflect the complexity of the cases. The presented torture sequelae had in virtually all cases already undergone full cicatrisation, which hampered an in-depth assessment of injury characteristics and most often precluded high IP grades. It seems that the IP grading system reaches its limits when dealing with persons who suffered torture a long time ago, though it might certainly help to standardize and simplify the evaluation of injuries shortly after the torture event (the circumstances for which it was primarily developed), especially for clinicians with a lack of forensic experience and training. In cases in which only scars are left for evaluation, our findings suggest that an individually worded assessment, ideally done by a forensically experienced physician, should be preferred. The importance of specific training has been shown before , while other authors also reported a modification of the IP grading system . Also, the relevance of an additional psychological appraisal must be underlined.
Although an impact on asylum procedures could not be detected statistically in our study, an evaluation of the psychological consequences of torture is of even greater importance, especially in cases in which physical wounds have completely healed and “disappeared” or never existed in the first place . This publication is subject to some limitations. The data used was gathered from various sources, which may have led to discrepancies regarding definitions, categorizations and the number of items. Some cases were primarily handled by external PSCs, who coordinated the entry into the project and the follow-up communication. Those cases often suffered from a reduced reply rate. In some cases, contact with the study participant was lost. Some persons who would have been eligible for referral to the Institute of Legal Medicine declined the offer. A final asylum status could not be determined in all cases since asylum proceedings were not always completed. Several potentially relevant factors were not easy to operationalize (e.g. country of origin: a potential variable would have resulted in 27 values, precluding statistical calculations due to small sample sizes). Lastly, one of the operationalized DVs was dichotomous, which restricted the available statistical options, possibly concealing some underlying associations. The evaluation of the first large-scale attempt in Germany to implement the IP recommendations yielded some unexpected results. Effects on asylum proceedings and the consequent asylum status could be found when a forensic medical expert opinion was introduced and if (a) persons presented considerable injuries resulting in a high IP grading or if (b) the forensic expert opinion abstained from an actual IP grade in favour of an individually worded approach. This raises questions regarding the use of the IP grading system in differing scenarios. Though this easy-to-handle approach is recommendable especially for forensically inexperienced physicians in cases of recent torture events, it reaches its limits when examining persons who suffered torture a long time ago. Under such difficult circumstances, the evaluation should be performed by an experienced forensic physician who should rely on his/her own words. Apart from that, the expert reports should follow the recommendations of the IP, and further efforts must be made to make the IP known, since favourable effects were detected not only with regard to the asylum proceedings but also with a view to the psychosocial well-being of the study participants.
Biodegradation of COVID19 antibiotic; azithromycin and its impact on soil microbial community in the presence of phenolic waste and with temperature variation
d7ffe9e1-84e2-4f5c-9b4d-79b6dd8ebd5c
10085964
Microbiology[mh]
The consumption of antibiotics has increased over the past 30 years. About 50 to 90% of the antibiotics consumed are excreted as a mixture of parent compounds and bioactive metabolites, which eventually reach open water bodies. Long-term exposure to small amounts of antibiotics compromises human health through disruption of the endocrine system and the emergence of antibiotic-resistant bacteria (Wang et al ). After the extensive use of antibiotics during the COVID-19 pandemic, azithromycin was expected to be present in irrigation water. Azithromycin belongs to the macrolide antibiotic class, whose mode of action is inhibition of protein synthesis by reversibly binding to the 50S ribosomal subunit. It is commonly used by humans for respiratory tract infections (Grenni et al , Jafari Ozumchelouei et al. ). A recent study reported that azithromycin levels in treatment plants in the Persian Gulf area increased up to 48-fold after COVID-19 (Mirzaie et al ). Antibiotics eventually reside in agriculture, aquaculture and treatment plants (Saravanan et al ). The fate and removal of antibiotics in soil are determined by the extent of adsorption/desorption and biodegradation by the indigenous microbial community (Conde-Cid et al ). The increase in antibiotics is expected to affect the soil microbial community and, in turn, microbial performance and catabolic activity, especially in the presence of other contaminants (Liang et al ). The release of antibiotics into soil poses a potential risk of their entering the food chain, affecting agriculture and eventually human health (Conde-Cid et al ). The study of the microbiome sheds light on the indigenous microbial community present in a certain environment and in the presence of a certain pollutant. This informs how the bioremediation process can be managed. Information generated from studying the microbiome and the produced metabolites can help tailor the bioremediation process to maximize efficiency while cutting down costs. Recent research studies have shown that bacterial consortia degrade dyes more efficiently than individual bacterial isolates (Krithika et al. ). In addition, the use of a consortium achieves a one-pot treatment regimen; adding compounds that can assist the consortium enhances the bacterial catabolic activity and helps maintain ecosystem balance. This can be attributed to the presence of several synergistic metabolic networks created by consortia compared to pure individual microbial isolates. Recent changes in climate have altered microbial activity in ecosystems, and the combined effect of temperature and antibiotic presence in soil is expected to have a profound effect, inducing changes in the catabolic performance and microbial community of soil. From this standpoint, the aim of the present work is to study the impact of azithromycin-containing water on the soil indigenous bacterial community and its catabolic activity in the presence of phenolic wastes at 30 and 40 °C.

Soil samples and incubation conditions

A soil sample was taken from one of the gardens located at National Center for Radiation Research and Technology (NCRRT) premises in Nasr City, Cairo, Egypt. The garden was cultivated with ornamental flowers and date palms.
The soil sample was collected at a depth of 15–20 cm (near the rhizosphere of a 12-year-old palm tree), placed in sterile polyethylene bags, and stored at 4 °C until use. The soil sample was divided into portions of 5 g each. One soil sample was irradiated with 25 kGy using an Indian Gamma Chamber 4000 A at a dose rate of 0.725 kGy/h and served as a control (sample with no viable bacteria). A final concentration of 10,000 ppm/kg soil azithromycin (Azithromycin®, Pfizer, USA) was added to the soil by irrigation of the soil portions. Individual portions (with the antibiotic) were incubated at 30 °C and 40 °C in the presence of phenolic wastes, and samples were taken after 0 and 7 days to assay the total bacterial count. The results are reported as LogN.

The phenolic wastes

Wild berry, pomegranate, and red-grape fruits were purchased from local markets in Cairo, and the fruit pomaces were recovered after squeezing out the juices; phenolic waste was applied at about 0.1 g per 1 g of soil. In addition, spent tea waste was collected from local café shops and restaurants in Cairo and Giza governorates. Afterward, the wastes were dehydrated at 60 °C for 6 h and stored at 4 °C until use (El-Bialy and Abd El-Aziz ).

Quantification of azithromycin byproducts

At the end of the incubation time, the antibiotic under investigation was extracted and quantified, and the microbial load of soil samples was determined using standard protocols (Wolf ). In a preliminary experiment, the microbial load of soil samples that received azithromycin at 1000 or 10,000 ppm/kg soil was determined after 5 and 7 days to determine the optimum time, using three approaches:

UV–visible spectroscopy: The azithromycin antibiotic was extracted from the soil samples using acetonitrile as previously described (Miranda et al. ) and determined by acidic hydrolysis with 27N HCl and reading the absorbance at 482 nm using a T60 UV–Vis spectrophotometer (Haleem et al. ). The absorbance readings were converted to azithromycin concentrations using a standard curve prepared with increasing concentrations of azithromycin (0.1–0.5 µg/mL).

HPLC analysis: The treatments and replicates were harvested after 7 days. Azithromycin was extracted by adding different amounts of antibiotic dissolved in DMSO and completing to 1000 µl with HPLC-grade methanol. The extracts were filtered directly into amber HPLC vials (0.22 µm). The conditions for antibiotic detection and quantification were as follows: mobile phase: methanol (50%):acetonitrile (50%); column: 120CC-C18 column (Poroshell 120, length 100 mm, diameter 4.6 mm, particle size 2.7 micron; 600 bar); column temperature: 25 ºC; flow rate: 1 mL min −1 ; retention time: approximately 1.179; wavelength: 240 nm. 100 µl of the extracted sample was suspended in the mobile phase and filtered through a 0.45 µm membrane filter. The column was equilibrated for at least 1 h with mobile phase flowing through the chromatographic system before starting the assay. About 5 µl of the standard or sample solution was injected into the chromatograph using the conditions described above. Considering the possibility of using this analytical procedure in stability studies, concentrations of 15, 30, 60, 125 and 250 mg/mL were prepared by dissolving azithromycin in the mobile phase in order to study the linearity of the system response. The regression of peak area versus concentration was linear (Y = 17.363x) with a correlation coefficient r = 0.9993 and confidence intervals at P = 0.05.
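The following sketch illustrates the calibration logic described above. Only the standard concentrations, the reported slope (Y = 17.363x) and the correlation coefficient come from the text; the peak areas and the sample value are hypothetical illustrations.

```python
# Sketch of the calibration step: fit a through-origin line (Y = peak area,
# x = concentration) to the standards, then invert it for unknown samples.
# Standard concentrations follow the text; the peak areas are illustrative.
import numpy as np

conc = np.array([15.0, 30.0, 60.0, 125.0, 250.0])        # standard concentrations (mg/mL)
area = np.array([261.0, 522.0, 1040.0, 2168.0, 4342.0])  # hypothetical peak areas

slope = np.sum(conc * area) / np.sum(conc ** 2)  # least squares through the origin
r = np.corrcoef(conc, area)[0, 1]                # linearity check (study reports r = 0.9993)
print(f"Y = {slope:.3f}x, r = {r:.4f}")

def area_to_concentration(peak_area, slope=17.363):
    """Convert a sample peak area to concentration using the reported slope."""
    return peak_area / slope

print(area_to_concentration(1820.0))  # hypothetical sample peak area
```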
Fourier transform infrared spectroscopy (FT-IR): Soil samples containing azithromycin were subjected directly to ATR-FTIR analysis. Scanning was performed from 400 to 4000 cm−1 using an ATR-FTIR BRUKER VERTEX 70 optics layout device at NCRRT. The analytical spectrum was then compared to the library to identify the functional groups.

Microbiome analysis

The microbial DNA was extracted using the Qiagen DNeasy PowerMax Soil kit® according to the manufacturer’s instructions. The 16S metagenomics library preparation kit includes two sets of primers that correspond to the hypervariable regions of the bacterial 16S rDNA gene. The primer sets were V2-4–8, V3-6 and V7-9. The sequencing was carried out at Colours Medical laboratory (Maadi, Egypt) using an IonTorrent™ Next Generation Sequencer. Diversity metrics were calculated using core-metrics-phylogenetics. QIIME2 was used to visualize the results, in addition to the R packages phyloseq and ggplot2. Details of all kits used and links to products are in Supplementary material S1.
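Diversity metrics and plots were generated with QIIME2 and the R packages phyloseq and ggplot2; as a simplified, hypothetical stand-in, the sketch below only shows the underlying arithmetic (relative abundance, a basic Shannon index, and the >1000-read cutoff used later for the abundance figures) on a genus-by-sample count table. The file and column layout are assumptions, not the project's actual pipeline output.

```python
# Simplified stand-in for the diversity/abundance step on a hypothetical
# genus-by-sample count table (rows = genera, columns = samples).
import numpy as np
import pandas as pd

counts = pd.read_csv("genus_counts.csv", index_col=0)  # hypothetical export

# Relative abundance per sample (as used for the stacked-abundance figures).
rel_abundance = counts.div(counts.sum(axis=0), axis=1)

# Shannon alpha diversity per sample.
def shannon(col):
    p = col[col > 0] / col.sum()
    return float(-(p * np.log(p)).sum())

alpha = counts.apply(shannon, axis=0)
print(alpha)

# Genera passing the >1000-read cutoff used for the abundance plots.
major_taxa = counts[(counts > 1000).any(axis=1)]
print(major_taxa.index.tolist())
```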
Growth of indigenous bacterial soil samples in the presence of azithromycin and at two different temperatures

Changes in LogN between 0 and 7 days of incubation were monitored for soil samples incubated at 30 and 40 °C. The obtained results show that growth changed in the presence of phenolic wastes: the highest growth was observed for soil samples incubated with spent tea waste at 30 °C, which increased 1.16-fold after 7 days, compared with a 1.26-fold increase for the control soil sample without phenolic wastes. The remaining phenolic wastes showed minimal or low change in LogN after 7 days of incubation (Fig. ). On the other hand, soil samples incubated at 40 °C showed a 1.15-fold increase for the control samples and a 1.05-fold decrease in LogN for samples incubated with spent tea waste after 7 days of incubation. The remaining phenolic wastes showed a larger decrease, reaching 1.26-fold, when added to soil and incubated at 40 °C for 7 days (Fig. ).

Degradation of azithromycin at two different temperatures, with phenolic wastes and using different assays

The degradation pattern of azithromycin was examined in the presence of the different phenolic wastes added to soil and at the two temperatures. The residual antibiotic was assayed after incubation for 5 and 7 days at 30 and 40 °C, and the data are represented in Fig. , , respectively. The results show that incubation with different wastes led to differences in the degradation activity of the microbial consortia. The highest degradation at 30 °C was detected for soil samples with no phenolic waste amendment, while at 40 °C it required the addition of spent tea waste. Degradation was slow for some samples after 5 days of incubation; on the other hand, samples with and without added phenolic wastes showed similar degradation patterns, which indicates that time is a key factor. The highest degradation after 5 days at 40 °C was observed for samples incubated with spent tea waste, exceeding that of samples without phenolic waste amendment or with wild berry, grape or pomegranate wastes. On the other hand, incubation for 7 days resulted in similar degradation percentages for samples incubated with spent tea waste (99.164%) and samples without any amendment (98.56%). Degradation was confirmed to be a biotic process, since exposure of soil samples to gamma radiation at the sterilization dose of 25 kGy led to almost no degradation: only 2.95 and 2% for samples incubated for 5 days at 30 and 40 °C, respectively.
For samples incubated for 7 days, the degradation was 2.77 and 2% at the abovementioned temperatures. FTIR spectra of azithromycin at concentrations of 125, 250 and 375 mg/mL showed peaks at 1625.9 cm-1 and 1010 cm-1 that increased with increasing azithromycin concentration (Fig. a). Extraction of residual azithromycin from soil samples showed a stronger decrease in the characteristic azithromycin peak for samples incubated at 40 °C with spent tea than for control samples incubated at 30 °C (Fig. b). These results are consistent with the residual azithromycin results shown above in Fig. . Residual azithromycin assayed by HPLC showed the same pattern as the UV assay and FTIR: the residual azithromycin was 1053 µg/mL for the soil sample incubated at 30 °C and 766.96 µg/mL for that incubated at 40 °C with spent tea for 7 days (Fig. ).

Soil microbial consortium in the samples containing azithromycin at two different temperatures

To understand the relationship between the presence of azithromycin and the soil microbial community at 30 °C and at 40 °C with spent tea, the whole bacterial community was identified at the family and genus levels. Dominant families after incubation of the soil sample at 30 °C were Pseudomonadaceae, Rhizobiaceae, Desulfobacteraceae, Deinococcaceae, Bacillaceae and Sphingomonadaceae. For soil samples incubated at 30 °C, the majority genera in descending order were Bacillus, Krasilnikovia, Lysinibacillus, Rhodococcus, Sphingobium, Rubrivivax and Paenibacillus. The relative abundances of families and genera with a cut-off of >1000 are represented in Fig. a, b. On the other hand, soil samples incubated at 40 °C with spent tea showed dominant families of Enterobacteriaceae, Bacillaceae, Paenibacillaceae, Sphingobacteriaceae and Bradyrhizobiaceae. The majority genera in descending order were Bacillus, nitrate reducers, Brevibacillus, Microbacterium, Serratia, Paenibacillus and Enterobacter. The relative abundances of families and genera with a cut-off of >1000 are represented in Fig. a, b. Images of the complete family and genus profiles for both samples are presented in S2.
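As a rough illustration of the arithmetic behind the values reported above, the sketch below computes percent degradation from an initial versus residual concentration and the fold change of LogN counts. The initial concentration and the LogN inputs are illustrative assumptions; the residual HPLC values follow the results above.

```python
# Illustrative arithmetic only: percent degradation from initial vs. residual
# azithromycin, and the fold change of bacterial counts on the LogN scale.
def percent_degradation(initial, residual):
    return 100.0 * (initial - residual) / initial

initial_conc = 1200.0  # µg/mL, hypothetical initial extract concentration
for label, residual in [("30 °C", 1053.0), ("40 °C + spent tea", 766.96)]:
    print(label, f"{percent_degradation(initial_conc, residual):.1f}% degraded")

def logn_fold_change(logn_start, logn_end):
    """Fold change of LogN values as reported in the text (e.g. 1.16-fold)."""
    return logn_end / logn_start

print(logn_fold_change(6.0, 6.96))  # illustrative LogN values giving ~1.16-fold
```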
The presence of antibiotics in the environment can modify native microbial communities and diversity thereby altering natural biogeochemical cycling, causing potentially detrimental effects on agriculture as well as contributing to the growing worldwide antibiotic resistance epidemic (Maier and Tjeerdema ).
The present work demonstrates the changes that took place due to the presence of azithromycin in soil at two different temperatures and in the presence of different phenolic wastes. The results show that the total bacterial community changed in terms of count and dynamics. The persistence of macrolide antibiotics in the soil depends on the proliferation of biodegrading microorganisms in the soil and is independent of prior exposure to the drug (Topp et al. ). The higher initial azithromycin concentration was a key variable for biodegradation kinetics. Although many reports previously described the positive impact of manure addition on the biodegradation of chemicals in agricultural soils, Topp et al. ( ) found that sorptive interactions of azithromycin with organic matter reduce macrolide bioavailability for biodegradation. Microbial respiration in azithromycin-contaminated soil was significantly greater in the biosolids alone than in the amended manured sand treatments, reflecting the greater organic matter and nutrient contents of biosolids than of the manured sand. Amendment with biosolids (1% w/w) increased the organic matter and nutrient content of the manured sand but did not significantly affect microbial respiration (Sidhu et al. ). Optimizing the reaction time of removal processes reduces operating costs and energy consumption (Bazrafshan et al. ). Terzic et al. ( ) revealed that the elimination efficiency of the macrolide antibiotic azithromycin in activated sludge reached 99% after a prolonged incubation period exceeding 160 h. The half-life of azithromycin in outdoor mesocosms over a period of three years was calculated to be in the range of 770 ± 181 to 11.77 ± 7.34 days when soil-biosolid mixtures were incubated together after previous soil exposures to macrolide antibiotics (Maier and Tjeerdema ), although there was no evidence for the accelerated degradation of many pharmaceuticals in Mexican soils that have received untreated wastewater for up to 100 years (Topp et al. ). Elevated temperature significantly increases cavitation intensity and azithromycin ionization, promoting its removal, whereas at lower temperature the hydroxyl radical concentration decreases and the degradation of biocides and pharmaceuticals subsequently decreases (Tao et al. 2015). Yazdani and Sayadi ( ) demonstrated that the removal rate of organic compounds is directly proportional to the temperature because organic molecules migrate from the solution to the region where the hydroxyl radical concentration is high. Typical macrolide antibiotics are relatively large molecules, which consist of a macrocyclic lactone ring containing 14 to 16 atoms, substituted with hydroxyl, alkyl and ketone groups and with neutral or amino sugars bound to the ring by substitution of hydroxyl groups (Terzic et al. ). One of the key initial steps in azithromycin transformation is enzymatic hydrolytic opening of the macrolactone ring, most probably mediated by the enzyme macrolide esterase; this could be the reason for the clinically relevant resistance (Morar et al. ). Ester formation occurred either by the removal of one or both sugar units or by some modification of the desosamine sugar moiety (Voigt and Jaeger ). This was followed by the formation of the corresponding phosphorylated or glycosylated transformation products (Terzic et al. ), since phosphorylation is a well-known microbial strategy for the inactivation of macrolide antibiotics (Dinos ).
The macrolactone ring opening was followed by two biotransformation steps, including two subsequent water losses that could have occurred at two different positions. After that, azithromycin was efficiently mineralized to carbon dioxide and inorganic salts under both aerobic and anaerobic conditions (Terzic et al. ). The FTIR spectra detected in our work showed peaks characteristic of C–H in methyl groups, the lactone C=O group and C–O. Robaina et al. ( ) reported that FTIR can be used to detect azithromycin, with the lactone C=O peak as its characteristic signal, and that it can also be used for quantitative analysis. Miranda et al. ( ) also reported the applicability of Fourier-transform spectroscopy for the detection of azithromycin. While both used acetonitrile extraction of the antibiotic from soil prior to spectroscopic analysis, our work demonstrates detection directly in soil samples, which is easier and more practical to apply if FTIR is coupled to a handheld device. Assi et al. ( ) validated the use of portable near-infrared spectroscopy for the detection of several groups of antibiotics in their pure form. The results obtained here showed an increase in transmittance peaks proportional to the azithromycin concentration in the soil sample, and they followed the same patterns detected using UV–Visible spectroscopy and HPLC in the present study. This result encourages the use of FTIR directly on soil samples without extraction. The presence of azithromycin and phenolic wastes in soil incubated at elevated temperature resulted in changes in the bacterial community. Although little information has been published on the combination of the three tested parameters, each of them is known to change the microbiota. Liang et al. ( ) reported that the presence of antibiotics changed the kinetics of degradation and the dominance of antibiotic-resistant bacteria in a biofilm reactor. Cerqueira et al. ( ) reported changes in the microbiome and resistome of soil irrigated with three different antibiotics and the prevalence of Xanthomonadales species in the root microbiome of lettuce grown in this soil. The presence of fertilizer has also been reported to manipulate the soil microbiome and alter the soil resistome (Li et al. ). In the present study, soil samples incubated at 40 °C with spent tea waste showed the highest degradation of azithromycin; however, Enterobacteriaceae became predominant, representing almost half of the microbial community. The Enterobacteriaceae family is known to include several Gram-negative pathogenic bacteria. On the other hand, the soil sample incubated at 30 °C without compost showed less azithromycin degradation but had several co-dominant families, such as Pseudomonadaceae, Xanthomonadaceae, Rhizobiaceae and Sphingobacteriaceae. This confirms the reports above that changes in the soil microbial community can result from additions of antibiotics or other compounds. In conclusion, this study highlights that the accumulation of antibiotics in soil disturbs the natural catabolic activity of the indigenous microbial community. The dominance of one family over another controls the degradative activity of the indigenous soil community, which can also affect the soil community in terms of plant-related activity and can be expected to affect plant growth and/or pathogenesis.
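Because the FTIR peak intensities described above scale with azithromycin concentration, a simple linear calibration can in principle be used for quantitation. The sketch below fits such a calibration with NumPy; the standard concentrations are the ones quoted above (125, 250 and 375 mg/mL), but the peak intensities and the sample reading are hypothetical placeholders, not measurements from this study.

import numpy as np

conc = np.array([125.0, 250.0, 375.0])   # standard concentrations (mg/mL) from the text
peak = np.array([0.21, 0.43, 0.62])      # hypothetical baseline-corrected peak intensities at 1625.9 cm-1

slope, intercept = np.polyfit(conc, peak, 1)   # least-squares calibration line

def concentration_from_peak(peak_value):
    """Invert the calibration line to estimate concentration from a measured peak intensity."""
    return (peak_value - intercept) / slope

print(round(concentration_from_peak(0.35), 1))  # hypothetical soil-sample reading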
The presence of compost also plays a role in soil catabolic activity, and more studies are recommended to understand its contribution, especially in the context of climate change. The ability to detect residual antibiotic in soil using FTIR paves the way for an IR sensor for simple detection, as opposed to laborious analytical methods, and would also reduce the time needed for detection. Modern IR sensors are small and can be coupled with mobile phones, and this technology is likely to become a future detection tool for environmental pollutants. Further work is underway to build on the results obtained. Electronic supplementary material: Supplementary file 1 (DOCX 425 KB).
Pediatrics in Disasters
e7927aa7-4fe7-4a1d-bee9-f1e54b279ae7
10086103
Pediatrics[mh]
• This report describes the Pediatrics in Disasters (PEDS) course during a hybrid in-person and virtual pilot due to the coronavirus disease 2019 pandemic. • The hybrid course included multinational faculty and participants, with international and local faculty collaborating on needed revisions before the course. • Course activities included synchronous and asynchronous lectures and small group sessions co-led by in-person and virtual faculty, followed by student knowledge tests and evaluations. • Student and facilitator 2021 surveys and 2019 to 2021 student feedback reported overall satisfaction with the course while suggesting needed improvements to maximize international and virtual student participation. • The hybrid PEDS course structure successfully achieved course goals and incorporated international faculty; lessons learned will guide future course revisions and fellow global health educators. Man-made or natural disasters—due to armed conflict, environmental catastrophes, forced displacements, and epidemics—affect thousands of people worldwide each year with loss of home, livelihood, or possessions . Complex humanitarian disasters and emergencies are increasing due to international conflict or civil war . The coronavirus disease 2019 (COVID-19) pandemic further highlights the need for efficient and equitable disaster responses, particularly for those in low- and middle-income countries (LMICs) . The need for pediatric-specific considerations and care during disasters, particularly during humanitarian emergencies and in resource-limited settings with baseline high burdens of preventable illness and poor access to care , is an important and sometimes overlooked component of disaster response . Children make up a disproportionately large percent of the victims of natural or man-made disasters, largely due to their developmental and physical vulnerabilities . Children with complex health-care needs or disabilities are especially at risk . Catastrophic consequences for children due to gaps in preparedness and response have also been demonstrated in high-income (HIC) settings (ie, Hurricane Katrina). Previous articles called for better training for all levels of medical trainees and disaster relief workers to improve pediatric disaster preparedness . The Pediatrics in Disasters (PEDS) course is a 1-week, 10 module in-person or online training program designed for health trainees and professionals. It differs from other pediatric emergency trainings in that it targets LMICs and includes a comprehensive curriculum incorporating multiple facets of pediatric disaster response . The original course content resulted from a multi-institutional collaboration between the Center for Global Health (Colorado School of Public Health; CGH), the American Academy of Pediatrics, the Pan American Health Organization, the United States Military, and the Association for Health Research and Development; course aims and organization have been described previously . Please see for the PEDS course description . Box 1 Pediatrics in disasters course description [19] Abbreviations: LMIC, low- to middle-income country; PEDS, pediatrics in disasters course. The CGH has coordinated and implemented the PEDS course since its inception in 2008, and courses have taken various forms over the years. Between 2008 and 2013, training teams from CGH conducted 19 in-person courses for 730 participants in 12 LMICs . Internationally, between 2014 and 2018, 292 participants (262 from Kenya and 30 from Ghana) completed the course. 
Locally, in 2012, CGH began offering the PEDS course annually to University of Colorado (CU) graduate medical trainees (residents, fellows, and health professionals and public health students) and external health professionals, training 67 people during the 2012 to 2013 academic year. Additionally, since 2014, adapted PEDS course modules and small group materials have been available online through CGH ( http://cgh.mycrowdwisdom.com/diweb/start ) for asynchronous participation; 147 participants, including participants from Kenya and the Philippines, participated in the course exclusively online between 2013 and 2021. Due to a lack of funding, over the years there was a shift away from international course iterations and coordination with LMIC partners toward in-person, USA-based courses. The course also evolved to address changing audiences after inclusion in the CU medical student curriculum. Global events (such as the H1N1 pandemic) presented new funding opportunities if new course sub-aims and objectives were added to course content. These external factors affected the course over time, shifting away from its original aims (see ). Due to COVID-19-related disruptions to in-person Global Health Education (GHE) activities , in 2020, the annual in-person CGH PEDS course occurred completely virtually, with local CU or affiliate faculty leading the course for local virtual learners. In 2021, the CGH team offered the course via a novel mixed in-person and virtual hybrid structure, as CU had relaxed restrictions for in-person activities. For the first time, course directors invited international facilitators and students to participate in the course virtually as a pilot initiative. Planning for a hybrid, multinational course reopened discussion about how to collaborate with partners in LMICs and the value in updating course material for international audiences. Course directors used the 2021 course as an opportunity to update out-of-date information and prepare for a larger course revision to refocus the course back to its original aims (see ). This report describes our preparation processes for the novel 2021 PEDS hybrid course while conducting needed course updates based on nominal group technique and analysis of previous years’ course feedback. We aim to provide suggestions for future iterations of the course and to share lessons learned with global health educators seeking to transition from in-person to virtual/hybrid learning or to attract global participation in existing courses. In early 2021, our group conducted discussions with key stakeholders (current and previous facilitators, course directors, and CU faculty) regarding earlier course successes and challenges, as well as ideas for how to improve and revise the course. Involved parties participated in standard nominal group technique for structured brainstorming to identify ways to update and improve the course. Directors then organized feedback into “short-term updates” (for the 2021 course iteration) and “longer term updates” (to improve international applicability of the course moving forward). Course directors recruited international colleagues from CGH networks with prior experience or interest in the PEDS course to participate in the precourse content revision. CU and international faculty grouped into teams based on their module assignments or areas of expertise and communicated via email, WhatsApp messaging, and zoom to review course module content and feedback. 
Teams completed short-term or “low hanging fruit” updates for the 2021 hybrid course. Course administrative teams created materials and adapted course structure and start times to fit international audience and facilitator needs. Course directors also invited the international colleagues to virtually cofacilitate the 2021 hybrid course. The hybrid structure included lectures and interdisciplinary small group discussions available synchronously (livestreaming concurrent with in-person sessions) and asynchronously (daily recorded sessions available online). To assess overall participant and facilitator hybrid course satisfaction following the 2021 course, we conducted an online, anonymous postcourse satisfaction survey consisting of a Likert scale and free response questions stored via REDCap online cloud platform (Vanderbilt University, http://project-redcap.org ) . All course participants completed the postcourse satisfaction survey as a course requirement, while facilitators completed surveys on an optional basis. We obtained institutional review board approval from the Colorado Multiple Institution Review Board (University of Colorado, Aurora, CO, USA; #21–4090). We summarized descriptive statistics for participant demographics and satisfaction responses. We implemented thematic induction to draw conclusions about free-text responses to survey data querying the strongest and weakest aspects of the course as well as suggestions for improvement. In November 2021, course directors met and debriefed virtually with all international facilitators, whose feedback informed our longer term plan for course organization and revisions. Course directors also compared routine student postcourse feedback and daily knowledge assessments from the 2019 (in-person), 2020 (virtual), and 2021 (hybrid virtual and in-person) PEDS courses to identify common themes and describe performance trends. Course facilitation Course directors recruited 28 Colorado-based facilitators and 10 international facilitators (from Uganda, Kenya, United States, Ethiopia, Zambia, Peru, and South Africa), all of whom were involved with both module content updates and 2021 course facilitation. Six course modules had one international facilitator assigned to the team, whereas 4 modules had 2 international facilitators per module. Course directors conducted 2 facilitator orientation sessions before the 2021 course, organized and disseminated course content via Dropbox cloud storage ( https://www.dropbox.com/home ) accessible to multinational teams with different institutional access and established communication channels between local and international facilitators (primarily email and WhatsApp Messenger). Short-term revision planning Nineteen key stakeholders participated in the nominal group technique to provide overall course feedback and guide future course revisions. Before the 2021 course, module facilitator teams reviewed this information and successfully identified and integrated “short-term” content revisions for all 10 modules. In addition, each facilitator team identified module-specific revisions that were more appropriate for the bigger planned course revision to expand international applicability. Course organization, participation, and completion Regarding the 2021 course organization, there were 3 to 4 small group sessions per module, one of which was a small group exclusively for virtual participants. 
The international facilitator co-led the virtual small group session with an in-person CU facilitator to ensure smooth coordination should international facilitators have connectivity challenges. Before the course, administrators performed technological trial runs with the module teams. During the course, administrators recorded lectures and virtual small groups daily and posted them online (via Canvas Instructure, https://canvas.instructure.com/login/canvas ) for any asynchronous international participants. Regarding course participation, among 32 total participants, 21 attended in-person, 4 attended virtually, and 7 attended both in-person and virtually. Participants represented health professionals and trainees both locally (United States; 30/32, 94%) and internationally (Uganda, and Kenya; 2/32, 6%). There were a further 2 local asynchronous participants who engaged with the online-only course due to unavailability during the course week. Regarding course completion, 100% (30/30) of US-based participants completed the full course. Neither of the 2 virtual international participants did so, despite having all course content available asynchronously, noting on postcourse evaluations that they participated in approximately 25% to 50% of course activities. These participants reported they were unable to fully complete the course due to Internet connectivity issues, time zone differences, and lack of protected free time necessary to participate in synchronous sessions and access asynchronous content. All participants who completed the full course successfully passed daily student quizzes, with an average composite score of 86%. Postcourse satisfaction survey Thirty-seven students and faculty completed the postcourse satisfaction survey (31 students [response rate 98%], 6 faculty [response rate 16%]) . Course satisfaction was similar among all participants and facilitators, with 81% (n = 30/37) of respondents agreeing or strongly agreeing that the course materials were helpful and 89% (n = 33/37) agreeing or strongly agreeing that their overall course goals were met and that they received information that could be applied to their current or future professional activities. Regarding previous disaster response experience, 16.7% (n = 6/36) respondents indicated prior involvement with disaster planning before the 2021 course. describes results from the satisfaction survey on perceived strengths and weaknesses of in-person and hybrid course elements, including suggested improvements to the course. Positive themes included international faculty and participant presence and strong course organization with pertinent topics. Notably, participants appreciated the flexibility and broader audience captured by allowing a virtual option in the hybrid version of the course. Negative themes focused on technological limitations (such as lack of reliable Internet or not hearing speakers well on Zoom), not enough small-group discussion time to share international experiences, limited applicability of course to LMIC audiences (which was a previous complaint about the course) , and logistical coordination for virtual participants. One international facilitator commented, “We had to discuss with the videos off [due to poor Internet], so I did not see the participants. 
As a virtual facilitator, I found it a bit difficult to throw in comments between participants, as I could not tell how the flow was from one candidate to another.” 2019 to 2021 student course feedback describes a summary of 2019 to 2021 student course feedback, reflecting 30 participants in 2019 (in-person), 31 participants in 2020 (virtual), and 26 participants in 2021 (hybrid). We included common themes in the table if 2 or more participants mentioned the topic in their student feedback. Overall, there were similar trends in general course satisfaction year to year despite the in-person versus virtual versus hybrid formats. Course debriefing Finally, course directors conducted Zoom debriefs with most international facilitators (n = 9/10, 90%). Open and frequent communication with course directors and administrators was a positive theme that emerged, as was consideration of their time zone constraints. However, international facilitators would have appreciated being incorporated more in module presentations and moved around to different small groups, instead of always clustering with the virtual participants. One facilitator mentioned that the “virtual facilitator needs to be pushy—it is easy to not be heard or to speak when the rest of the group is in-person.” International facilitators also mentioned that the hybrid course could have benefited from “dress rehearsals” of shared module activities between cofacilitators and standard evaluation practices to guide regular quality improvement cycles.
Our results describe the 2021 implementation of a novel hybrid PEDS course during the COVID-19 pandemic and suggestions for general and virtual course improvements. Feedback themes were consistent between facilitators, students, and key stakeholders over several years’ worth of course feedback data. Overall feedback supported the importance of and interest in a pediatric disaster course for the global health community, and the consistency in feedback will guide course directors in prioritizing future revisions and iterations. Although virtual approaches to GHE are spreading rapidly and have been used worldwide in health care , there is limited current literature documenting enablers and barriers when pivoting from in-person to virtual formats, particularly from the perspective of LMIC partners . A further challenge is how to prioritize the needs of LMIC learners and colleagues while reinforcing general global health learner competencies . Descriptions of virtual GHE activities, such as our novel hybrid PEDS course, will inform future discussions and help establish best practice recommendations on virtual GHE. International facilitators The inclusion of international facilitators in the pilot hybrid course was deemed a positive contribution to student and faculty experiences, successful 2021 completion of minor content revisions, and collaboration for future course updates. However, while grouping participants and facilitators into virtual and in-person teams improved our 2021 day-of course logistics and coordination, a near-total separation of virtual participants and facilitators—who predominantly came from LMIC settings—diminished cross-national and cultural communication, exchanges, and engagement opportunities per student and facilitator feedback. Allocating dedicated time during future courses for international facilitators to interact with all participants may address this challenge. International participants We found that a hybrid course with an international audience requires thoughtful considerations for virtual engagement and course planning. First, some of the challenges reported by international participants included a lack of stable Internet connections, different time zones, and lack of protected time. Course directors’ attempts to mitigate these challenges, such as recording and posting sessions online daily, did not overcome the challenges to participation. Future courses may consider offering virtual-only or in-person-only options for participants with mixed in-person/virtual facilitators, customizing online content for access in lower bandwidth regions, or dividing course modules during a several-week block to better accommodate those in LMIC settings with more limited protected educational time.
Second, students consistently reported that hands-on sessions were a major strength of the course. These practical sessions must continue to be a focus of the course, and reasonable alternatives for virtual or low-bandwidth participation must be found, such as high-quality virtual disaster simulations or tabletop drills. Finally, courses conducted within the framework of global health partnerships should discuss how to support needed technology upgrades for LMIC partners to ensure effective participation despite technological limitations . Our team hopes to further mitigate challenges to international participants by following new guidance on standardized virtual GHE communication and logistic practices , continuing to provide regular and multimodal communication from a dedicated administrator, and including adequate technological planning, trial-runs, and support to ensure positive experiences for everyone involved in the course. Incorporating multidisciplinary audiences Another theme identified from discussions with key stakeholders and from 2019 to 2021 participant feedback is how best to incorporate public health students and professionals, a growing audience for the course. Per student feedback, there was a notable positive incorporation of public health students in the virtual 2020 course, which may reflect the fact that facilitators actively sought public health student participation in the virtual small groups. There is a need to balance the course aims and heavy medical content load with identified educational needs for public health personnel and the benefit of having multidisciplinary learner groups, which mimics real-life field experiences. Potential solutions could be creating different small group sessions for public health versus clinical learners, alternative case studies aimed at one specialty versus the other, division of small group participants for case discussions with later presentations to the larger group, or the inclusion of a finite number of case scenarios per module, each discussing a different public health or medical issue. In addition, although we will continue open enrollment in the course for any CU health professional or student, moving forward we plan to more clearly iterate our course aims and expectations for nonmedical participants. Course evaluation and improvement Regarding course evaluations, several stakeholders and facilitators mentioned a need for standard evaluation practices to maintain course quality, effectiveness, and applicability. Future PEDS course evaluations should permanently add student and facilitator satisfaction surveys, using this feedback to regularly (perhaps on a 2–3-year basis) guide standard quality improvement cycles for course updates and changes. Further, directors should reincorporate a 6-month postcourse survey to assess new or ongoing involvement in disaster-related activities. Our course data indicated relatively few participants engaged in disaster planning activities before the 2021 course, and we do not yet know how many will seek out or be asked to participate in such activities in the future; knowing how this rate evolves over time (and what, if any, effect the course had on the participants’ decision to engage in disaster planning) would be helpful to assess real-world course impact. 
Finally, analyzing differences on course surveys and feedback between facilitators and participants from HIC versus LMIC settings may help improve course quality and sustainability, advocate for securing course funding, or help integrate the course into institutions or ministries of health responsible for disaster response . Limitations Our program evaluation had several limitations. First, although we were able to examine student and facilitator feedback over several years, the 2 major changes between the 2020 and 2021 courses (virtual to hybrid format plus content revisions) versus 1 major change between the 2019 and 2020 courses (in-person to virtual format) may have changed participant and facilitator experiences in a way that makes them incomparable. Despite this, we think that the consistency in feedback themes provides clear areas for improvement and ideas for future courses. Second, in 2021, there was a paucity of data from faculty (only students were required to complete the postcourse satisfaction survey) and international students (due to their inability to fully participate). Third, we did not include a precourse knowledge assessment in the 2020 virtual or 2021 hybrid courses, which limited our ability to show measurable knowledge gains; we do plan to reincorporate these moving forward. Finally, we did not measure the impact of the PEDS course on real-world disaster planning and response, a limitation we share with previous articles .
Due to the COVID-19 pandemic, our group developed a hybrid PEDS course, which successfully incorporated international facilitators and participants from LMIC settings. An evaluation of previous and current course feedback highlighted challenges and areas for improvement to future in-person and virtual course iterations. Lessons learned from our novel hybrid PEDS course may inform future approaches for virtual adaptation of GHE activities suitable for global dissemination, particularly within multinational global health groups and partnerships. Funding for salary support was made possible by the CGH (Aurora, CO) . This funding source had no role in the design of this study, during its execution, analyses, or interpretation of the data. L. Umphrey conceived the article idea. L. Umphrey, J. Wathen, and S. Berman created the original postcourse survey. A. Chambliss, M. Moua, L. Umphrey, J. Wathen, and S. Berman contributed to overall course data collection. A. Chambliss, L. Morgan, and L. Umphrey performed literature review. K. Kalata performed analysis on student feedback and survey data. All authors contributed to data analysis and synthesis. L. Umphrey created first manuscript draft. All authors contributed to article editing and finalization.
Cardiac Anesthesia Intraoperative Interpretation Accuracy of Transesophageal Echocardiograms: A Review of the Current Literature and Meta-Analysis
312c4c52-25cd-4cd8-8a47-53f64e1a6d84
10086216
Internal Medicine[mh]
In the recent literature, cardiology-based training in different procedures and techniques has garnered considerable attention. As of 2019, over 90,000 physicians in the United States specialized in cardiac-based procedures and interpretation. One such procedure is echocardiography, and physicians who specialize in interpreting echocardiograms are called primary echocardiographers, a group that includes cardiologists and radiologists. There are 22,521 active physicians practicing in cardiology and 28,025 in radiology. In comparison, there are only 1667 anesthesiologists who practice cardiac anesthesia as a subspecialty. Subspecialization in cardiac anesthesiology requires at least 4 years of training in an anesthesiology residency program and at least 1 year of a cardiac anesthesiology fellowship. During their residency and fellowship years, most anesthesiologists will be trained in the use of echocardiography. One such type of echocardiography is the transthoracic echocardiogram (TTE), in which a handheld transducer is placed on the chest wall. Although other forms of echocardiography exist, such as intracardiac echo and stress echo, the transesophageal echocardiogram (TEE) is often the preferred approach in the perioperative setting. Compared to TTE, a transesophageal echocardiogram can be more sensitive at identifying etiologies of an embolic stroke. One study suggests that TEE may be more suitable than TTE for detecting infective endocarditis. Transesophageal echocardiograms can assess the heart’s function and detect signs of atherosclerosis, cardiomyopathy, heart failure, and more. This is because an ultrasound probe is guided into the esophagus, providing a closer view of the heart. Interpreting TEEs has a significant impact throughout perioperative care, informing proper diagnosis. Although cardiac anesthesiologists, cardiologists, and radiologists are all trained in interpreting transesophageal echocardiography, the majority of perioperative TEEs are performed by cardiac anesthesiologists. A study by Poterack found that, of 98 institutions surveyed, 54% have anesthesiologists in charge of TEE interpretation. Therefore, it is of utmost importance that cardiac anesthesiologists are well-trained in these procedures. TEE specifically has seen major growth in terms of technology, use, and indications since its introduction to the medical community nearly half a century ago. These advancements include the increase in TEE use from 29% in 2009 to 45% in 2011, and upgrades in technology such as 3-D TEE systems. 3-D TEE imaging has been shown to improve the detection of infective endocarditis in a study by Chahine et al. Additional advancements include continuous TEE monitoring, strain imaging, and diastolic function assessment. These advancements have also increased the complexity of the procedure itself. For this reason, diagnostic evaluation of TEE exams may vary considerably depending on who performs the procedure and the expertise of the examiner. Despite the active role that cardiac anesthesiologists have in the perioperative setting, there is limited literature assessing their ability to interpret intraoperative TEE. In this paper, we conduct a systematic literature review to assess the effectiveness with which cardiac anesthesiologists interpret TEE examinations compared to primary echocardiographers, such as cardiologists and radiologists. The PRISMA systematic review model was used to execute this study and identify relevant literature.
A comprehensive search of the MEDLINE database (PubMed) was used to identify articles for our study. Step 1 consisted of a broad keyword search using the phrases “Cardiology Anesthesiology Echocardiogram” and “Echocardiography Anesthesiology”, which produced 1114 and 684 articles, respectively, dating from 1952 to 2022. The criteria for inclusion and exclusion, including but not limited to the requirement that articles be written in English, are shown below in . From the search, a total of 363 articles were included based on the relevance of the title ( , step 1), and duplicates were then removed ( , step 2). The remaining articles were then screened based on their abstract ( , step 3). The last step was reading the full article to determine which publications would be used in the study ( , step 4). This process yielded a combination of quantitative and qualitative information amounting to a total of 9 relevant articles for our topic of interest. Three researchers carried out the procedures to obtain the final sample, and the investigation team agreed on the final selection of the literature . After assembly of the 9 articles, they were divided according to whether they contained quantitative or qualitative data. Three contained quantitative data on the accuracy of cardiac anesthesiologists’ TEE readings. Accuracy is defined as the degree to which cardiac anesthesiologists’ TEE interpretations agreed with those of primary echocardiographers. The quantitative studies examined different parameters as part of the TEE procedure and also used different methods to assess accuracy: Cohen’s kappa coefficient and high-fidelity videotape evaluation were the methods of analysis used to evaluate the accuracy of interpretation of these parameters. The number of correctly interpreted TEEs and the total number of TEEs were obtained from each of the three quantitative studies. These numbers were then used to calculate the mean accuracy of interpretation across all TEEs, representing the overall accuracy of cardiac anesthesiologists . The PRISMA systematic review yielded 3 quantitative studies and 6 qualitative studies, for a total of 9 relevant studies. The three quantitative studies contained comparisons between cardiac anesthesiologists and radiologists, cardiac anesthesiologists and cardiologists, and cardiologists and radiologists. Mathew et al reported the concordance rate of TEE interpretations among cardiac anesthesiologists, cardiologists, and radiologists. In that study, radiologists interpreted the same number of TEEs as cardiac anesthesiologists; for this reason, we decided to compare anesthesiologists to radiologists in this study. They found that anesthesiologists with less than 5 years of experience underestimated left ventricular fractional area change (FAC). On the other hand, anesthesiologists with greater experience had higher levels of concordance with radiologists, particularly in the assessment of the aorta, right atrium, pulmonary vein flow, and transmitral flow. Furthermore, cardiac anesthesiologists correctly interpreted 83% of TEEs when compared specifically to radiologists; out of 2464 TEE exams, this comes to a total of 2045 correctly interpreted TEEs. Nevertheless, the comparisons between anesthesiologists and cardiologists (80% concordance) and between cardiologists and radiologists (82% concordance) were all similar.
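Cohen’s kappa, mentioned above as one of the agreement measures, corrects observed agreement for the agreement expected by chance. The short Python sketch below shows the standard two-rater calculation; the example ratings are hypothetical and are not data from the reviewed studies.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical paired interpretations from an anesthesiologist and a primary echocardiographer.
anesthesiologist = ["normal", "abnormal", "normal", "normal", "abnormal", "normal"]
echocardiographer = ["normal", "abnormal", "normal", "abnormal", "abnormal", "normal"]
print(round(cohens_kappa(anesthesiologist, echocardiographer), 2))   # -> 0.67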
The study by Mishra et al reported the concordance between online interpretation by cardiac anesthesiologists and offline analysis by cardiologists. This study specifically examined left ventricular regional wall motion, valve function, and left and right ventricular function. 3620 out of 4161 TEEs were correctly interpreted by the cardiac anesthesiologists, amounting to an accuracy rating of 87%. Although this study did not state the number of anesthesiologists involved, they examined 3217 TEEs in a group of patients who underwent coronary artery bypass grafting and 629 TEEs in a group of patients who underwent valve procedures, yielding a total of 3846 TEEs that were interpreted. The final quantitative study by Miller et al compared the performance of anesthesiologists to an expert cardiologist in recording and interpreting TEEs. Parameters measured in this study included the size of the heart chambers, FAC, and degree of stenosis or insufficiency of heart valves. They found that their cardiac anesthesiologists correctly interpreted 1242 out of 1572 TEEs, a 79% accuracy rating. As indicated in , these three studies totaled 8197 interpreted TEEs by cardiac anesthesiologists, 84% of which were correctly interpreted. The American Society of Echocardiography suggests that non-cardiologists such as radiologists and cardiac anesthesiologists who provide optimal TEE services should ideally undergo 6 months of full-time training in an active echocardiography training institution. They recommend being involved in 300 total TEE exams, performing at least 150 of those exams, and completing 15 h of TEE-related activity within 3 years per Continuing Medical Education (CME) standards. Thus, all physicians who were not formally trained in TEE should adhere to these standards. It may also be advisable to consider facilitating close interactions between cardiac anesthesiologists and cardiologist or radiologist echocardiographers, at least in the initial training phases. In our study design, we chose to compare the evaluation of TEE studies by attending anesthesiologists with that of primary attending echocardiographers, either cardiologists or radiologists. A prospective observational cohort study performed between 1993 and 1997 evaluated TEE as a safe and reliable technique during cardiac surgery. 3217 TEEs were administered to 944 patients who underwent coronary artery bypass grafting (CABG) procedures, and another 629 TEEs to 142 patients who underwent heart valve procedures. The attending anesthesiologists who performed the TEE had a minimum hands-on experience of performing and interpreting 500 TEE studies each. Although the study did not disclose the number of anesthesiologists included, it found a rather high concordance between anesthesiologists and cardiologists (87%). This suggests that anesthesiologists can interpret and perform TEE studies in a manner comparable to that of cardiologists. Another study, done at Duke University Medical Center, assessed the concordance of TEE interpretation in a continuous quality improvement (CQI) program. In this study, 10 cardiac anesthesiologists conducted a total of 154 TEE studies that included the estimation of FAC using Bland-Altman methods. Fractional area change is a measure of right ventricular systolic function. It is clinically significant because it can be used to measure any impairments to right ventricle function, such as after a pulmonary valve replacement.
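Bland-Altman analysis, used in the Duke CQI study above to compare FAC estimates, summarizes agreement between paired measurements as a mean difference (bias) and its 95% limits of agreement. A minimal sketch is shown below; the paired FAC values are hypothetical and simply illustrate the kind of underestimation described, not data from the study.

import statistics

def bland_altman(readings_a, readings_b):
    """Return the bias and the 95% limits of agreement between paired readings."""
    diffs = [a - b for a, b in zip(readings_a, readings_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical FAC estimates (%) for the same studies by an anesthesiologist and a radiologist.
anesthesiologist_fac = [42, 55, 38, 61, 47, 50]
radiologist_fac = [45, 57, 41, 60, 50, 53]
print(bland_altman(anesthesiologist_fac, radiologist_fac))   # negative bias indicates underestimation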
All 154 of the TEE studies were reviewed by radiologists, 50 of which were also reviewed by cardiologists. Cardiac anesthesiologists were found to underestimate the FAC when compared to radiologists, especially if the anesthesiologist had less than 5 years of TEE experience. Anesthesiologists with more experience, however, were found to have higher levels of concordance with the radiologists. Ultimately, the high levels of concordance of anesthesiologists with radiologists (83%) and cardiologists (80%) suggest that anesthesiologists are proficient in TEE interpretation. A prospective study done at the Madigan Army Medical Center evaluated the ability of anesthesiologists to perform and interpret TEE after revisions were made to their examination protocol. Namely, these revisions entailed going from a standard 10-view TEE examination to a 12-view examination, in which 8 views were from the original protocol and 4 were assessed with color Doppler. Eight cardiac anesthesiologists performed 135 TEE examinations, which were then compared with a final expert evaluation by a cardiologist, yielding an accuracy of 79%. Although this is considerably lower than in the other studies we analyzed, it is inclusive of TEE examinations with omitted diagnoses (blanks on evaluation sheets); had these examinations not been included, the rate of correct interpretation would have been 94%. A study done at Aarhus University Hospital has shown that anesthesiologists are capable of providing valuable information when interpreting TEE. TEE was successfully performed in 525 children undergoing cardiac surgery, and according to the results, interpretations of TEE performed by anesthesiologists resulted in a total of 184 alterations to treatment in 143 patients. Additionally, anesthesiologists’ interpretations added new information in 37% and decisive information in 8% of all the TEEs interpreted. Although our study indicates how effective anesthesiologists can be in perioperative care, multiple studies have shown that experience and training remain valuable both in carrying out the TEE procedure and in interpreting the results. One study compared the length of time it takes to obtain a TEE exam and the accuracy of interpretation between certified anesthesiologists and anesthesiology residents. Attending physicians and residents were recruited from both the Vanderbilt School of Medicine and The Icahn School of Medicine at Mount Sinai, for a total of 15 residents and 11 attending physicians. Participants were required to obtain 10 standard views using TEE. The certified anesthesiologists interpreted 5 of the 10 images better than the residents, whereas their performance on the remaining 5 views was comparable to that of the residents. Results also indicated that certified anesthesiologists were able to acquire TEE images more quickly, suggesting that experience is necessary to become a proficient echocardiographer. A study done at Mahidol University concurred with this by showing improvement in acquiring TEE images as the procedure was performed more often. An additional study performed at The Icahn School of Medicine at Mount Sinai suggested that more experienced anesthesiologists scored higher on multiple-choice questions involving TEEs. Evidently, experience in echocardiography improves both the theoretical knowledge and the practical application of the skills involved in TEE. Limitations to our study include the circumstances of assessment in our quantitative studies.
Specifically, comparisons were made between on-line assessments by anesthesiologists and off-line assessments of the primary echocardiographers. It is plausible that there could have been a higher level of agreement between the two groups if they interpreted TEEs under the same circumstances. For example, there may have been higher concordance if the anesthesiologists evaluated TEE results after operation. Another limitation to our study is that most of our quantitative data were published nearly 20 years ago. If these studies were to be done today, it may be the case that we would see higher concordance between cardiac anesthesiologists and primary echocardiographers, especially because of the guidelines that were established since then. Another notable limitation of this study is that there were variations in the gold standard for interpreting TEEs. Some studies used expert echocardiographers as the gold standard, while others relied on the degree to which there was consensus amongst attending echocardiographers. Quality Improvement A possible method of improving clinical evaluation is by refining current indications for the use of echocardiography. For example, echocardiography currently plays a major role in the diagnosis and management of infective endocarditis (IE) as part of Duke’s criteria. However, many patients are initially misclassified even though IE is a life-threatening emergency. This is partly because a negative echocardiogram does not rule out IE and a false-positive result is not unusual with these tests. The fault here is not so much in the conductor of the test, but the test itself. Therefore, in these cases, it may be worth considering other imaging techniques. An 18F-FDG PET/CT scan has instead shown promising results with these patients. Based on the studies presented, it is clear that anesthesiologists have an important role in the perioperative stages of patient care by performing and interpreting transesophageal echocardiograms. With continuous quality improvement, cardiac anesthesiologists are shown to function at a level equivalent to that of primary echocardiographers. The implementation of software programs to routinely test physician TEE skills and the implementation of standardized AI interpretation as a possible gold standard are noteworthy considerations for future investigation.
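As a closing numerical check, the pooled 84% accuracy cited in the results can be reproduced from the per-study counts quoted earlier (2045/2464, 3620/4161 and 1242/1572 correctly interpreted TEEs). The snippet below simply re-derives that pooled figure; it is a verification aid, not part of the original analysis.

studies = {
    "Mathew et al.": (2045, 2464),   # correctly interpreted TEEs, total TEEs
    "Mishra et al.": (3620, 4161),
    "Miller et al.": (1242, 1572),
}
correct = sum(c for c, _ in studies.values())
total = sum(t for _, t in studies.values())
print(total, round(100 * correct / total, 1))   # 8197 TEEs, ~84.3 % correctly interpreted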
The Asia-Pacific Gynecologic Oncology Trials Group (APGOT): building a Pan-Asian and Oceania women’s cancer research organization
3acc3642-a957-434d-8d33-bcc3d198012b
10086504
Gynaecology[mh]
With a population of 4.4 billion, the Asia-Pacific region accounts for more than half of the world’s population. Moreover, it is the largest region in the world by total gross domestic product (GDP). Nonetheless, significant disparities in healthcare infrastructure and provision exist across the region due to the diversity of socio-economic resources in each country. Overall, the burden of cancer incidence and mortality is rapidly growing in Asia. Furthermore, according to global cancer statistics, Asia accounted for one-half of all new cancer cases and cancer deaths in 2020. The Asia-Pacific Gynecologic Oncology Trials Group (APGOT) was founded in November 2019 to bring attention to this issue. ‘The ultimate goal of the APGOT is to provide the best treatment to patients with gynecologic cancers on the basis of robust scientific evidence and enable every patient in every Asian-Pacific region to access a clinical trial.’ The APGOT comprises a research network of international and regional clinical trial units that coordinates and promotes clinical trials within the Asia-Pacific region for patients with gynecologic cancers. This coordination is particularly relevant for academic clinical trials, translational research, research on rare diseases, and industry-sponsored clinical trials seeking to perform multicenter international studies in the Asia-Pacific region.
Organizational Structure, Executive Committee, and Secretariat
The APGOT is overseen by an Executive Committee, consisting of representatives appointed by the respective chair of the parties. The Committee shall be collectively responsible for providing overall direction, shall be the decision-maker on all major issues concerning the acceptance, management, budgeting, conduct, authorship, and publication of APGOT studies, and shall oversee the financial and administrative obligations to ensure the funds received meet all governance requirements. The secretariat may be a staff member of any party and shall be appointed by the chairperson of the Executive Committee. The secretariat will help the chairperson and the Executive Committee organize and carry out biannual meetings, as well as maintain collaborative communications and activities among the various APGOT member groups and sites.
Member Groups
Members of APGOT comprise clinical research groups and sites in the Asia-Pacific region focusing on gynecologic oncology. Founding members include the Gynecologic Cancer Group Singapore (GCGS), Gynecologic Oncology Trial and Investigation Consortium (GOTIC), Korean Gynecologic Oncology Group (KGOG), and Australia New Zealand Gynaecological Oncology Group (ANZGOG). The APGOT member groups have now expanded to include the Kolkata Gynecological Oncology Trial and Translational Group (KolGOTrg), Taiwanese Gynecologic Oncology Group (TGOG), and Shanghai Gynecologic Oncology Group (SGOG). New study groups (or sites) need to apply with information on their capability for conducting trials and on their management. APGOT accepts applications from clinical research networks of the Asia-Pacific region as well as from high-performing individual research institutions in the region where the formation of research groups may be challenging due to resource constraints.
Operation Process
Any member of the APGOT may propose and submit a new study proposal. The Committee will decide on the feasibility of the proposal. Priority and eligibility for authorship will be given to those investigators at sites that have contributed data.
Badging of an APGOT trial requires that the following criteria are met:
- The proposal comes from one of the APGOT member groups
- The proposed study includes at least two or more APGOT member groups or institutions contributing in terms of protocol development, patient recruitment, or translational research efforts
- Industry collaboration is welcome as per the APGOT Industry Partnership Model, which is similar to the European Network for Gynaecological Oncological Trial groups (ENGOT) model
Badging is contingent on approval by the Executive Committee.
Meetings and Communication
The Committee shall meet at least once every 6 months during the calendar year. The Committee’s business meeting shall be held biannually, in person or online. Official meetings for all APGOT members consist of the ‘New trial proposal meeting’ and the ‘Trial update meeting’, which are held biannually. The Executive Committee meetings may be held concurrently with the official meetings. The owner of these meetings is the clinical chair, and the operation chair or secretariat from the APGOT office sets up and announces the meetings. All matters discussed by the Committee and all materials distributed to the Committee should be treated as confidential and not circulated outside the Committee unless otherwise stated.
For industry partnerships, there are three possible models. In option 1, trials are led by an academic institution and grant-funded, with the database residing at the lead institution. De-identified data might be shared with other APGOT academic groups and with the industry partner that provided the drug or device for the study. Option 3 refers to industry-sponsored trials, which may also be academically initiated but are fully funded and sponsored by the industry partner. APGOT can play a key role in option 3 studies, for example in protocol development, feasibility assessment, and site selection in member countries on behalf of the industry partner. Here, the industry partner might operate a database and share the data with APGOT for further evaluation. Finally, option 2 is a hybrid of the two, in which the study is academically led and sponsored but fully financed or co-financed by the industry partners. In this case, the lead academic group might operate a database and subsequently share it with the industry partner. There is a growing number of trials with APGOT badging, and we are also actively participating in other groups’ trials. More information on the APGOT trials is available at: http://apgot.org/bbs/content.php?co_id=clinical01
As new member groups have joined, the number of proposals for new trials at the biannual APGOT meeting has increased. The APGOT continues to contribute to international and regional clinical trials by coordinating and promoting clinical trials within the Asia-Pacific region for patients with gynecologic cancers. There are multiple well-established research groups, such as the Gynecologic Cancer InterGroup (GCIG), ENGOT in Europe, and the Gynecologic Oncology Group (GOG) Foundation in the USA. The APGOT hopes to be an active participant and collaborator with these networks to improve the outcomes of patients with gynecologic cancers worldwide.
Endostructural and periosteal growth of the human humerus
9509b699-95a6-4d52-96cf-35f50effd314
10086792
Anatomy[mh]
INTRODUCTION
This article sets out to understand the fashion in which localized cortical thickness, biomechanical resistance to torsional stress, localized surface curvature, and overall diaphyseal curvature vary throughout ontogeny, and the degree to which these factors co-vary (or not). As such, in this section, we first briefly summarize the state of knowledge of the ontogeny of long bones, and specifically the human humerus. We then focus on previous work on juvenile long-bone biomechanics, including cross-sectional geometry and diaphyseal curvature. Finally, we look at the advantages of virtual morphometric methods such as geometric morphometrics (GMM) and “morphometric mapping” (Zollikofer & Ponce de León, ) and how these can be of specific use for the analysis of external morphology and endostructural variation of long bones throughout growth.
1.1 Why the humerus?
The humerus is a major long bone in the human body that, after the acquisition of bipedality, is not involved in habitual locomotion. As such, differences between groups can often be larger than those in the bones associated with walking (e.g., the tibia and femur) (e.g., Churchill, ; De Groote, , ; Pearson et al., ; Ruff, , ), as the humerus is not strictly a weight-bearing bone but potentially responds to a diverse range of biomechanical stresses generated by upper limb usage. It grows in the same manner as other long bones, however, with the shaft being the major portion that grows (Gray & Gardner, ). Approximately 80% of humerus growth occurs at the proximal growth plate (Pritchett, ), as can be seen by the distinctive shape the distal portion assumes even at very early embryonic stages. Prior to birth, it is assumed that much of the form of the humerus is genetically pre-programmed, as the forces it withstands in utero are not as high as those experienced by the tibia and femur (which withstand greater forces due to the kicking reflex) (Verbruggen et al., , ). Aspects of the development of this bone appear to be linked to the expression of two genes, Collagen X and Indian hedgehog, which work in tandem with the biophysical stimuli of embryonic muscle contractions (Nowlan et al., ), and knockout mouse models suggest that the development of musculature in this region plays an important role in the normal development of the humerus (Nowlan et al., ). Examination of individuals who have continued with normal postnatal musculoskeletal development in this region allows us to understand possible developmental pathways. It can also allow a more nuanced understanding of how pathological conditions may manifest themselves than a simple visual analysis can provide.
1.2 Some previous developmental studies of long bone size and shape
There is debate as to how much of the internal structure of the long bones is dictated by either genetics or behavior. Long bones are often modeled by morphologists as straight cylinders so that their key functional properties, such as resistance to bending, torsion, and so forth, can be assessed by means of simple analyses of cross-sectional shape. Unfortunately, not all long bones are straight cylinders, and in fact, correction for curvature can alter estimates of bone stiffness by up to 15% (Brassey et al., ).
Much of the work on cross-sectional geometry of juvenile long bones has built on a range of studies, including analyses of adult hominin and primate skeletal material (e.g., Churchill, ; Davies & Stock, ; Niinimäki et al., ; Rhodes & Knüsel, ; Ruff & Trinkaus, ; Shaw et al., ; Trinkaus et al., ) as well as experimental work in non-primates (e.g., Lieberman et al., , ) and in living humans (e.g., Nikander et al., ; Shaw & Stock, , ). To understand the functional significance and also the variation of adult long bone morphology, it is important to ascertain the stages within the growth trajectory at which morphological change occurs and their significance (Cowgill, ; Gosman et al., ; Ruff, , ; Smith & Buschang, ). To this end, several studies have looked at the growth of human long bones using different techniques, namely: histology (Cambra-Moo, ; Kember & Sissons, ; Maggiano et al., ); radiography (Tanner, ); intermembral indices and/or cross-sectional geometry (e.g., Cowgill, ; Gosman et al., ; Harrington, ; Kondo & Dodo, , ; Osipov et al., ; Ruff, , ; Ruff et al., ; Ruff et al., ; Trinkaus et al., ; Zilhão & Trinkaus, ); GMM (Frelat & Mitteroecker, ); and analysis of torsion (Cowgill, ). In very early development, normal stresses will enable the genetically programmed process of bone formation to occur, but abnormal stresses, whether caused by external factors (e.g., pathogens attacking the mother, severe malnutrition of the mother, and trauma) or internal genetic factors such as deleterious mutations, will interfere with this program of bone formation. For this reason, it is important in baseline studies to study nonpathological material for the creation of reference sequences. Cowgill was able to demonstrate that observable differences in population-level cross-sectional properties at 50% of maximum length (percentages of maximum length are hereafter referred to as increments) of both the humerus and femur existed very early, often before 1 year of age. This suggests that a complex long-term interplay between population genetics and environment influences both long bone robusticity and bone shape. It is apparent, however, that analysis of just the cross-section of the humerus at the 50% increment may miss variation and patterns that may be biomechanically meaningful. To this end, it is increasingly common for studies to analyze several locations throughout the diaphysis (Churchill, ; Davies & Stock, ; Niinimäki et al., ; Rhodes & Knüsel, ; Shaw et al., ). However, with the decreasing cost and increasing resolution of imaging modalities (which broadly follow an equivalent of “Moore's Law”), it is increasingly desirable to attempt a holistic “whole bone” approach. Davies and Stock examined the periosteal contours of adult long bones from laser scans in order to extract cross-sectional properties, examining 1% longitudinal increments. Shaw et al. examined 5% increments of the femur to examine sex differences in cortical structure. Morimoto et al. analyzed the entire diaphysis of femora from chimpanzees ranging from infant to adult stages using a technique previously dubbed “morphometric mapping” (Zollikofer & Ponce de León, ). This technique borrows broadly from the techniques used in functional brain imaging and enables the quantification of surfaces that are relatively landmark free, such as immature long bone diaphyses.
This technique, using a different statistical treatment, was also used by Puymerail (and also Puymerail, Ruff, et al., , Puymerail, Volpato, et al., , Puymerail et al., ; Ruff et al., ) in the exploration of the differences between the adult femur of Homo sapiens , Homo neanderthalensis , Homo erectus , and Pan troglodytes . In this study, using automated extraction of percentage cortical area, second moment of area, and maximum/minimum second moments of area (Imax/Imin), we aim to bring more fine-grained data to bear on the question of structural differentiation of areas of the humerus throughout growth. Development of humeral curvature has recently been studied by Hambücken. She looked at the curvature of the humerus in the medial view throughout development in a medieval cemetery sample and concluded that the humerus tends to start development as a relatively straight bone, with a convex posterior curvature manifesting itself after the age of 1 year. This tends to persist to around 12 years of age. At the age of around three and a half years, a distal curvature can be observed as well; however, a definite curvature is not fully apparent until skeletal maturity is reached (Hambücken, ). More work on further samples is needed to establish whether these observations can be generalized to multiple populations. This article will approach this question through a GMM workflow which, although yielding slightly different results, should be complementary in interpretation.
1.3 Virtual morphometric approaches
1.3.1 GMM approaches
The suite of statistical techniques associated with GMM has proved to be extremely useful in distinguishing group affiliation in multiple species and multiple anatomical regions (see e.g., Adams & Otarola-Castillo, ). The adoption of GMM analysis in the study of the growth of juvenile long bones has, however, been less common, as young long bones largely lack recognizable Type 1 or Type 2 landmarks. Authors including Morimoto et al. have proposed the extraction of “pseudo-landmarks” at predefined intervals to overcome this lack. This is especially appropriate to the study of diaphyseal morphology here, as it means that the morphology is easily subjected to standardized GMM workflows after extraction of the pseudo-landmarks. Morimoto et al. applied this workflow to the diaphysis of juvenile chimpanzee femora and demonstrated that this approach yielded very satisfactory results. We aim to further demonstrate the utility of this approach in the extraction and analysis of pseudo-landmark data in our samples (detailed below).
1.3.2 Landmark free approaches
Where properties such as localized cortical thickness or surface curvature are of interest, the approach dubbed “morphometric mapping” has been suggested (Zollikofer & Ponce de León, ). This technique borrows inspiration from functional brain imaging, where localized differences in thickness (in our case, cortical thickness) are highlighted on a three-dimensional model through colorization using a scaled “heatmap.” As the diaphysis of a long bone is broadly cylindrical in form, the heat map can be “unzipped” and “unrolled” (Bondioli et al., ; Morimoto et al., ; Zollikofer & Ponce de León, ) and projected to a two-dimensional (2D) graphical representation (the “map”) with relatively little distortion in the humerus. Here we look at two characteristics: cortical thickness and radial curvature of the periosteum.
Cortical thickness is measured radially at each slice from the per-slice centroid of each cross section of the diaphysis, following Jashashvili et al. The technique of Bondioli et al. made measurements from the linear medial axis, which can introduce distortions in thickness measurements (Dupej et al., ). The use of simple linear rays for this measurement (rather than the tangents of the internal points used by Morimoto et al., ) is appropriate for the humerus, as the cross section is close to circular. It also reduces computational complexity. The code describing this is available in Supplementary Information.
Periosteal curvature is a description of the shape of the external contour of the bone surface at each slice. As such, it can give a fine-grained description of the entheseal markings present on a bone. We use the elliptical Fourier descriptor of the surface curvature, after Morimoto et al. ( , 2018), using the formulae described by Kuhl and Giardina. Here a closed line $L$ has the $x$ and $y$ coordinates of its points expressed as functions of a total path length $t$:
$$L(t) = \big(x(t),\, y(t)\big).$$
We then decompose the $x$ and $y$ coordinates separately using Fourier analysis with the following equations:
$$x(t) = A_0 + \sum_{n=1}^{N} a_n \cos(nt) + \sum_{n=1}^{N} b_n \sin(nt),$$
$$y(t) = C_0 + \sum_{n=1}^{N} c_n \cos(nt) + \sum_{n=1}^{N} d_n \sin(nt).$$
Once these are obtained, a coefficient of surface curvature, $k$, is calculated:
$$k(t) = \frac{x'(t)\,y''(t) - y'(t)\,x''(t)}{\left(x'(t)^{2} + y'(t)^{2}\right)^{3/2}}.$$
These coefficients are then assembled into a matrix. Areas of high curvature (e.g., spikes) will have high values of $k$ (either positive or negative, depending on whether the curvature is concave or convex). Areas of low curvature (i.e., tending toward a flat line) will have values of $k$ closer to zero. The code describing this is available as Supplementary Information.
1.3.3 Size standardization of morphometric maps and minimization of intermap distance
As the resulting projection is a 2D matrix, the underlying data can be subjected to standardization and statistical analysis. There are several options for this, of which we shall describe the three most popular ones. All rely on an equal number of samples between all objects of interest, that is, the same number of slices are sampled and, in each slice, the same number of measurements are taken.
Option 1: Morimoto et al. suggested standardization of the matrix to its median value. The matrix elements then undergo a discrete Fourier transform using the formula
$$Y_k = \sum_{j=1}^{n} X_j\, W_n^{(j-1)(k-1)}, \qquad W_n = e^{-2\pi i/n},$$
where $W_n$ is one of the $n$ roots of unity (both equations from Frigo & Johnson, ). This transform returns both the complex and the real component of each vector. The complex component of the vector is discarded for subsequent calculations. The Fourier-transformed matrices are then compared to their group means (e.g., for age group 0–3 months, the mean of all matrices) and the matrices undergo a circular shift (i.e., columns are moved from the start of the matrix to the end) in order to minimize the distance from the mean. This has the effect of minimizing user error in the original orientation of the stacks (Morimoto et al., ). One can also do this iteratively, that is, create a group mean, shift the matrices toward the mean, create a new mean of the shifted matrices, and repeat the shifting.
Option 2: Puymerail et al. suggested standardizing the matrix values to positive values between 0 and 1. A thin plate spline regression following Wood is used to align all the maps.
Option 3: One could keep the raw measurements and not standardize the data at all. This approach is followed by Lacoste Jeanson et al., who argue that these measurements are highly correlated with body mass, and that to standardize in this fashion may mask intergroup differences.
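The two per-slice measurements described above can be made concrete with a short sketch. The Python below is illustrative only (the authors' actual routines were written in Matlab and are in their Supplementary Information): it measures cortical thickness along 360 rays cast from the per-slice centroid of a binary cortical mask, and computes the curvature coefficient k(t) from a truncated Fourier expansion of the periosteal contour, assuming the contour points are evenly spaced in t. Function names, the ray step, and the number of harmonics are arbitrary choices for the example.

```python
import numpy as np

def radial_thickness(mask, n_rays=360, step=0.25, pixel_size=1.0):
    """One row of a thickness 'morphometric map': cortical intercept length along
    n_rays rays cast from the per-slice centroid of a binary cortex mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # per-slice centroid
    radii = np.arange(0.0, np.hypot(*mask.shape), step)
    thickness = np.zeros(n_rays)
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_rays, endpoint=False)):
        yy = np.round(cy + radii * np.sin(theta)).astype(int)
        xx = np.round(cx + radii * np.cos(theta)).astype(int)
        inside = (yy >= 0) & (yy < mask.shape[0]) & (xx >= 0) & (xx < mask.shape[1])
        # each sample along the ray represents 'step' pixels of length inside the cortex
        thickness[i] = mask[yy[inside], xx[inside]].sum() * step * pixel_size
    return thickness

def fourier_curvature(contour, n_harmonics=12, n_samples=360):
    """Curvature coefficient k(t) of a closed periosteal contour, from a truncated
    Fourier series of x(t) and y(t) as in the equations above."""
    x, y = contour[:, 0], contour[:, 1]
    m = len(x)
    t_pts = 2 * np.pi * np.arange(m) / m               # contour points assumed evenly spaced in t
    n = np.arange(1, n_harmonics + 1)[:, None]         # harmonic numbers as a column vector

    # A_0 and C_0 drop out of the derivatives, so only a_n..d_n are needed for k(t)
    a = 2 / m * (x * np.cos(n * t_pts)).sum(axis=1)    # a_n coefficients of x(t)
    b = 2 / m * (x * np.sin(n * t_pts)).sum(axis=1)    # b_n
    c = 2 / m * (y * np.cos(n * t_pts)).sum(axis=1)    # c_n coefficients of y(t)
    d = 2 / m * (y * np.sin(n * t_pts)).sum(axis=1)    # d_n

    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    cos_nt, sin_nt = np.cos(n * t), np.sin(n * t)

    dx  = (n * (-a[:, None] * sin_nt + b[:, None] * cos_nt)).sum(axis=0)      # x'(t)
    dy  = (n * (-c[:, None] * sin_nt + d[:, None] * cos_nt)).sum(axis=0)      # y'(t)
    ddx = (n ** 2 * (-a[:, None] * cos_nt - b[:, None] * sin_nt)).sum(axis=0)  # x''(t)
    ddy = (n ** 2 * (-c[:, None] * cos_nt - d[:, None] * sin_nt)).sum(axis=0)  # y''(t)

    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5                  # k(t)
```

Applied slice by slice (for example, to the 120 slices between the 20 and 80% increments used later in this article), these two functions would yield matrices of the same general form as the thickness and curvature maps described above.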
1.4 Aims and objectives
Here we will seek to apply the technique of morphometric mapping to an ontogenetic sample of H. sapiens to assess both endostructural variability (through the proxy of cortical thickness) and localized periosteal curvature from 20 to 80% increments of diaphyseal length. We seek to answer the following questions:
1. How does cross-sectional geometry vary along the shaft during ontogeny? Is it uniform in distribution and timing or is this a highly variable process?
2. What does the local variation in cortical thickness look like when projected to a “map” and how does this vary through ontogeny?
3. How sensitive is curvature mapping in detecting inter-group differences when applied to the periosteal surface?
4. Do our GMM analyses of humeral diaphyseal curvature indicate that distal anteroposterior curvature becomes more prevalent through adolescence, confirming the findings of Hambücken?
5. Is dense pseudo-landmarking better at detecting differences between age groups than landmark free methods?
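To make the first of these questions concrete, the sketch below shows how the conventional cross-sectional properties referred to in this article (total area, percentage cortical area, Ix/Iy, Imax/Imin, and J) might be computed for a single slice. It is an illustrative Python re-sketch under stated assumptions (a clean binary cortical mask with square pixels), not the authors' Matlab routine; all function and variable names are our own.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def cross_sectional_properties(mask, pixel_size=1.0):
    """Conventional cross-sectional geometric properties of one binary cortical slice.
    mask: 2D boolean array, True where cortical bone is present."""
    dA = pixel_size ** 2
    filled = binary_fill_holes(mask)          # periosteal envelope (cortex + medullary cavity)
    TA = filled.sum() * dA                    # total subperiosteal area
    CA = mask.sum() * dA                      # cortical area

    ys, xs = np.nonzero(mask)
    y = (ys - ys.mean()) * pixel_size         # coordinates about the section centroid
    x = (xs - xs.mean()) * pixel_size
    Ix  = (y ** 2).sum() * dA                 # second moment of area about the x axis
    Iy  = (x ** 2).sum() * dA                 # ... and about the y axis
    Ixy = (x * y).sum() * dA

    mean_I = (Ix + Iy) / 2                    # principal (max/min) second moments of area
    half_span = np.sqrt(((Ix - Iy) / 2) ** 2 + Ixy ** 2)
    Imax, Imin = mean_I + half_span, mean_I - half_span

    return {
        "TA": TA,
        "%CA": 100 * CA / TA,
        "Ix/Iy": Ix / Iy,
        "Imax/Imin": Imax / Imin,
        "J": Ix + Iy,                         # polar second moment of area
    }
```

Run at each longitudinal increment along the diaphysis, this is the kind of per-slice profile of %CA, Imax/Imin, and J that is analyzed in the remainder of the article.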
MATERIALS AND METHODS
2.1 Sample
An ontogenetic sample of humeri from the medieval urban site of Newcastle Blackgate was selected for micro-CT scanning. This site, located in northern England, is a medieval burial ground from the later Anglo-Saxon through to early Norman periods (Mahoney-Swales, ; Niinimäki et al., ), with the skeletal remains of 638 individuals, of which 231 are immature. For this study, intact diaphyses of humeri of 59 immature individuals were obtained, representing all ages from fetal to ~18 years of age. The ages at death of these individuals were previously estimated using dental development patterns, epiphyseal fusion and/or regressions based on long bone lengths (Mahoney-Swales, ). This site previously had a sample of adult humeri analyzed using pQCT (Niinimäki et al., ). The composition of the sample is shown in Table . The sexes of the individuals are unknown, as it is impossible to ascertain the sex of immature skeletal remains solely through morphological methods, and DNA work has not yet been attempted on this sample. We can, however, assume that it is a mixed-sex sample as it derives from a community cemetery. Left humeri were preferentially selected as these were generally better preserved in this sample and to avoid having to correct for bilateral asymmetry within individuals. A number of right humeri were also scanned when intact left humeri were unavailable; these scans were mirrored to augment the sample size. The sample composition of the study is shown in Table .
2.2 Data acquisition
All humeri were batch scanned using a Nikon-Metris custom bay microCT scanner at the Henry Moseley X-Ray Imaging Facility, University of Manchester. All humeri were mounted vertically in inert foam, the foam was fixed to the scanner turntable, and the highest achievable resolution for each batch was used. The method of batch scanning was chosen to maximize sample throughput, because the research questions for this study did not require a voxel resolution lower than 40 μm.
Samples were scanned at 50 KeV/235 μA with isotropic voxel sizes ranging from 0.0384 to 0.108 mm, in continuous scanning mode with 2001 projections per volume. Projections were reconstructed into stacks using CTPro (Nikon Ltd.) on a dedicated workstation. Due to scanning volume constraints, some of the larger bones had to be scanned in two passes, and these stacks were automatically aligned using Avizo® 9.0 (FEI/Thermo Fisher Scientific, Inc.). Individual bone stacks were extracted using the ROI crop function in Avizo 9.0. Individual scan parameters for each specimen are available in the Supplementary Information.
2.3 Stack segmentation
Stacks were aligned vertically to their principal axes using a combination of the “moments of inertia” tool in BoneJ 1.4.2 (Doube et al., ) and “Reorient3_TP” (available from http://www.med.harvard.edu/JPNM/ij/plugins/AlignStacks.html ). The stacks were oriented with the bone vertical and the coronal plane parallel to the y-axis. Each complete stack was then resampled to 200 equal slices in the transverse plane over the whole diaphyseal length, giving 0.5% increments, using the resample tool with spline interpolation in Avizo 9.0. (For our analyses we were only interested in the 20–80% margin, but this was the simplest method of obtaining this subsample.) This resulted in very few artifacts from partial volume averaging (a known problem with downsampling of CT data; Abel et al., ) (Supplementary Figure ). In older specimens where the epiphyses with the articular processes were fused to the diaphysis, scans were cropped at the epiphyseal margins to ensure comparability over all age groups. Images were semi-manually segmented using a Wacom Bamboo® (Wacom Co.) graphics tablet and the segmentation editor “Edit label field” in Avizo® 9.0. Slices were thresholded using the magic wand tool and inner contours were manually corrected. This approach was taken because fully automated routines, such as those in Zebaze et al. or Buie et al., did not yield satisfactory results. This is probably due to the fact that these algorithms were all developed using adult bone, whereas in immature bones the boundaries between trabecular and cortical bone are often diffuse and para-cortical bone is also present, especially in the younger age groups. All holes caused by nutrient foramina or postdepositional cracks in the bone were manually filled in, using the surrounding internal and external contour as a guide.
2.4 Processing of segmented stacks
Using a series of image processing routines written in Matlab, we were able to extract the following data from whole segmented micro-CT stacks: cortical thickness at regular radial intervals per slice, periosteal surface curvature, biomechanical properties, and pseudo-landmarks. Firstly, conventional biomechanical indices were analyzed to establish whether differences are discernible between groups. These indices were automatically quantified at every 0.5% longitudinal increment of the diaphysis, and a secondary aim was to see if the increment locations usually quantified (e.g., 20%) were actually of utility, or if the analysis of different locations along the bone shaft would be more useful. Secondly, mapping of the thickness of cortical bone was undertaken, again to track change throughout development and to see if differences between age groups were statistically significant. Significant results could have a bearing on helping to narrow down age ranges for fragmentary juvenile material.
Thirdly, analysis of the curvature of the external periosteal contour was undertaken to track the development of regions of high curvature, to see if the contours reflect muscle attachment sites, and to see if any significant correlations exist between this and cortical thickness. Finally, a GMM analysis of the entire diaphysis using coordinate data of the periosteal contour was undertaken, to compare the utility of this against our more landmark free approach. It also gives an alternative, GMM-based analysis of the proximal–distal curvature of the humerus. After segmentation, each stack of images was analyzed using a custom routine written in Matlab (Mathworks Ltd) by WIS and TO'M which characterized the thickness of the cortex in the segmented stacks. Our method for characterizing thickness broadly follows that of Bondioli et al., as the long bones of interest in this article are broadly cylindrical in cross-section and can be measured using one radial line from the centroid, rather than a point orthogonal to the tangent at a point on the external surface as in Morimoto et al. We, however, used the centroid position for each slice, rather than the medial line of the whole bone. This was also done to reduce computational complexity, although we do acknowledge that a tangentially based thickness measure may be more appropriate for bones such as ribs, which are much more ellipsoid in cross-section. The Matlab code was set to sample only slices between the 20 and 80% margins (i.e., 120 slices). Each slice had 360 measurements taken in a clockwise direction from an automatically defined centroid, and the following outputs were produced: matrix of raw thickness values; slice-based inner coordinates; slice-based outer coordinates; XYZ array of inner coordinates; XYZ array of outer coordinates; subsamples of XYZ coordinates (subsampling was user defined); heat map of the thickness matrix, both labeled and unlabeled. The routine also automatically extracted the following biomechanical parameters for each slice: total area; percentage cortical area; Imax/Imin; Ix/Iy; and J. The Matlab code for this and the thickness calculations are available in Supplementary Information. To define external curvature, another routine was written in Matlab using the broad definitions provided in Morimoto et al. Here, a folder containing the CSV files of outer points was chosen interactively. The routine then applied an elliptical Fourier smoothing to the coordinates to remove noise in the data, and the curvature coefficient, k (after Kuhl & Giardina, ), was defined per point. Outputs from this routine were: heatmap of the k matrix; CSV file of the k matrix; 2D graph of the original contour points; 2D graph of the smoothed points (the latter help with spotting accidental oversights in alignment or segmentation). The Matlab code for this is available in Supplementary Information. An easy-to-digest summary of the above workflow is shown in Figure .
2.5 Analysis of data generated
For conventional biomechanical indices, group means and standard deviations were calculated. Coefficients of variation for each 0.5% increment in length were also calculated to establish positions at which differences between groups could be discerned. For percentage cortical area, a measure of the “spike” in the data, the full width at half maximum height (FWHMH), was also measured (Weisstein, ). For thickness-based morphometric maps, all thicknesses were standardized in size before further analysis.
In this case, we decided to size standardize using the method outlined by Puymerail et al. and did not shift the matrices. This resulted in a process which is easy both to understand and to replicate. The decision not to shift the matrices, either through the circular shift technique or through a spline, was made because the stacks had been oriented prior to segmentation and the subsequent shifting of matrices removed this homology. This is probably due to the process of cortical drift in young specimens, where remodeling occurred at different speeds. The code for our size standardization is in Supplementary Information. We also repeat the caution of Morimoto et al. that this technique does not presume perfect point-to-point homology among specimens; it is used to analyze variation of global patterns around and along the entire diaphysis. For curvature maps, a size standardization step was not necessary as the data were already at the same scale. The matrices were then combined by reshaping each to one line and adding it to an overall matrix. This routine was also written in Matlab and is available as part of the supplementary material (Supplementary Information ). Principal components analysis (PCA) of the matrices was conducted in Matlab. Subsequently, the principal component (PC) scores were subjected to linear discriminant analysis (LDA) in PAST version 3.25 (Hammer et al., ). Although Puymerail and Morimoto et al. both suggested differing decomposition methods to ease subsequent data analysis, as the data have already been standardized it is no longer technically necessary to do this, and separation of groups was actually improved by removing this step. To establish how different an analysis of periosteal curvature based on Fourier decomposition would be from a more familiar GMM analysis, the same external coordinates were subjected to Procrustes superposition in the R package “Morpho” (Schlager, ). Discriminant function analysis (DFA) of the resulting eigenvectors was carried out in PAST V3 (Hammer et al., ). Extremes of the plotted PC axes were also generated using the package Geomorph (Adams & Otarola-Castillo, ).
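To illustrate the shape of this analysis pipeline (size standardization of each thickness map, flattening to one row per individual, PCA, then LDA on the PC scores by group), a compact sketch follows. It is written in Python with scikit-learn purely for illustration; the authors' actual implementation used Matlab and PAST, and the map dimensions, group labels, and number of retained components below are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def standardize_map(thickness_map):
    """Rescale one thickness map to the 0-1 range (Puymerail-style size standardization)."""
    t = np.asarray(thickness_map, dtype=float)
    return (t - t.min()) / (t.max() - t.min())

# Hypothetical input: one 120 x 360 thickness map per individual, plus an age-group label.
rng = np.random.default_rng(0)
maps = [rng.random((120, 360)) for _ in range(30)]                  # placeholder data
age_groups = rng.choice(["0-1 yr", "1-6 yr", "6-12 yr"], size=30)   # placeholder labels

# One row per individual: the standardized map flattened to a vector.
X = np.vstack([standardize_map(m).ravel() for m in maps])

# PCA of the combined map matrix, then LDA of the PC scores by age group.
pc_scores = PCA(n_components=10).fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(pc_scores, age_groups)
print("Within-sample classification accuracy:", lda.score(pc_scores, age_groups))
```

Any real inference would need cross-validation (or the discriminant analysis in PAST that the authors used); the within-sample score above is shown only to make the mechanics explicit.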
A number of right humeri were also scanned; when intact left humeri were unavailable, these scans were mirrored to augment the sample size. The composition of the sample is shown in Table .
Data acquisition
All humeri were batch scanned using a Nikon‐Metris custom bay microCT scanner at the Henry Moseley X‐Ray Imaging Facility, University of Manchester. All humeri were mounted vertically in inert foam, the foam was fixed to the scanner turntable, and the highest achievable resolution for each batch was used. The method of batch scanning was chosen to maximize sample throughput, because the research questions for this study did not require a voxel resolution finer than 40 μm. Samples were scanned at 50 KeV/235 μA with isotropic voxel sizes ranging from 0.0384 to 0.108 mm, on continuous scanning mode with 2001 projections per volume. Projections were reconstructed into stacks using CTPro (Nikon Ltd.) on a dedicated workstation. Due to scanning volume constraints, some of the larger bones had to be scanned in two passes, and these stacks were automatically aligned using Avizo® 9.0 (FEI/Thermo Fisher Scientific, Inc.). Individual bone stacks were extracted using the ROI crop function in Avizo 9.0. Individual scan parameters for each specimen are available in the Supplementary Information .
Stack segmentation
Stacks were aligned vertically to their principal axes using a combination of the “moments of inertia” tool in BoneJ 1.4.2 (Doube et al., ) and “Reorient3_TP” (available from http://www.med.harvard.edu/JPNM/ij/plugins/AlignStacks.html ). The stacks were oriented with the bone vertical and the coronal plane parallel to the y‐axis. Each complete stack was then resampled to 200 equal slices in the transverse plane over the whole diaphyseal length, giving 0.5% increments, using the resample tool with spline interpolation in Avizo 9.0. (For our analyses we were only interested in the 20–80% margin, but this was the simplest method of obtaining this subsample.) This resulted in very few artifacts from partial volume averaging (a known problem with downsampling of CT data; Abel et al., ) (Supplementary Figure ). In older specimens where the epiphyses with the articular processes were fused to the diaphysis, scans were cropped at the epiphyseal margins to ensure comparability over all age groups. Images were semi‐manually segmented using a Wacom Bamboo® (Wacom Co.) graphics tablet and the segmentation editor “Edit label field” in Avizo® 9.0. Slices were thresholded using the magic wand tool and inner contours were manually corrected. This approach was taken because fully automated routines, such as those of Zebaze et al. or Buie et al. , did not yield satisfactory results. This is probably because these algorithms were all developed using adult bone, whereas in immature bones the boundaries between trabecular and cortical bone are often diffuse and para‐cortical bone is also present, especially in younger age groups. All holes caused by nutrient foramina or postdepositional cracks in the bone were manually filled in, using the surrounding internal and external contour as a guide.
Processing of segmented stacks
Using a series of image processing routines written in Matlab, we were able to extract the following data from whole segmented micro‐CT stacks: cortical thickness at regular radial intervals per slice, periosteal surface curvature, biomechanical properties, and pseudo‐landmarks.
Firstly, conventional biomechanical indices were analyzed to establish if differences are discernible between groups. These indices were automatically quantified at every 0.5% longitudinal increment of the diaphysis, and a secondary aim was to see if the increment locations usually quantified (e.g., 20%) were actually of utility, or if the analysis of different locations along the bone shaft would be more useful. Secondly, mapping of the thickness of cortical bone was undertaken, again to track change throughout development and to see if differences between age groups were statistically significant. Significant results could have a bearing on helping to narrow down age ranges for fragmentary juvenile material. Thirdly, analysis of curvature of the external periosteal contour was undertaken to track the development of regions of high curvature, to see if the contours reflect muscle attachment sites, and to see if any significant correlations exist between this and cortical thickness. Finally, a GMM analysis of the entire diaphysis using coordinate data of the periosteal contour was undertaken, to compare the utility of this versus our more landmark‐free approach. It also provides an alternative GMM analysis of the proximal–distal curvature of the humerus. After segmentation, each stack of images was analyzed using a custom routine written in Matlab (Mathworks Ltd) by WIS and TO'M, which characterized the thickness of the cortex in the segmented stacks. Our method for characterizing thickness broadly follows that of Bondioli et al. , as the long bones of interest in this article are broadly cylindrical in cross‐section and can be measured using one radial line from the centroid, rather than a point orthogonal to the tangent at a point on the external surface as in Morimoto et al. . We, however, used the centroid position for each slice, rather than the medial line of the whole bone. This was also done to reduce computational complexity, although we do acknowledge that a tangentially based thickness measure may be more appropriate for bones such as ribs, which are much more ellipsoid in cross‐section. The Matlab code was set to only sample slices between the 20 and 80% margin (i.e., 120 slices). Each slice had 360 measurements taken in a clockwise direction from an automatically defined centroid, and the following outputs were produced: matrix of raw thickness values; slice‐based inner coordinates; slice‐based outer coordinates; XYZ array of inner coordinates; XYZ array of outer coordinates; subsamples of XYZ coordinates (subsampling was user defined); and heat map of the thickness matrix, both labeled and unlabeled. The routine also automatically extracted the following biomechanical parameters for each slice: total area; percentage cortical area; Imax/Imin; Ix/Iy; and J. The Matlab code for this and the thickness calculations are available in Supplementary Information . To define external curvature, another routine was written in Matlab using the broad definitions provided in Morimoto et al. . Here, a folder containing the CSV files of outer points was chosen interactively. The routine then applied elliptical Fourier smoothing to the coordinates to remove noise in the data, and the curvature coefficient, k (after Kuhl & Giardina, ), was defined per point. Outputs from this routine were: heatmap of the k matrix; CSV file of the k matrix; 2D graph of original contour points; and 2D graph of smoothed points (the latter help with spotting accidental oversights in alignment or segmentation).
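Purely as an illustration of the logic of these two routines, a simplified MATLAB sketch for a single synthetic cross‐section is given below. It is not the code used for the published analysis (that is provided in the Supplementary Information); the pixel size, the centroid definition, the use of a moving average in place of the elliptical Fourier fit, and all variable names are assumptions, and the Image Processing Toolbox is required.

% Synthetic ring standing in for one segmented cortical cross-section
pixelSize = 0.05;                                   % assumed voxel edge length (mm)
[X, Y] = meshgrid(1:200, 1:200);
R = hypot(X - 100, Y - 100);
sliceBW = R < 60 & R > 45;                          % logical mask: cortex = true

% Radial cortical thickness: 360 rays cast outward from the slice centroid
[rows, cols] = find(sliceBW);
cx = mean(cols);  cy = mean(rows);                  % centroid of the cortical pixels
step = 0.25;                                        % radial sampling step (pixels)
r = 0:step:hypot(200, 200);
thick = zeros(1, 360);                              % one value per degree
for a = 1:360
    xs = cx + r*cos(deg2rad(a));
    ys = cy + r*sin(deg2rad(a));
    inside = xs >= 1 & xs <= 200 & ys >= 1 & ys <= 200;
    vals = interp2(double(sliceBW), xs(inside), ys(inside), 'nearest');
    % cortical intercept along the ray (mm); for a simple closed cortex this
    % equals the cortical thickness in that direction
    thick(a) = sum(vals) * step * pixelSize;
end

% Conventional cross-sectional properties about the centroid
pxA = pixelSize^2;
x = (cols - cx) * pixelSize;   y = (rows - cy) * pixelSize;
CA    = numel(x) * pxA;                             % cortical area
TA    = nnz(imfill(sliceBW, 'holes')) * pxA;        % total subperiosteal area
pctCA = 100 * CA / TA;                              % percentage cortical area
Ix  = sum(y.^2) * pxA;   Iy = sum(x.^2) * pxA;      % second moments of area
Ixy = sum(x .* y) * pxA;
ev  = eig([Ix, -Ixy; -Ixy, Iy]);                    % principal second moments
Imax = max(ev);   Imin = min(ev);   J = Ix + Iy;    % J, the polar second moment

% Periosteal curvature along the smoothed outer contour, using the parametric
% formula k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2); a moving average stands in
% here for the elliptical Fourier smoothing used in the actual routine
B = bwboundaries(imfill(sliceBW, 'holes'));
outer = B{1};                                       % outer contour as [row, col] pairs
xb = smoothdata(outer(:, 2), 'movmean', 15);
yb = smoothdata(outer(:, 1), 'movmean', 15);
dx = gradient(xb);   dy = gradient(yb);
ddx = gradient(dx);  ddy = gradient(dy);
k = (dx.*ddy - dy.*ddx) ./ (dx.^2 + dy.^2).^1.5;    % curvature coefficient per contour point

Note that in this sketch the sign of k depends on the direction in which the boundary happens to be traversed, and the crude handling of the contour end points would need refinement for real data.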
The Matlab code for this is available in Supplementary Information . An easy to digest summary of the above workflow is shown in Figure .
Analysis of data generated
For conventional biomechanical indices, group means and standard deviations were calculated. Coefficients of variation for each 0.5% increment in length were also calculated to establish positions at which differences between groups could be discerned. For percentage cortical area, a measure of the “spike” in the data, the full width at half maximum height (FWHMH), was also measured (Weisstein, ). For thickness‐based morphometric maps, all thicknesses were standardized in size before further analysis. In this case, we decided to size standardize using the method outlined by Puymerail et al. and did not shift the matrices. This resulted in a process which is easy both to understand and to replicate. The decision not to shift the matrices, either through the circular shift technique or through a spline, was made because the stacks had been oriented prior to segmentation and the subsequent shifting of matrices removed this homology. This is probably due to the process of cortical drift in young specimens, where remodeling occurred at different speeds. The code for our size standardization is in Supplementary Information . We also repeat the caution of Morimoto et al. that this technique does not presume perfect point‐to‐point homology among specimens; it is used to analyze variation of global patterns around and along the entire diaphysis. For curvature maps, a size standardization step was not necessary as the data were already at the same scale. The matrices were then combined by reshaping each to one line and adding it to an overall matrix. This routine was also written in Matlab and is available as part of the supplementary material (Supplementary Information ). Principal components analysis (PCA) of the matrices was conducted in Matlab. Subsequently, the principal component (PC) scores were subjected to linear discriminant analysis (LDA) in PAST version 3.25 (Hammer et al., ). Although Puymerail et al. and Morimoto et al. both suggested differing decomposition methods to ease subsequent data analysis, this is no longer technically necessary once the data have been standardized, and separation of groups was actually improved by removing this step. To establish how different the analysis of periosteal curvature based on Fourier decomposition would be from a more familiar GMM analysis, the same external coordinates were subjected to Procrustes superposition in the R package “Morpho” (Schlager, ). Discriminant function analysis (DFA) of the resulting eigenvectors was performed in PAST V3 (Hammer et al., ). Extremes of the plotted PC axes were also generated using the package Geomorph (Adams & Otarola‐Castillo, ).
RESULTS
3.1 Conventional biomechanical parameters
3.1.1 Percentage cortical area
Percentage cortical area provides a standardized measure of areal bone density. Here (Figure ), it can be seen that a peak of cortical bone volume is achieved between the 40 and 60% increments in all age groups, but that this peak is more marked in the fetal/neonatal and early walking individuals, as evidenced by Figure and Table , which show the FWHMH of the data. Cortical bone volume is highest in the neonatal age category, then falls successively during infancy before rising progressively during childhood and adolescence.
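As a brief illustration of the FWHMH measure reported here and defined above, the following minimal MATLAB sketch applies one common definition of full width at half maximum height to a synthetic percentage cortical area profile sampled at 0.5% increments; the synthetic profile, the treatment of the baseline and the variable names are assumptions, and the published measure may differ in detail.

profile = 55 + 25 * exp(-((1:120) - 60).^2 / 450);   % synthetic %CA "spike" over the 20-80% margin
baseline   = min(profile);
halfHeight = baseline + (max(profile) - baseline)/2; % half maximum height above the baseline
above = find(profile >= halfHeight);                 % increments lying above half maximum
fwhmh = (above(end) - above(1)) * 0.5;               % width in % of diaphyseal length (0.5% per increment)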
3.1.2 Ix/Iy, the circularity of cross‐section
This measurement describes how close the cross‐section is to a perfect circle (where Ix/Iy would be 1) or whether it is more ellipsoid. Values below 1 indicate that the ellipse is more expanded in the anteroposterior direction, and values above 1 indicate expansion in the mediolateral direction. Figure shows this over the 20–80% margin for mean values of each group. Smoothing was achieved by locally selecting 20% of points to influence each point's smoothed value (Cleveland, ). Deviation from circularity varies considerably along the diaphysis. In the earliest stages, the shaft is much more ellipsoid in the mediolateral direction at the 20% margin, and this decreases steadily, resulting in a relatively circular midshaft. A more antero‐posteriorly ellipsoid shape is observable in the proximal humerus. This is most marked in the fetal/neonatal stages, but the infant crawling group also exhibits this shape characteristic in the proximal portion. The infant walking, young child, and older child stages broadly follow the pattern but have a fall‐off in ellipsoidal form toward the proximal margin. This repeats itself in the adolescent stage, but at the 40% margin the bone reaches a peak of non‐circularity in the anteroposterior direction.
3.1.3 Imax/Imin, evenness of rigidity
The patterns observed here are very similar to those for Ix/Iy (Figure ) but more exaggerated for Imax/Imin. As such, the discussion of Ix/Iy (Figure ) holds for this as well. To ease comparison, both sets of variables (Figures and ) are displayed with the same y‐axis scale. It is difficult to compare the distribution of the Imax/Imin data statistically between the groups as they all show different modes of distribution. For two groups (fetal/neonate and infant crawling), the shape of the graph makes measuring the FWHMH impossible. In lieu of this, group mean maximum heights, with the margin at which they occur, are provided in Table .
3.1.4 J, torsional rigidity
In all age groups, J is at its lowest between the 30 and 40% margin (Figure ). All groups are very slightly bimodal, with a peak at the end closest to the distal margin, and broadly increase the second moment of area toward a second peak at or near the proximal margin. For visual ease, the results for the older child and adolescent groups are presented on a separate graph of a different scale (Table ). If we look at the coefficient of variation within groups (Figure ), the following patterns become evident. Most groups increase in variation toward the midshaft and drop off in variation toward the proximal and distal margins. The exceptions are the adolescent group, which decreases over the entire range, and the older child group, which remains fairly flat. As can be seen from the graph, the data are noisy, which makes locally weighted smoothing (LOESS) fitting effective for only a minority of age groups. This suggests that the midshaft is a more effective location at which to take measurements, although only for younger age groups.
3.2 Cortical thickness
The standardized mean heat map for each group is presented in Figure . Here, a broad trend toward the organization of the internal bone structure can be seen, with the neonates appearing to have a unimodal distribution of cortical thickness across the map, whereas the later stages appear to be more complex/multimodal. In younger age groups, especially neonatal individuals, there are two main regions of diffusely increased thickness toward the distal margin.
In older age categories, there are four regions of increased thickness: one located distally and three proximally. Individual maps are available in the Supplementary Information . When “re‐wrapped” around the bone's diaphysis in adolescents, this patterning resembles the pattern of entheses on the bone; however, the overall correlation between cortical thickness and localized surface curvature is weak. Results for DFA (Figure ) show a good separation of groups. Axes 1–3 explain 83.33% of all variation. Fetal/neonate and young child both form visually distinct groups; however, overall the classification of individuals is poor, with only 12.07% of individuals being correctly classified on jackknifing. Results for Axes 1–3 of the PCA of cortical thickness are displayed in Supplementary Figure . In the PCA, the fetal/neonate, infant crawling and infant walking groups all separate clearly from the rest of the groups. The first three components accounted for 85% of all variance. Graphs of PC1 versus PC2 and PC2 versus PC3 are provided as Supplementary Figure . Using the scores from DFA rather than from PCA helps the eye to see grouping patterns more clearly, although, with these analyses of high‐resolution data, a different grouping technique such as between‐group PCA (as implemented by Mitteroecker & Bookstein, ) may be more effective (there is, however, debate about the effectiveness of between‐group PCA; Bookstein, ; Cardini et al., ). In a situation like that in this article, where there are many more variables than specimens, between‐group PCA may result in observing patterns in the data where there are none. As such, we err on the side of caution and use traditional PCA.
3.3 Periosteal curvature
The surface curvature maps for mean values in each age group are shown in Figure . Here, three longitudinal bands of increased curvature are visible, which are most marked toward the distal end of the bone. These regions correspond well to the overall shape of the developing humerus, which has a rounded triangular cross‐section with three broad ridges located anteriorly, posteromedially, and posterolaterally (the latter two extend to the supracondylar ridges). Individual maps are available in the Supplementary Information . Results for Axes 1 and 2 of the LDA are displayed in Figure . Graphs of PC1 versus PC2 and PC2 versus PC3 are shown in Supplementary Figure . Regarding thickness, “Infant walking” bones group very differently to the rest of the age classes, and infants only just border the other groups. In the case of external surface curvature, the fetal/neonatal and infant crawling groups are very different from the other age groups. Correlation of raw scores between periosteal curvature and cortical thickness was not statistically significant at p = .05, using Pearson's correlation coefficient.
3.4 GMM analysis
GMM analysis of the diaphyses separated the fetal/neonatal, infant, and adolescent age groups fairly confidently. A plot of the discriminant analysis of the PC scores of these data is shown in Figure . Extreme shapes from each of the first three principal components are shown in Figure . What can be seen immediately is that the GMM analysis was more effective at analyzing longitudinal curvature than the periosteal surface curvature analyzed using maps. The first three principal components accounted for 96.6% of the total variation. The PCA was not very good at separating the differing groups (Supplementary Figure ), and LDA only correctly classified 19.6% of specimens on jackknifing.
This suggests that this dense sampling of semilandmarks is excessive for the determination of group affiliation.
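The jackknifed classification rates quoted in this and the preceding sections were obtained by running the discriminant analyses in PAST. Purely for illustration, a roughly equivalent leave‐one‐out procedure in MATLAB (Statistics and Machine Learning Toolbox) is sketched below; the synthetic data, the number of retained components and the variable names are assumptions rather than the values actually analyzed.

rng(1);                                    % reproducible synthetic example
X = randn(59, 360);                        % stand-in for 59 specimens x flattened map variables
g = categorical(randi(7, 59, 1));          % stand-in for the seven age classes
[~, score] = pca(X);                       % PCA of the combined matrix
pcs = score(:, 1:10);                      % retain the first few principal components
mdl = fitcdiscr(pcs, g);                   % linear discriminant analysis on the PC scores
cv  = crossval(mdl, 'Leaveout', 'on');     % jackknifing = leave-one-out cross-validation
pctCorrect = 100 * (1 - kfoldLoss(cv));    % percentage of specimens correctly classified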
DISCUSSION
4.1 Conventional biomechanical indices
For percentage cortical area, Imax/Imin and Ix/Iy, bones tend to exhibit peaks between the 40 and 50% margins, with different peaks for each age group. It must be emphasized, however, that variation, both along the bone and between groups, even in our moderate‐sized sample, is large and not consistent. With J, resistance to torsion, all age groups see peaks at the proximal and distal ends, which is evidence of buttressing against strain. As the ends of the humerus are where we see the greatest density of muscle attachments and are close to articular surfaces where stress may be concentrated, this is not a surprising finding. The 30–40% margin tends to be the area of least resistance to torsion. This would therefore suggest that analyses which only examine the 40% or 50% margin (e.g., Cowgill, ; Trinkaus et al., ) may actually miss areas of the bone which are both potentially of interest for inter‐group discrimination and biomechanically significant. Of interest is that percentage cortical area is highest in fetal/neonatal individuals, and then rapidly falls to its lowest in infants. It then progressively increases again in children and then adolescents. This suggests that the medullary cavity is very small in the prenatal period, a feature also noted by Cambra‐Moo (though they only included a single neonatal specimen in their study). This refines the findings of Cowgill and may be due to our inclusion of paracortical bone, which is easier to identify in μCT images than in biplanar X‐rays, as well as to our greater subdivision of the age groups. This improves our understanding of the early growth of human long bones. There is a rapid deposition of cortical and paracortical bone in early embryological/fetal stages (as can be seen from Carnegie stage 15; De Bakker, ), which provides a “scaffold” for muscles and tendons to attach to that is more stable than cartilage. This overproduction of bone during gestation has also been observed (albeit for trabecular bone) in the vertebrae (Acquaah et al., ) and the femur (Milovanovic et al., ). It has been suggested that this overproduction is probably a product of rapid endochondral ossification. Here, a large amount of bone is laid down rapidly in a genetically programmed process which ensures rapid growth, similar to the cartilage model (Milovanovic et al., ). Postnatally, the need to develop trabecular bone to withstand less predictable loading means that this early cortical bone is rapidly lost.
It then recovers as growth proceeds and body mass increases. The fact that cortical bone occupies over 50% of bone volume for most groups suggests that it is selected for, as it is the densest type of bone and able to withstand the greatest amount of force. The covariance ratio also increases from fetus to infant, then declines. This is probably because the medullary cavity is expanding relatively in the period of rapid postnatal growth. Overall, these results suggest that the conventional analysis of the internal growth and shape change of the humerus demonstrates a highly variable process between individuals. As the sample analyzed here is cross‐sectional rather than longitudinal (like those studied by Tanner, ), this high variability could perhaps be accounted for by the uncertainty in aging of individuals. Further studies using individuals of known age and sex, either from clinical or osteological data, may help to clarify these trends somewhat.
4.2 Thickness maps
Over development, the thickness maps tend to show an increase in the organization of the internal structure of the humerus. In fetal/neonatal and infant age groups, there is very little trabecular bone, and para‐cortical bone is also present. There is a large concentration of cortical bone in the proximal half of the humerus on the anterior portion. This is due to a large proportion of early growth occurring at the proximal growth plate and the lack of trabecular bone in extremely juvenile individuals. It would appear, based on the maps for older age groups, that what is generally termed “para‐cortical bone” does not all transmute to cortical bone; in fact some becomes trabecular bone, which is also known to become more regular in structure through development (Cambra‐Moo, ; Carter & Orr, ). The concentration of cortical bone shifts toward the midshaft in these later groups (from the “Infant Walking” stage onwards) and areas of deposition become more discrete. This is probably linked to both the redeposition and the regularization of structure of trabecular bone (Cambra‐Moo, ; Carter & Orr, ). It can be observed that the shift of cortical thickness is mainly to the anterior and medial portions of the bone, which are where the triceps and brachialis both attach. The discriminant analysis allows us, in this sample, to start to distinguish the first three stages from all others. With larger samples and more refined age estimates (either by using known‐age samples or refined dental aging, whether histological or radiographic), it may be possible to distinguish better between the later age groups as well. It seems, however, that this sample is not large enough, or the groups too homogeneous, for intergroup divisions to be greater in these later stages than intra‐group ones. Regarding the sample composition, although the young child group is the largest, there are no statistically significant differences between the sample sizes for the different age groups (chi‐square p = .43), so distortion of the results by an overly dominant subsample is not of concern here. Some of the samples are quite small, however, and more differences might have emerged if larger sample sizes had been studied. Unfortunately, due to the large overlap between many of these groups and the large error in classification of samples upon jackknifing, these results should be treated with caution. Downsampling of the data may also be useful, to see if the groupings observed here become easier to distinguish.
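As a concrete, hedged example of the downsampling suggested above, one simple option is block averaging of each size‐standardized thickness matrix before ordination; the block sizes and variable names below are assumptions, and the vector‐dimension form of mean requires MATLAB R2018b or later.

T = rand(120, 360);                        % stand-in for one size-standardized thickness map
fRow = 4;   fCol = 10;                     % block sizes along the length and around the section
blocks = reshape(T, fRow, 120/fRow, fCol, 360/fCol);
Tdown  = squeeze(mean(blocks, [1 3]));     % 30 x 36 map; each cell averages a 4-increment x 10-degree patch

Whether such a reduction actually improves group separation would, of course, need to be tested against the full‐resolution results.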
4.3 Curvature maps
Periosteal curvature tells a slightly different story to that of cortical thickness. The external surface of fetal/neonatal bone appears to be relatively clearly defined into ridges and peaks. This is probably because development is canalized at this stage and normal fetal development in this sample progressed in a homogeneous fashion. We also know from previous work that this population was relatively well‐nourished (Mahoney‐Swales, ) and that it was a homogeneous population in terms of origin, which adds weight to this argument. Muscle contractions in utero will therefore have made a proportionally large impact on the very delicate periosteal surfaces at this stage, an observation we have only been able to make because of the adequate resolution of the scan data. For the infant and young child stages, the pattern of marking is more diffuse. One potential influence may have been the swaddling of infants, thought to have been common in the medieval period, which would have rendered them fairly immobile for extended periods during early postnatal development. This is, however, speculative, as we have no direct textual evidence of this being practiced in Newcastle during this time period. Another is the large amount of individual variation in achieving standard developmental milestones that is observed in all communities as infants become more mobile and independent. In the older child and adolescent stages, markings become more defined, which probably indicates the assumption of regular patterns of activity by these individuals. As our sample is medieval, one must also bear in mind that individuals over the age of approximately 13 would have been assumed to be capable of, and would have been expected to participate in, the full range of adult physical activities. The maps also show shallow or negative curvature in the regions where the triceps and brachialis muscles attach, suggesting that these muscles (which act to flex and extend the arm at the elbow joint) play a dominant role in the development of humeral shape. Discriminant analysis distinguishes the fetal and early crawling age groups from all the others, showing that the differences between groups are real. Again, however, the classification under jackknifing was poor and these results should be treated with caution.
4.4 Geometric morphometric analysis
Dense sampling undertaken in an automated and indiscriminate fashion, as here, lends itself more to a broad separation of groups and overall object shape. This suggests that this application of GMM is more effective at analyzing longitudinal curvature than periosteal surface curvature. Our results differ subtly from Hambücken's, as we find that longitudinal curvature of the diaphysis is probably related to bone length, with a shorter humerus likely to be more curved. This parallels the results found in the femur, radius and ulna in adults by De Groote . Developmental differences in the action of the deltoid may also have an influence, but PC1–PC3 indicate that the size of the bone is a major contributing factor to overall curvature. It can be observed, however, that fetal and neonatal bones are very separated from the other groups. This is due to the stoutness of fetal and neonatal bones relative to their length, which distinguishes them from all other age groups. Indeed, infant humeri are also largely separate from the other age groups as they are also stout, but in a different fashion.
This is also probably due to the rapid laying down of bone observed internally during the fetal period in this bone, the vertebrae (Acquaah et al., ) and the femur (Milovanovic et al., ). Another question that can be asked is how homologous the landmarks from this type of analysis are. If they are not homologous, then this may violate one of the principles of GMM, which is, after all, the analysis of changes in homologous structures. Stern et al. have found in mouse models that one can effectively track different portions of the bone throughout development, and that the relative position of major structures stays the same, that is, they scale allometrically. We concur with this argument as our study is limited to the same bone from the same species. How effective this is when multiple species are analyzed (as in Boyer et al., ) is a matter of considerable debate (e.g., Gao et al., ).
4.5 Use of PCA and LDA
One of the findings of this study has been that PCA and LDA are only moderately effective at differentiating between the different age groups. This is probably due to the high dimensionality of the data and the fact that there is a high degree of internal correlation between points, as this study sampled measurements extremely densely. There are two potential routes for ameliorating this problem. Firstly, the data could be downsampled, reducing both the dimensionality and the correlation between neighboring points. Secondly, it may be that PCA and LDA are simply not appropriate tools at this level of resolution, and that alternatives need to be sought. There are multiple techniques available for this. An easy‐to‐implement solution may be between‐group PCA (as implemented by Mitteroecker & Bookstein, , but see the cautions in Bookstein, , and Cardini et al., ). An alternative is to use different pattern recognition methods, such as adversarial neural networks (e.g., Nielsen, ; Radford et al., ) or more general machine learning approaches (as carried out by Püschel et al., ).
4.6 Future work
This study has demonstrated the utility of approaching a large proportion of the diaphysis using automated analytical techniques. We would, therefore, re‐emphasize the need for researchers to examine multiple sites throughout the diaphysis of the humerus in order to effectively track variation and to potentially discriminate more finely between groups. Where possible, a whole‐bone approach should be employed. The most significant finding is the rapid decline in cortical bone postnatally, after excess production in utero, which concurs with findings from the vertebral column and the femur (Acquaah et al., ; Milovanovic et al., ). Both cortical thickness and periosteal curvature mapping can be related to muscular development, especially that of the brachialis and triceps. GMM analysis revealed that longitudinal curvature of the humerus is largely allometric, as previously found in adult femora, radii, and ulnae (De Groote, , ). Future research should look at different human groups, in order to establish whether the patterns observed here are applicable to H. sapiens more generally. Examination of comparative juvenile fossil samples (e.g., H. neanderthalensis ), as well as ontogenetic series of hominoid primates (as examined for the femur by Morimoto et al., , ), would also be fruitful. Further work will also focus on expanding this methodology to other long bones and to other species, as well as to known‐age samples, in order to test the possibility of finer discrimination.
The use of different pattern recognition methods, such as adversarial neural networks (e.g., Nielsen, ; Radford et al., ) or more general machine learning approaches (as carried out by Püschel et al., ), will also be explored, as these may be more effective for classification than conventional multivariate methods.
Thomas George O'Mahoney: Conceptualization (lead); data curation (lead); formal analysis (lead); investigation (lead); methodology (equal); project administration (equal); software (equal); validation (equal); visualization (equal); writing – original draft (lead); writing – review and editing (lead). Tristan Lowe: Investigation (supporting); methodology (supporting); resources (equal); validation (supporting); writing – review and editing (supporting). Andrew Timothy Chamberlain: Conceptualization (supporting); formal analysis (supporting); funding acquisition (lead); methodology (supporting); resources (equal); supervision (equal); validation (equal); writing – review and editing (equal). William Irvin Sellers: Conceptualization (equal); formal analysis (supporting); methodology (supporting); software (equal); supervision (equal); writing – review and editing (equal). Appendix S1 Supporting Information.
Australasian paediatric gastroenterologist practices of coeliac disease diagnosis before and during the COVID‐19 pandemic
1a91ec48-46f8-4d86-abfd-f69568b9e456
10086844
Internal Medicine[mh]
The European Society of Paediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) introduced a non‐biopsy coeliac disease (CD) diagnosis pathway for selected children a decade ago and the guidelines were updated in 2020. A range of coeliac serology tests are frequently ordered in children suspected of having CD. The majority of Australasian gastroenterologist respondents reported they routinely utilised the 2020 ESPGHAN diagnostic criteria in eligible children to diagnose CD. Half of the Australasian gastroenterologist respondents started practising non‐biopsy CD diagnosis prior to the COVID‐19 pandemic, while an additional quarter of clinicians have practised non‐biopsy CD diagnosis subsequently. There was wide variation in the practices of Australasian paediatric gastroenterologists when ordering initial screening blood tests for children suspected of having CD, although TTG‐IgA was the most frequently ordered test.
Participants
An email invitation was sent to all practising paediatric gastroenterologists in Australia and NZ to participate in a single anonymous online survey via the Australasian Society of Paediatric Gastroenterology, Hepatology and Nutrition (AuSPGHAN) and Paediatric Network, GESA (Gastroenterology Society of Australia) bulletin board. The AuSPGHAN and Paediatric Network, GESA are closed membership groups. The bulletin board consists of paediatric gastroenterologists and trainees who are currently working or have worked in Australia and NZ. Gastroenterologists who were not working in Australasia at the time of the survey period and trainees were excluded. The subcommittee of the University of Otago Human Ethics Committee approved the study (Reference number: D21/352).
Survey
The authors (Executive members of the PEDiatric Australasian Gastroenterology Research NEtwork: PEDAGREE) developed the survey questionnaire (Data ) and the final version was posted to an online platform (Qualtrics Version 2021: Provo, UT, USA). The survey was open for 2 weeks from 22 November 2021. This period coincided with various COVID‐19 restrictions occurring in Australia and NZ, including reduced endoscopy lists. During the survey period, two reminder emails were sent to encourage participation. The survey collected basic demographics, the impact of endoscopy restrictions at the time of the survey, preferences for coeliac screening blood tests and CD diagnostic methods. Reasons for participants wanting, or being reluctant, to practise non‐biopsy CD diagnosis were explored. Survey responses were excluded if <50% of the questionnaire was completed.
Statistical analysis
Data were exported from Qualtrics into IBM SPSS Statistics version 28.0 (IBM Corp., Armonk, NY, USA) for descriptive statistical analysis.
Respondent background
A total of 42 responses were received and three respondents were excluded due to incomplete questionnaires. Thirty‐nine responses (100% completion rate) were included in the final analysis: 33 (85%) from Australia and 6 (15%) from NZ. Based on the known practising paediatric gastroenterologists in Australasia, this represents a 66% response rate (Table ). Of the 39 respondents, 9 and 7 practised solely in public hospitals and private practice, respectively, while the rest had combined public and private practices. Overall, physicians reported variable restrictions to endoscopy access due to the COVID‐19 pandemic in both countries, with those practising in the public hospital setting reporting more restrictions than those in private practice (Table ).

Practices of initial coeliac screening
TTG‐IgA was the most frequently ordered initial coeliac screening test for children of any age, reported by 100% of respondents for children >2 years of age and by 97% for children ≤2 years of age (Fig. ). This was followed by deamidated gliadin peptide IgG antibody (DGP‐IgG), utilised by 59% of respondents for children >2 years of age and 69% for children ≤2 years of age. EMA was ordered by 41% of clinicians for children of all ages. Thirty‐six respondents (92%) stated that total immunoglobulin (Ig) A was routinely ordered as part of their initial coeliac screening tests; the three others did not request this as their local laboratory services performed total IgA routinely. When physicians were asked whether other blood tests were ordered with their initial coeliac screening tests, full blood examination and ferritin level were the most requested tests (95% of respondents for both) (Data ). A minority (5%) of respondents would not order any simultaneous tests.

The practice of CD diagnosis
At the time of the survey, 34 out of 39 (87%) gastroenterologists reported they practised non‐biopsy CD diagnosis in those children who fulfilled the criteria. The remaining five physicians practised biopsy‐proven CD diagnosis only. When stratified by country, 28 (85%) Australian respondents practised non‐biopsy CD diagnosis, compared to all of the NZ respondents. When respondents were asked whether the COVID‐19 pandemic had impacted their practice of diagnosing CD in children, half of the respondents (21 out of 39, 54%) stated they started practising non‐biopsy CD diagnosis before the pandemic and did not change their practice during the pandemic (Fig. ).
A quarter of respondents (n = 11) reported that they started practising non‐biopsy CD diagnosis during the COVID‐19 pandemic. One of the 11 respondents reported that non‐biopsy CD diagnosis was started during the pandemic because there were sufficient local data to support such practice, rather than in response to the pandemic. Two others (5%) reported they practised non‐biopsy CD diagnosis at the time of the survey but did not report when they started such practice, and the rest (n = 5, 13%) continued to practise biopsy‐proven CD diagnosis before and during the pandemic. Among the 34 clinicians who practised non‐biopsy CD diagnosis, 26 (77%) followed the ESPGHAN 2020 guidelines. Three (9%) continued to follow the ESPGHAN 2012 guidelines, while another three (9%) followed a variation of the ESPGHAN 2020 guidelines (e.g. using the guidelines only in the setting of consistent symptoms) and the rest reported other variations (Fig. ). Two respondents selected more than one criterion in their practice of non‐biopsy CD diagnosis. All NZ respondents followed the ESPGHAN 2020 guidelines in diagnosing non‐biopsy CD. Furthermore, 25 out of 34 (74%) respondents who practised non‐biopsy CD diagnosis would offer biopsy confirmation of the CD diagnosis to those patients who fulfilled the non‐biopsy CD criteria. Eight out of 34 (24%) would not offer biopsy confirmation of CD, and one respondent did not provide a response. Among the 31 respondents who requested a second blood sample as part of their criteria in diagnosing CD without a biopsy, 15 (48%) stated they would refer to any laboratory service, another 15 (48%) would refer patients only to specific laboratory services they trust, and 1 (4%) would refer patients to the same laboratory service where the serology was initially performed. Of the four other respondents who accept a single blood sample as part of their criteria in diagnosing non‐biopsy CD, two (50%) would refer patients to specific laboratory services they trust and the other two did not provide a response. In addition, this study explored whether those 34 physicians who utilised non‐biopsy CD diagnosis criteria would specifically exclude certain patients with underlying comorbidities. Most respondents (24 out of 34, 71%) would exclude patients with IgA deficiency, followed by children with type 1 diabetes mellitus (T1DM) and Trisomy 21 (59% and 35% of respondents, respectively) (Data ). Among the 34 respondents who practised non‐biopsy CD diagnosis, 26 (76%) would proceed to biopsy confirmation in patients with specific comorbidities (as listed in Data ). Two others would proceed to biopsy confirmation in patients with T1DM who have persistently abnormal coeliac serology without gluten restriction, and one other respondent stated that they would be happy to diagnose non‐biopsy CD (based on the ESPGHAN criteria) if they were confident in the coeliac serology assays. The other five respondents reported that they did not apply any comorbidity exclusions in their non‐biopsy CD diagnosis practices.

Reasons provided by respondents for wanting, or being reluctant, to practise non‐biopsy CD diagnosis
The majority of the 34 respondents who utilised non‐biopsy CD diagnosis criteria reported that such practice reduced the need for endoscopy (94%) and that it was supported by good evidence (88%) (Data ). Almost two‐thirds (62%) of respondents thought that non‐biopsy CD diagnosis would reduce the waiting time for other children needing an endoscopy.
Other supportive reasons related to the COVID‐19 pandemic were also reported. Of the five physician respondents who practised biopsy‐proven CD diagnosis only, all felt uncertain of the reliability of their local laboratory assays (Data ). Almost two‐thirds of this group reported personal experience of false‐positive results as a reason not to use non‐biopsy protocols. One of these five respondents felt there was insufficient evidence worldwide to support such practice.

This survey found wide variation in the practices of Australasian paediatric gastroenterologists when ordering initial screening blood tests for children suspected of having CD, although TTG‐IgA was the most frequently ordered test.
A majority (87%) of these Australasian gastroenterologists reported practising non‐biopsy CD diagnosis at the time of the survey, and only a minority continued to rely solely on biopsy‐proven CD diagnosis. Three‐quarters of the respondents reported following the latest 2020 ESPGHAN CD guidelines. However, there was a wide range of perspectives on which comorbidities to exclude from the application of non‐biopsy criteria. The preferred initial coeliac screening test by this group of gastroenterologists was TTG‐IgA for all children, which is aligned with other guidelines , , , , , , , and with the findings of an earlier survey in the Australasian region. However, other coeliac serologies were commonly ordered concurrently with TTG‐IgA, in particular DGP‐IgG and EMA. The current survey did not explore the rationale for clinicians to order one or other combination of tests. Interestingly, within the 2 years between the earlier (2019) and the current study (end of 2021), the percentage of practitioners utilising non‐biopsy CD diagnostic criteria increased from 21% to 87%. The change in practice was predominantly in the Australian practitioners, whereas the NZ respondents continued to practise non‐biopsy CD diagnosis in both surveys. It was not possible to track the responses of individual respondents due to the anonymity of both surveys. Notably, there were more respondents who completed the current survey than previously. Nevertheless, it is possible that these physicians may have considered a non‐biopsy diagnosis feasible consequent to the removal of coeliac HLA typing and of the requirement for symptoms suggestive of CD from the updated ESPGHAN guidelines. In the current study, a quarter of the respondents reported that they started practising non‐biopsy CD diagnosis during the COVID‐19 pandemic. During the current pandemic, the individual states and territories of Australia have regulated their specific public health directions, including endoscopy restrictions in public and private hospitals. Although NZ has had a national public health approach, endoscopy restrictions have varied across regions reflecting local pressures. As the current study reflected the views of practitioners across both countries, it was not surprising that a wide range of endoscopy restrictions was reported. The respondents were asked to comment only on their current level of restrictions (during the 2‐week survey period), as impacts over time could not be recorded clearly. Furthermore, this study did not explore the relationship between the impact of endoscopy restrictions and the respondents' decisions to maintain or change practice. At least three‐quarters of the surveyed practitioners reported that they used the latest ESPGHAN 2020 guidelines in their practice of non‐biopsy CD diagnosis, while others reported the use of various diagnostic criteria. The current survey did not explore whether local coeliac serology validations had been performed to support such variation in practice. However, there were differing views from respondents on which laboratory services they would use for a single or second blood sample when applying the non‐biopsy criteria. To date, there are three published coeliac serology validations in Australasia that have supported the use of the ESPGHAN non‐biopsy CD criteria in selected children.
, , The second blood sample for EMA is recommended in the latest ESPGHAN guidelines to reduce false‐positive cases in those patients who had TTG‐IgA > 10× ULN on their first test. The respondents gave diverse perspectives on which children with specific comorbidities should be excluded from consideration of diagnosis using a non‐biopsy protocol. Selective IgA deficiency was the comorbidity that respondents most often felt should be excluded from the application of non‐biopsy CD criteria, this being consistent with the latest ESPGHAN guidelines. T1DM was the second most commonly reported comorbidity that the group felt should not be diagnosed using the non‐biopsy approach. This partly contradicts the 2020 ESPGHAN guidelines, which recommend that non‐biopsy CD diagnosis can be made in symptomatic children with T1DM, with only a conditional recommendation if the child is asymptomatic. The authors of the guidelines acknowledged that studies with large numbers of children with coexisting T1DM and CD were not included in their literature search. Despite this, the earlier PRoCeDe study and a study from NZ evaluating the ESPGHAN criteria each found two false‐positive cases, and one of the cases in each study was a symptomatic child with T1DM. In contrast, a recent study from Western Australia reported no false‐positive cases in their validation of the 2020 ESPGHAN criteria; however, details of the patients' coexisting conditions were not provided. The current study has some limitations. First, the recorded perspectives reflected the opinions of two‐thirds, but not all, of the practising Australasian gastroenterologists. However, it did include a range of practice locations across the region. Furthermore, the perspectives provided in this study may be biased towards Australian practices, given there were more participating Australian than NZ physicians. In conclusion, a majority of this group of Australasian paediatric gastroenterologists reported that they routinely practised non‐biopsy CD diagnosis in eligible children. Half of the respondents started non‐biopsy diagnosis prior to the COVID‐19 pandemic, and the pandemic has further influenced an additional quarter of clinicians to start practising non‐biopsy CD diagnosis. Only a minority continued to rely solely on biopsy confirmation. Of those respondents who practised non‐biopsy CD diagnosis, three‐quarters used the 2020 ESPGHAN guidelines. Given that more Australasian physicians are now utilising non‐biopsy CD diagnostic criteria, a consistent diagnostic approach and standardisation of coeliac serology assays across Australasia will be increasingly important.
Data S1. Questionnaire used for the survey.
Data S2. Thirty‐nine respondents' views regarding concurrent tests they would order in addition to the initial screening test in children suspected of having coeliac disease.
Data S3. Perspectives of 34 physicians who practised non‐biopsy coeliac disease (CD) diagnosis on whether non‐biopsy CD criteria should or should not be applied in certain comorbidities.
Data S4. Reasons provided by respondents for wanting, or being reluctant, to practise non‐biopsy coeliac disease (CD) diagnosis. (a) Thirty‐four clinicians who routinely practised non‐biopsy CD diagnosis provided their reasons for wanting such practice. (b) Five respondents who did not practise non‐biopsy CD diagnosis provided their reasons for not wanting such practice.
The FIGO ovulatory disorders classification system
8cbaba02-a4bb-4b77-9d21-a5a1852a3b49
10086853
Gynaecology[mh]
INTRODUCTION
Ovulatory disorders are common in girls and women of reproductive age and are associated with episodic or chronic dysfunction of the hypothalamic–pituitary–ovarian (H‐P‐O) axis. , These disorders may adversely affect quality of life when they manifest with infertility or as aberrations in menstrual function. Menstrual symptoms may include altered frequency or regularity of flow, as well as prolonged or heavy menstrual bleeding (HMB), or even a complete absence of menstrual blood flow, referred to as amenorrhea. Reproductive function may be adversely impacted, as chronic anovulation is a common cause of infertility. While there are numerous known causes of and contributors to ovulatory disorders, the entire spectrum of mechanisms of pathogenesis remains to be fully elucidated. Ovulatory disorders are often associated with underlying endocrinopathies, neoplasms, psychological and psychiatric conditions, and the use of specific pharmacologic agents. Optimally effective research, teaching, and clinical management of ovulatory disorders have been impeded by the absence of a comprehensive, internationally recognized and utilized structured classification system. The WHO system for ovulatory disorders was first presented as a monograph in 1973 and has been modified over time in various reviews and book chapters by single authors rather than through international consensus. Some 50 years later, much more is known about ovulatory disorders. As a result, the International Federation of Gynecology and Obstetrics (FIGO) has undertaken a process whereby the global community of stakeholders involved with ovulatory disorders has designed a new system to better meet the needs of investigators, clinicians, and medical educators worldwide. The development of the system started with the formation of an Ovulatory Disorders Steering Committee (ODSC) comprising members of FIGO's Committee on Menstrual Disorders (MDC) (now the Committee on Menstrual Disorders and Related Health Impacts, or MDRHI) and Committee on Reproductive Medicine, Endocrinology, and Infertility. The involvement of the MDRHI reflects the common and important impact of ovulatory disorders on the menstrual bleeding experience, an entity referred to as AUB‐O in FIGO System 2 (see below).

BACKGROUND AND RATIONALE
2.1 Defining ovulatory disorders
In the reproductive years—and in the absence of pregnancy, the process of lactation, or the use of pharmacological agents such as contraceptive steroids—the normal woman releases a mature oocyte from a Graafian follicle in a relatively predictable and cyclical fashion. However, a consensus definition of ovulatory disorders, sometimes called ovulatory dysfunction, has been lacking. The notion of anovulation, or absent ovulation, is but one manifestation; there exists a spectrum of chronic or episodic conditions or circumstances that also disrupt the predictable and cyclical ovulatory process. Previously, infrequent ovulation has been termed "oligo‐ovulation," which typically, but not always, manifests with some combination of infrequent and irregular onset of menstruation as defined in FIGO AUB System 1 (FIGO discontinued the term oligomenorrhea).
However, and recognizing that many women with ovulatory disorders may have normal‐length menstrual cycles, no clear definition of infrequent ovulation has been adopted, and this was not addressed in the joint "Committee Opinion" on Infertility Workup for the Women's Health Specialist produced by the American College of Obstetricians and Gynecologists and the American Fertility Society. Furthermore, while an occasional failure to ovulate is expected and may not contribute to infertility, it may well cause an episode of delayed onset of menses and even HMB. This circumstance begs the inclusion of intermittent anovulation in a broad‐based, all‐encompassing definition of ovarian dysfunction. An additional consideration is other aberrations in ovulatory function, such as the luteinized unruptured follicle (LUF) , and the luteal out of phase (LOOP) events that represent, respectively, mechanical failure to release the mature oocyte and the premature recruitment of follicles in the luteal phase, each of which could be candidates for inclusion in the definition of ovulatory dysfunction. As a result of these considerations, it is apparent that there is an unmet need for both a revised definition of ovulatory disorders and a consensus classification system designed to guide research, education, and clinical care across disciplines.

2.2 Existing "system" and its value and limitations
The original WHO classification presented three types of ovulatory dysfunction. Group I included "women with amenorrhea and with little or no evidence of endogenous estrogen activity, including patients with (a) hypogonadotrophic ovarian failure, (b) complete or partial hypopituitarism, or (c) pituitary‐hypothalamic dysfunction." Group II was described as "Women with a variety of menstrual cycle disturbances (including amenorrhea) who exhibit distinct estrogen activity (urinary estrogens usually <10 mcg/24 h), whose urinary and serum gonadotrophins are in the normal range and fluctuating, and who may also have fairly regular spontaneous menstrual bleeds (i.e. 24–38 days apart) but without ovulation." Group III was described as "Females with primary ovarian failure (sic, now known as primary ovarian insufficiency; POI) associated with low endogenous estrogen activity and pathologically elevated serum and urinary gonadotrophins." This classification illustrates the now‐outdated assay methodology of the time. A second monograph was published in 1976, which presented an algorithm based upon whether the serum prolactin concentration was elevated or normal, the response to a progestagen challenge test to assess estrogenization, and whether the serum follicle‐stimulating hormone (FSH) concentration was elevated or normal. The results of these assays were to be used to define seven groups:
Group I: Hypothalamic pituitary failure
Group II: Hypothalamic pituitary dysfunction
Group III: Ovarian failure
Group IV: Congenital or acquired genital tract disorders
Group V: Hyperprolactinemia, with a space‐occupying lesion
Group VI: Hyperprolactinemia, with no detectable space‐occupying lesion
Group VII: Non‐functioning hypothalamic/pituitary tumors
Over the last 40 years, numerous descriptions of the WHO classification have appeared in various monographs and book chapters in textbooks on gynecology, infertility, and reproductive endocrinology. Multiple authors have modified the classification without any evidence of further scientific discussion or consensus development.
Interestingly, the UK NICE Guidelines on the investigation and management of infertility, first published in 2004, describe three groups with reference to the WHO Manual for the Standardized Investigation and Diagnosis of the Infertile Couple , published in 1993. Yet this WHO manual does not contain any classification of ovulatory disorders. Nonetheless, the NICE classification encompasses the three groups that most authors refer to currently, namely:
Group I: Low gonadotropins and estradiol
Group II: "Gonadotropin disorder" and normal estradiol
Group III: High gonadotropins and low estradiol
In this classification, Group I essentially refers to hypogonadotropic hypogonadism and pituitary insufficiency but also includes hyperprolactinemia. Group II is often referred to as "hypothalamic/pituitary dysfunction," and most consider this group to primarily comprise women with polycystic ovary syndrome (PCOS), while Group III is consistently primary ovarian insufficiency (POI). However, it is essential to appreciate that hormone levels do not obey clear rules. For example, in those with hypothalamic amenorrhea who are underweight, levels of serum luteinizing hormone (LH) are usually suppressed, while levels of FSH are often in the normal range. , In addition, women with PCOS often have levels of FSH and LH in the normal range. Furthermore, anovulation is only one extreme of ovulatory dysfunction that includes a spectrum of manifestations that range from isolated episodes to chronic ovulatory failure. Since the first iterations of the WHO classification, there have been significant advances in understanding the control of ovulation and the pathophysiology of ovulatory disorders, together with improvements in assay technology and genomics. Consequently, there exists a need for a more comprehensive and updated classification.

2.3 The FIGO Systems for Abnormal Uterine Bleeding (AUB) in the Reproductive Years
In 2011, and again in 2018, FIGO published its two systems for describing nongestational AUB in the reproductive years, including System 2, the classification system known as "PALM‐COEIN" that categorizes causes of AUB in non‐gravid women of reproductive age, including those with ovulatory disorders (AUB‐O). These systems were developed and designed using a rigorous Delphi process, with the participants including international experts and representation from multiple and diverse stakeholder organizations, including national and subspecialty societies and journals and the US Food and Drug Administration. The overall process also included an examination of the available population databases dealing with menstruation that resulted in new, evidence‐based definitions for normal and abnormal menstrual metrics that are now known as the FIGO AUB System 1. , , The process has been iterative, with periodic revisions of systems that reside in what is described as a "living document." The whole process has been underpinned and continues to be supported by FIGO and the FIGO Committee on Menstrual Disorders (MDC), which, since 2022, has been known as the Committee on Menstrual Disorders and Related Health Impacts. FIGO AUB System 1 describes non‐gestational normal and AUB in the reproductive years and addresses the features of menstruation, that is, frequency, regularity, duration, and perceived volume of menstrual blood loss, in addition to the presence of bleeding between periods (intermenstrual bleeding) as well as unscheduled bleeding associated with the use of gonadal steroids for contraception.
The latter is now encompassed by the increasingly used term "contraceptive‐induced menstrual bleeding changes" (CiMBC). Notably, System 1 is currently based upon data from studies of women aged 18–45 years, as evidence from adolescent girls and women in the late reproductive years is less well defined. The second system, FIGO AUB System 2, describes potential causes of or contributors to symptoms of AUB that are categorized in System 1. The nine categories, arranged according to the acronym PALM‐COEIN, are as follows:
Polyp (AUB‐P);
Adenomyosis (AUB‐A);
Leiomyoma (AUB‐L);
Malignancy and hyperplasia (AUB‐M);
Coagulopathy (AUB‐C);
Ovulatory dysfunction (AUB‐O);
Endometrial disorders (AUB‐E);
Iatrogenic (AUB‐I); and
Not otherwise classified (AUB‐N).
For the present context, ovulatory disorders (AUB‐O) incorporate disturbances in normal ovulatory function ranging from irregular to infrequent to absent ovulation. To date, in the context of management of patients with AUB, the diagnosis of ovulatory disorders has been based mainly on a detailed menstrual history to meet the parameters that comprise FIGO System 1. In the 2018 revisions of the two FIGO systems, the recommendation was made that treatments that may interfere with the H‐P‐O axis and are associated with AUB be placed within the "AUB‐I" category. The rationale and methodology for developing a sub‐classification system for AUB‐O are now presented.
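Before turning to the methodology, and purely as an illustration of how the nine PALM‐COEIN categories might be represented in software (for example, in a research database), the sketch below defines them as a Python enumeration; the class and member names are hypothetical and are not part of the FIGO system itself.

from enum import Enum

# Illustrative only: the nine FIGO AUB System 2 (PALM-COEIN) categories as an
# enumeration. The class and member names are hypothetical, not FIGO terminology.
class AUBCause(Enum):
    POLYP = "AUB-P"
    ADENOMYOSIS = "AUB-A"
    LEIOMYOMA = "AUB-L"
    MALIGNANCY_AND_HYPERPLASIA = "AUB-M"
    COAGULOPATHY = "AUB-C"
    OVULATORY_DYSFUNCTION = "AUB-O"
    ENDOMETRIAL_DISORDERS = "AUB-E"
    IATROGENIC = "AUB-I"
    NOT_OTHERWISE_CLASSIFIED = "AUB-N"

# A patient record could then carry one or more contributing categories:
contributing_causes = {AUBCause.OVULATORY_DYSFUNCTION, AUBCause.LEIOMYOMA}
print(sorted(cause.value for cause in contributing_causes))  # ['AUB-L', 'AUB-O']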
METHODOLOGY
The approach selected was based on RAND Delphi methodology, extensively used for consensus development processes, including classification systems for medical conditions. The two FIGO systems for AUB in the reproductive years and the sub‐classification systems for leiomyomas (AUB‐L) and adenomyosis (AUB‐A), now undergoing validation, have all been developed using a version of this process. , , The project was submitted to and approved by the FIGO Executive, and FIGO's Education Communication and Advocacy Consortium (ECAC) approved the results before submission of the manuscript.

3.1 Ovulatory Disorders Steering Committee
The first step was to form an Ovulatory Disorders Steering Committee (ODSC) comprising members of FIGO's MDC (now MDRHI) and Committee on Reproductive Medicine, Endocrinology, and Infertility. The chairs of each of these committees collaborated to form the ODSC by identifying eight members from their committees, adding an external member who had a leadership position in the Global PCOS Alliance. The resulting nine‐member committee had diverse reach and comprised one member from each of the continents of Africa, Asia, and North America, and two from each of the European Union, the United Kingdom, and South America. The ODSC met at regular intervals between June and December 2020 to identify and engage stakeholders and to develop and test the consensus process. The scope of the ODSC also included review and analysis of the results of the various rounds and the design and testing of subsequent Delphi rounds.

3.2 Stakeholder and participant identification
The first task of the ODSC was to identify and engage the appropriate stakeholders necessary for the Delphi process. The chosen categories included the following:
1. National obstetrical and gynecological societies
2. Subspecialty societies representing reproductive endocrinologists
3. Specialty (obstetrics and gynecology) and subspecialty (reproductive endocrinology and infertility) journals
4. Recognized experts in ovulatory disorders not participating in categories 1–3
5. Lay organizations interested in infertility, AUB, or PCOS
Descriptive letters were created and customized for the various categories, describing the rationale for the process and a synopsis of the methodology. Via the FIGO record of member countries, each of the national obstetrical and gynecological societies was contacted and invited by email to support the process by naming a representative. The ODSC identified the spectrum of subspecialty societies on the six continents and contacted their leadership to explain the process and solicit support. The descriptive letter was sent electronically to both the society headquarters and the identified participant. A similar process involved the editorial offices of relevant specialty and subspecialty journals.
The ODSC then identified recognized experts based on a combination of personal knowledge of the field and a search of the literature, subtracting those identified by national societies, subspecialty societies, or journals for representation. Finally, the ODSC sought to identify lay organizations that could represent women and adolescent girls who may have ovulatory disorders. These groups were generally contacted directly, and if there was interest and an indication of commitment, a lay‐based version of the letter was sent.

3.3 The Delphi consensus process
3.3.1 | Background and scoring system
The Delphi process was developed by the RAND Corporation as a method for determining multi‐stakeholder expert consensus in a semi‐anonymous fashion that minimizes the impact of interpersonal issues on the outcome. Originally designed to forecast the impact of technology on warfare, it has subsequently been utilized across a number of disciplines, including health care. Versions of the Delphi process were used previously in the development of the FIGO AUB systems , , and are generally similar to the original RAND system, comprising a series of survey rounds designed to be administered in a web‐based or live environment with electronic scoring. Members of the ODSC did not participate in the Delphi process as participants. The scoring system has nine levels (1–9), with "1" being the most substantial disagreement with a statement, "9" the strongest agreement, and "5" representing neutrality. Scores in the top tertile (7, 8, and 9) indicated "agreement" with a statement, while those in the bottom tertile (1, 2, and 3) were indications of disagreement. As a result, the remaining scores (4, 5, and 6) comprised the "neutral" category, with "4" leaning to disagreement and "6" leaning to agreement. The minimum requirement for consensus agreement was a mean score of at least 7 (scores of 6.5–6.9 were rounded to 7), with no more than 15% of scores in the disagreement category. Conversely, "disagreement" was defined as a mean score of 3 or less (scores of 3.1–3.4 were rounded to 3), with no more than 15% in the agreement category. For each statement or question in a survey, there is a field to allow for free‐text comments by the participants.

3.3.2 | Participant orientation meeting
Before distributing the first round of surveys, two orientation meetings for the participants were held to ensure that the appropriate contact information was in the study database and systems and that all understood the survey mechanisms. The two meetings were held on the Zoom platform (Zoom Video Communications Inc, San Jose, CA, USA), with dates and times selected to facilitate flexibility for the diverse group of participants, particularly considering the spectrum of world time zones involved. Included in the messaging of this meeting was the understanding that Delphi participant answers would remain confidential and that all distributions would be anonymized. Demonstrations of the functionality of the system were provided. A session was recorded and uploaded to an accessible server for individuals who could not attend either of the live, web‐based meetings and to provide a resource for all participants who wished to review the instructions on their own time. It is to be noted that the lay component of the process was planned to occur after the medical stakeholders had developed a draft system.

3.3.3 | Conduct of the first round
The first round of the Delphi process was designed to identify the participants' age, gender, location, expertise, and constituency and to evaluate general opinions, the latter using statements intended to elicit an "agree" or "disagree" response. These statements were crafted in a fashion that invited and measured opinions regarding the clinical relevance of ovulatory disorders, the need for a well‐designed classification system, and the broad categories that should be included if such a system was to be designed. The draft set of questions was created by the Chair of the ODSC, reviewed by the committee members in meetings using the Zoom platform, and then tested on the web‐based survey instrument SurveyMonkey (Momentive, San Mateo, CA, USA). The final version of the first round was distributed to the stakeholders via their identified email addresses within the web‐based survey system. The ODSC Chair, who also functioned as the Facilitator, kept track of responses and sent out reminder emails at intervals of 7–10 days until there were no additional responses. The data were then exported to an Excel (Microsoft Corp, Everett, WA, USA) workbook comprising spreadsheets containing the survey template, which automatically calculated means and the percentage of answers in the agree (7–9), neutral (4–6), and disagree (1–3) categories. The free‐text comments made by the participants were also included in the spreadsheet. The ODSC reviewed these data as a prelude to the design of the second round. The aggregate anonymized results were sent to each participant along with a copy of their own responses for comparative purposes.

3.3.4 | Conduct of the second round
The second‐round survey was constructed, in part, based upon the first‐round results. Some "neutral" responses that had marginal scores close to 3 or 7, or that were defined principally by outliers, were reviewed in particular because, in such circumstances, it was possible that rewording a question or providing appropriately representative evidence would result in a change in a participant's opinion. It was also possible that "re‐asking" the question in the context of individual participant understanding of the group response might result in changes in individual responses. This information allowed the ODSC to construct a second survey round that eliminated items with defined agreement or disagreement but included reworded statements and new statements seeking to refine and expand the criteria that the participants thought necessary. The distribution of the second‐round survey was confined to those participating in and responding to the first round. The web‐based system, distribution, and follow‐up reminder technique were again employed. The data were retrieved, exported into the same Excel workbook with worksheet templates, and analyzed by the ODSC. Similarly, the participants received an anonymized summary of the participant responses to each of the items and a copy of their own answers for comparison. At this point, the committee had enough information to design a draft system that addressed and included the elements identified in the first two Delphi rounds. This was conducted iteratively until a draft acceptable to all ODSC members was created.

3.3.5 | Conduct of the third round
As a prelude to the live stakeholder meeting, a short clarifying third round was created, tested, distributed, and the results analyzed by the ODSC, conducted in a fashion similar to that of the first two rounds.
Included in this round was a version of the draft system with solicitation of preliminary opinions from the participants. As was the case for the first two rounds, each participant was provided an anonymized copy of the results of the previous round and a copy of their responses, all for review before the live participant meeting.

3.3.6 | Participant meeting
All medical participants and the ODSC were invited to participate in the stakeholder meeting held live on the Zoom platform. Here, the overall results of the survey rounds were presented, including those items where consensus one way or the other had not been reached. The draft system was also reviewed. An open discussion was invited, and preliminary polls were taken using the system available on the Zoom platform.

3.3.7 | Post‐meeting and fourth survey round
The ODSC undertook the post‐meeting analysis. Subsequently, a short fourth‐round poll was conducted to reach a consensus on the remaining elements and include individuals who could not participate in the live meeting.

3.3.8 | Lay round
The lay round was designed to query the lay representatives, both for their perception of a need for a classification system and their opinions of the system developed by the expert and representative participants. A separate survey was designed that included some of the items in the medical participant rounds but presented in a fashion accessible by a lay audience. There was a focus on their opinions of clarity and utility in the context of discussion and counseling involving healthcare practitioners and patients. The draft lay‐round elements were reviewed and revised by the ODSC, uploaded to the SurveyMonkey platform, tested, and then distributed to the participants in a fashion similar to that used for the medical participant rounds. The results were reviewed and analyzed by the ODSC, who considered these opinions in revising the system and constructing the manuscript and the design of materials for the lay audience.
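The scoring rules described in the Background and scoring system section above, and the per‐item tabulation used in the first‐round analysis, lend themselves to a simple computation. The following is a minimal illustrative sketch only, written in Python for this article (the ODSC's actual analysis used an Excel workbook, as described above); it tabulates one item's 1–9 scores into the tertile categories and checks them against the stated consensus thresholds.

from statistics import mean

def classify_item(scores: list[int]) -> str:
    """Classify one Delphi item from its 1-9 participant scores.

    Consensus agreement   : mean >= 6.5 (rounds to 7) and <= 15% of scores in 1-3.
    Consensus disagreement: mean <= 3.4 (rounds to 3) and <= 15% of scores in 7-9.
    Anything else is treated here as no consensus.
    """
    m = mean(scores)
    pct_agree = 100 * sum(1 for s in scores if s >= 7) / len(scores)     # top tertile (7-9)
    pct_disagree = 100 * sum(1 for s in scores if s <= 3) / len(scores)  # bottom tertile (1-3)
    if m >= 6.5 and pct_disagree <= 15:
        return "consensus agreement"
    if m <= 3.4 and pct_agree <= 15:
        return "consensus disagreement"
    return "no consensus"

# Hypothetical scores from 20 participants for one statement:
example_scores = [9, 8, 8, 7, 7, 7, 9, 8, 6, 7, 8, 9, 7, 5, 8, 7, 9, 8, 7, 2]
print(classify_item(example_scores))  # mean 7.3, 5% in bottom tertile -> "consensus agreement"

The same tertile percentages (agree 7–9, neutral 4–6, disagree 1–3) correspond to the per‐item means and category percentages that the Excel workbook computed in the first‐round analysis.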
The chosen categories included the following: National obstetrical and gynecological societies Subspecialty societies representing reproductive endocrinologists Specialty (obstetrics and gynecology) and subspecialty (reproductive endocrinology and infertility) journals Recognized experts in ovulatory disorders not participating in categories 1–3 Lay organizations interested in infertility, AUB, or PCOS Descriptive letters were created and customized for the various categories describing the rationale for the process and a synopsis of the methodology. Via the FIGO record of member countries, each of the national obstetrical and gynecological societies was contacted and invited by email to support the process by naming a representative. The ODSC identified the spectrum of subspecialty societies on the six continents and contacted leadership to explain the process and solicit support. The descriptive letter was sent electronically to both the society headquarters and the identified participant. A similar process involved the editorial offices of relevant specialty and subspecialty journals. The ODSC then identified recognized experts based on a combination of personal knowledge of the field and a search of the literature, subtracting those identified by national societies, subspecialty societies, or journals for representation. Finally, the ODSC sought to identify lay organizations that could represent women and adolescent girls who may have ovulatory disorders. These groups were generally contacted directly, and if there was interest and an indication of commitment, a lay‐based version of the letter was sent. The Delphi consensus process 3.3.1 | Background and scoring system The Delphi process was developed by the RAND Corporation as a method for determining multi‐stakeholder expert consensus in a semi‐anonymous fashion that minimizes the impact of interpersonal issues on the outcome. Originally designed to forecast the impact of technology on warfare, it has subsequently been utilized across a number of disciplines including health care. Versions of the Delphi Process were used previously in the development of the FIGO AUB systems , , and are generally similar to the original RAND system comprising a series of survey rounds designed to be administered in a web‐based or live environment with electronic scoring. Members of the ODSC did not participate in the Delphi process as participants. The scoring system has nine levels (1–9), with “1” being the most substantial disagreement with a statement, “9” the strongest agreement, and “5” representing neutrality. Scores in the top tertile (7, 8, and 9) indicated “agreement” with a statement, while those in the bottom tertile (1, 2, and 3) were indications of disagreement. As a result, the remaining scores (4, 5, and 6) comprised the “neutral” category, with “4” leaning to disagreement and “6” leaning to agreement. The minimum requirement for consensus agreement was a mean score of at least 7 (scores of 6.5–6.9 were rounded to 7), with no more than 15% in the disagreement category. Conversely, “disagreement” was defined as a mean score of 3 or less (scores of 3.1–3.4 were rounded to 3), with no more than 15% in the agreement category. For each statement or question in a survey, there is a field to allow for free‐text comments by the participants. 
RESULTS

4.1 Medical expert participants

A total of 88 invitations were sent to the responding national gynecological and obstetrical societies, experts at large, and the delegated representatives of journals and subspecialty societies. Ultimately, 46 individuals from all six continents responded and participated in the first Delphi round; approximately half were from Europe (Figure ), with age and gender distribution demonstrated in Figure . Of these, 28 (61%) were men and 18 (39%) were women. Over half of the participants (59%) were national society representatives, and 19% were experts at large (Figure ). Participants were asked about their principal role, and 72% responded “clinical care,” with the rest distributed across clinical research, teaching, and epidemiology. The secondary roles included clinical research, reported by 36%, and education by 24%, with some reporting bench research, administrative duties, and editorial responsibilities (Figure ).

4.2 Results of rounds 1–3

The results from rounds 1, 2, and 3 are shown in Tables , , and , respectively. In round 1, of 37 items, there was consensus on all but five. There was general support for the stated definition of ovulatory disorders and the rationale for a consensus classification system to support research, teaching, and clinical care. Respondents neither supported nor disagreed with the statement “The WHO classification system, in its current form, would meet the needs for a contemporary classification system for ovulatory disorders.” There was broad support for a spectrum of potential causes of ovulatory disorders except for idiopathic mechanisms and LOOP cycles. 9 The ODSC took these results and developed and tested the second Delphi round before distributing it to the 46 respondents in the first round. There were 41 respondents with the results of the 22 items shown in Table . The results of the second round suggested that there would be support for an anatomically based system (hypothalamus, pituitary, ovarian) with a separate category for PCOS. There was general support for this concept, with a mean score of 7.1. The survey also explored the notion of distinguishing chronic from isolated or intermittent ovulatory disorders, and this concept received consensus support with a mean score of 7.5 with no respondent disagreeing. Importantly, no consensus was reached on the question of using the Rotterdam Criteria to define PCOS, as 22.0% were in disagreement despite a mean overall score of 6.7.
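The Rotterdam Criteria item illustrates how the two-part consensus rule behaves when applied to the summary statistics: the mean of 6.7 rounds to 7 and so satisfies the mean criterion, but the 22.0% disagreement exceeds the 15% ceiling, so the item fails. A minimal check of that arithmetic, assuming the thresholds quoted in the methods (the helper below is illustrative and not part of the study's analysis):

```python
def meets_consensus_agreement(mean_score, pct_disagree):
    # Mean of at least 7 after rounding (i.e. >= 6.5) AND no more than 15% disagreement.
    return mean_score >= 6.5 and pct_disagree <= 15.0

# Rotterdam Criteria item, second round: mean 6.7, 22.0% disagreement.
print(meets_consensus_agreement(6.7, 22.0))  # False -> no consensus
```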
The second round was also designed to clarify some items from the first round and to identify more granular concepts relating to the pathogenesis of ovulatory disorders. There was a lack of consensus regarding the role of ovarian neoplasms, bacterial and viral infections, and the concept of infectious or inflammatory causes in general. There was also no consensus on the role of an absent surge of LH and LOOP events. While “menopause” as an etiology had a mean score otherwise sufficient to indicate agreement, 15% of the respondents disagreed, thereby preventing the attainment of consensus. With these data, the ODSC devised a draft system based upon anatomy that included a separate component for PCOS. Before distributing the draft system to the participants, and as a prelude to the live virtual meeting of the participants in the Delphi process, a five‐item third round was developed, tested, and distributed. Included in the distribution to the participants was evidence describing and evaluating LOOP events and the potential role of ovarian neoplasms and infectious or inflammatory disorders in the pathogenesis of ovulatory dysfunction. Related items were modified, and the results from the 38 respondents are displayed in Table . There was now consensus support for the inclusion of menopause and LOOP events, but a lack of agreement on the role of ovarian neoplasms and infectious or other inflammatory disorders in the genesis of ovulatory dysfunction.

4.3 Live meeting

For the live meeting, the ODSC distributed the draft system and an Excel workbook comprising a summary of the results of the three rounds and how the consensus agreements attained were integrated into the design. The live meeting was conducted on August 25, 2021, using the Zoom video platform. The meeting agenda included a review of the rationale for the process and the results of the three Delphi rounds, summarizing areas of agreement and focusing on the few places where consensus had not been reached. Only 22 of the respondents were able to attend, so a formal survey could not be conducted during the meeting itself. Still, there was a strong indication of support for the system based upon an in‐meeting electronic poll. The formal process was the subject of the fourth round.

4.4 Results of round 4

For this round, the ODSC sought the participants' opinions on the draft system and tried to resolve some of the remaining items upon which there was a persisting lack of consensus. For this four‐item survey, there were 39 respondents, with the results displayed in Table . There was support for the presented system by 95% of the respondents (mean score 8.0), with disagreement of only 2.6%. The fourth round also saw agreement that there should be a category for ovarian neoplasms. Although more than 60% supported the notion of inflammatory or infectious mechanisms, these items failed to achieve the predetermined criteria for consensus. There were some valuable comments about the specific graphical depiction of the system that will be discussed subsequently in the context of the results of the lay round.

4.5 Results of the lay round

The lay round, as planned, was conducted following the deliberations of the experts and the society and journal representatives, and the development of the draft FIGO Ovulatory Disorders Classification System. The results of the 11‐item survey sent to 17 individuals can be seen in Table .
The first three items were designed to obtain demographic data; all 10 respondents were women representing organizations from Africa, Europe, and North America with an age distribution of 25–54 years. There was general agreement on the definition of ovulatory disorders and their potential role in the genesis of infertility. However, there was no consensus on the contribution of ovulatory disorders to symptoms of AUB. While there was agreement that girls and women often do not understand the causes of ovulatory disorders, there was uncertainty about whether those causes are similarly unclear to healthcare providers and other medical professionals. There was a clear consensus that a well‐conceived system of classifying ovulatory disorders would improve the design and interpretation of research and facilitate communication between patients and healthcare practitioners. However, the support for the draft system was mixed with a mean score of 4.9 and only 33% agreeing that the system was “understandable” and one that could provide “a platform upon which a lay audience” could “gain insight into the possible causes of ovulatory disorders.” The comments from the participants were illuminating (Table ) and, in some instances, mirrored comments from the other participants. Respecting these comments, the ODSC altered the graphical representation of the system without changing the content, placing the PCOS panel at the bottom, allowing for the use of the acronym “HyPO‐P.” In addition, a draft lay version of the major elements of the system was developed with lay language that was nonetheless compatible with the medical version (Supplementary Material). This draft was distributed to lay participants and their comments were generally incorporated into the text, and into modifications of the graphical content.
PROPOSED HyPO‐P SYSTEM

5.1 Rationale and development

The system was designed to align with the results of the Delphi process (see Supplementary Table ). There was support for a design that grouped the causes of ovulatory disorders anatomically, a logical extension of the former WHO classification but more precise and more accessible than one based primarily on hormone assays. It was, therefore, rational to design this classification system according to the levels of the H‐P‐O axis as reflected in the second Delphi round (Table , question 1). It was also considered essential to allow for the designation of any element that is known or suspected to alter the functionality of the organ in a fashion that could contribute to the genesis of ovulatory dysfunction, whether related to demonstrable histopathology, abnormal laboratory assays, iatrogenic mechanisms, or even functional disorders without measurable laboratory features.
However, it was recognized that an important cause of ovulatory disorders is PCOS since it affects 8%–13% of women of reproductive age. It is a complex and heterogeneous condition with comprehensive international guidelines for diagnosis, investigation, and management , , that cannot be confined to an ovarian origin. Therefore, it was determined that PCOS constitutes a class apart from the anatomical categorization, a notion that was supported in the second round of the Delphi process (Table , question 2). Accordingly, the proposed FIGO classification includes ovulatory disorders categorized into four groups as follows: Type I: Hypothalamic; Type II: Pituitary; Type III: Ovarian; and Type IV: PCOS (Figure ). The system can be referred to by the acronym “HyPO‐P,” where the “P” is separated from the other three categories recognizing that it does not reside in a single anatomic location. The new system provides practical utility and a second layer, or sub‐classification, for each of the three anatomically defined entities, including discrete pathophysiological categories. These can be remembered using the acronym “GAIN‐FIT‐PIE” (Figure ). A detailed description of every known or suspected cause of ovulatory dysfunction is beyond the scope of the present paper. Still, the new classification is presented with references to some of the many included conditions. Supplementary Table shows the linkages between various potential causes or categories of causes and the elements in the FIGO Ovulatory Disorders Classification System.
USE OF THE FIGO OVULATORY DISORDERS CLASSIFICATION SYSTEM

6.1 Clinical application

6.1.1 | Identifying individuals with ovulatory disorders

The new system is designed for clinicians, educators, and investigators, including those involved in basic, translational, clinical, and epidemiological research. Depending on the audience, educators may focus only on the four primary categories or add the detail afforded by the second GAIN‐FIT‐PIE stratification. To be categorized by the system, the individual or patient must be identified as having an ovulatory disorder. There are several potential clinical “entry points”, based on suspicion or knowledge of an ovulatory disorder, that range from delayed menarche, through infrequent or irregular menstruation, to presentation with primary or secondary infertility, hirsutism, or other features or findings associated with PCOS. The term “ovulatory disorder” is not synonymous with the term “anovulation.” Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory “cycle” manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contributes to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 consecutive months after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large‐scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed “frequent” while those whose cycle length is more than 38 days “infrequent,” a term designed to replace oligomenorrhea.
, , , , Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder. Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's “normal” cycle length may not be established until the sixth year after menarche. , , During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days , but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere. , However, it should be remembered that while common, and even “normal,” the individual's experience with this transition can be disruptive at a vulnerable time in their social, psychological, and physical development. A somewhat similar experience exists at the opposite end of the reproductive age spectrum, beyond the age of 45 years, as women enter what has been called the menopausal transition, where cycle length typically becomes more infrequent or irregular before culminating in amenorrhea as ovarian secretion of estradiol declines and ultimately ceases. However, this experience is perhaps even less orderly than that of the post‐menarcheal period, as there may be highly variable endocrine changes resulting in unpredictable impacts on menstrual function . Again, what is common, and often portrayed as “normal”, can be highly disruptive, particularly when coupled with other symptoms. Women who present with infertility may have accompanying menstrual symptoms typical of ovulatory disorders. However, women with cyclically normal onset of menstrual bleeding may not be ovulating, or at least not ovulating regularly, as the frequency of single‐cycle anovulation in the context of normal regular cycles is in the range of 3.7%–26.7%. , , Consequently, further evaluation beyond a detailed history will be necessary to identify those with ovulatory disorders. The optimal way to assess for ovulation and, by extension, confirm ovulatory disorders may vary according to the clinical circumstance. The menstrual history of regular, predictable cycles between 24 and 38 days remains a helpful tool, and reflects the overall experience better than evaluation of endocrine or imaging parameters from a single cycle does. While patients and clinicians have traditionally used measurement of basal body temperature, interpretation can be difficult, so this approach should be used with caution. , If available, ovulation predictor kits that measure the levels of luteinizing hormone in urine samples generally accurately reflect levels of serum luteinizing hormone and are a valuable tool for detecting ovulation in a given cycle. 
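Taken together, the frequency and regularity thresholds quoted above amount to a simple screening rule for the menstrual history. The sketch below applies them as stated for ages 18–45; it is illustrative only and not a clinical decision tool. The function name, the treatment of an empty history as amenorrhea, and the omission of the adolescent and perimenopausal caveats discussed in this section are all our own simplifications.

```python
def screen_cycle_history(age, cycle_lengths_days):
    """Flag menstrual patterns suggesting an ovulatory disorder, using the
    FIGO thresholds quoted above for ages 18-45 (illustrative only)."""
    findings = []

    if not cycle_lengths_days:
        # Simplification: no recorded bleeding is treated here as amenorrhea.
        return ["amenorrhea - evaluate for an ovulatory disorder"]

    shortest, longest = min(cycle_lengths_days), max(cycle_lengths_days)

    # Frequency: the normal onset-to-onset interval is 24-38 days.
    if longest > 38:
        findings.append("infrequent menstrual bleeding (cycle > 38 days)")
    if shortest < 24:
        findings.append("frequent menstrual bleeding (cycle < 24 days)")

    # Regularity: allowable shortest-to-longest variation is 9 days at
    # ages 18-25 and 42-45, and 7 days at ages 26-41.
    allowed = 9 if (18 <= age <= 25 or 42 <= age <= 45) else 7
    if longest - shortest > allowed:
        findings.append(f"irregular cycles (variation > {allowed} days)")

    if not findings:
        findings.append("frequency and regularity within the quoted ranges")
    return findings


# Example: a 30-year-old with cycles of 26, 41, and 29 days is flagged as
# both infrequent (41 > 38) and irregular (41 - 26 = 15 > 7).
print(screen_cycle_history(30, [26, 41, 29]))
```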
Simply measuring progesterone in the predicted luteal phase may provide satisfactory evidence supporting ovulatory function, particularly when the first day of the next menstrual period is known. Such an approach may be helpful in circumstances such as hirsutism, where the incidence of anovulation in women with cyclically predictable menstrual cycles is higher. There are other, less common ovulatory disorders that may require more complex evaluation to determine if they are present in a given individual. For example, identifying LUF cycles, somewhat common in infertile women, requires both confirmation of the LH surge and the performance of serial ultrasound to demonstrate failed rupture of the dominant follicle. It should be remembered that scrutiny of a single cycle may not reflect the overall experience for a given individual.

6.1.2 | Categorization in the FIGO Ovulatory Disorders Classification System

The new system recognizes three basic strata once an ovulatory disorder has been diagnosed. The first level is categorization by one of the four primary categories as follows: Type I: Hypothalamus; Type II: Pituitary; Type III: Ovary; and Type IV: PCOS. The second level requires assignment to the known or suspected anatomically based abnormality as directed by the GAIN‐FIT‐PIE acronym. The third or tertiary level identifies a specific entity causing or contributing to the ovulatory disorder. Categorizing into these levels requires that the clinician perform whatever investigations are deemed appropriate to localize the site and the presumed underlying mechanism contributing to ovulatory dysfunction. For example, the individual with infrequent and irregular menses, galactorrhea, elevated prolactin, and a magnetic resonance image demonstrating a pituitary tumor would be categorized as a type 2 – N (pituitary neoplasm). The same might be said about an individual with irregular and infrequent menstruation, mild hirsutism, and sonographic evidence of at least one symmetrically enlarged ovary (≥10 ml) or an ovary with more than 20 follicles without a dominant follicle or corpus luteum, a circumstance that dictates a type 4 – PCOS classification. The 20‐follicle threshold is applied only when the patient is examined with an endovaginal ultrasound transducer with a high frequency bandwidth of at least 8 MHz. , It is recognized that the precision in determining the anatomic location and the mechanism of pathogenesis is somewhat aspirational and will vary to a degree by the disorder and the resources available to the clinician. Further discussion of the detection, characterization, and management of ovulatory disorders is beyond the scope of the present study, which is designed to provide a structure for clinical care, investigation, and education.
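The three strata just described can be represented as a compact, machine-readable code, which may be convenient for research databases or teaching. The sketch below is purely illustrative: the class, its field names, and the label format are our own choices, and only the "N" (neoplasm) letter of the GAIN‐FIT‐PIE key, the one spelled out in the worked example above, is shown; the authoritative letter key is the one given in the system's figure.

```python
from dataclasses import dataclass
from typing import Optional

# Primary categories of the HyPO-P system, as listed in the text.
PRIMARY = {1: "Hypothalamus", 2: "Pituitary", 3: "Ovary", 4: "PCOS"}

@dataclass
class OvulatoryDisorderCode:
    primary: int                     # 1-4, keyed into PRIMARY
    mechanism: Optional[str] = None  # second-level GAIN-FIT-PIE letter, when known
    entity: Optional[str] = None     # third level: the specific causal entity

    def label(self) -> str:
        text = f"Type {self.primary} ({PRIMARY[self.primary]})"
        if self.mechanism:
            text += f" - {self.mechanism}"
        if self.entity:
            text += f": {self.entity}"
        return text

# The two worked examples from the preceding paragraph.
prolactinoma = OvulatoryDisorderCode(primary=2, mechanism="N", entity="pituitary neoplasm")
pcos = OvulatoryDisorderCode(primary=4)
print(prolactinoma.label())  # Type 2 (Pituitary) - N: pituitary neoplasm
print(pcos.label())          # Type 4 (PCOS)
```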
Several potential clinical “entry points” are based on suspicion or knowledge about the presence of an ovulatory disorder that range from delayed menarche to infrequent or irregular menstruation through to presentation with primary or secondary infertility or hirsutism or other features or findings associated with PCOS. The term “ovulatory disorder” is not synonymous with the term “anovulation.” Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory “cycle” manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contribute to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 months consecutively after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large‐scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed “frequent” while those whose cycle length is more than 38 days “infrequent,” a term designed to replace oligomenorrhea. , , , , Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder. Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's “normal” cycle length may not be established until the sixth year after menarche. , , During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days , but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere. 
, However, it should be remembered that while common, and even “normal,” the individual's experience with this transition can be disruptive at a vulnerable time in their social, psychological, and physical development. A somewhat similar experience exists at the opposite end of the reproductive age spectrum, beyond the age of 45 years, as women enter what has been called the menopausal transition, where cycle length typically becomes more infrequent or irregular before culminating in amenorrhea as ovarian secretion of estradiol declines and ultimately ceases. However, this experience is perhaps even less orderly than that of the post‐menarcheal period, as there may be highly variable endocrine changes resulting in unpredictable impacts on menstrual function . Again, what is common, and often portrayed as “normal”, can be highly disruptive, particularly when coupled with other symptoms. Women who present with infertility may have accompanying menstrual symptoms typical of ovulatory disorders. However, women with cyclically normal onset of menstrual bleeding may not be ovulating, or at least not ovulating regularly, as the frequency of single‐cycle anovulation in the context of normal regular cycles is in the range of 3.7%–26.7%. , , Consequently, further evaluation beyond a detailed history will be necessary to identify those with ovulatory disorders. The optimal way to assess for ovulation and, by extension, confirm ovulatory disorders may vary according to the clinical circumstance. The menstrual history of regular, predictable cycles between 24 and 38 days remains a helpful tool, and reflects the overall experience better than evaluation of endocrine or imaging parameters from a single cycle does. While patients and clinicians have traditionally used measurement of basal body temperature, interpretation can be difficult, so this approach should be used with caution. , If available, ovulation predictor kits that measure the levels of luteinizing hormone in urine samples generally accurately reflect levels of serum luteinizing hormone and are a valuable tool for detecting ovulation in a given cycle. Simply measuring progesterone in the predicted luteal phase may provide satisfactory evidence supporting ovulatory function, particularly when the first day of the next menstrual period is known. Such an approach may be helpful in circumstances such as hirsutism, where the incidence of anovulation in women with cyclically predictable menstrual cycles is higher. There are other, less common ovulatory disorders that may require more complex evaluation to determine if they are present in a given individual. For example, identifying LUF cycles, somewhat common in infertile women, requires both confirmation of the LH surge and the performance of serial ultrasound to demonstrate failed rupture of the dominant follicle. It should be remembered that scrutiny of a single cycle may not reflect the overall experience for a given individual. 6.1.2 | Categorization in the FIGO Ovulatory Disorders Classification System The new system recognizes three basic strata once an ovulatory disorder has been diagnosed. The first level is categorization by one of the four primary categories as follows: Type I: Hypothalamus; Type II: Pituitary; Type III: Ovary; and Type IV: PCOS. The second level requires assignment to the known or suspected anatomically based abnormality as directed by the GAIN‐FIT‐PIE acronym. The third or tertiary level identifies a specific entity causing or contributing to the ovulatory disorder. 
Categorizing into these levels requires that the clinician perform whatever investigations deemed appropriate to localize the site and the presumed underlying mechanism contributing to ovulatory dysfunction. For example, the individual with infrequent and irregular menses, galactorrhea, elevated prolactin, and a magnetic resonance image demonstrating a pituitary tumor would categorize as a type 2 – N (pituitary neoplasm). The same might be said about an individual with irregular and infrequent menstruation, mild hirsutism, and sonographic evidence of at least one symmetrically enlarged ovary (≥10 ml) or an ovary with more than 20 follicles without a dominant follicle or corpus luteum, a circumstance that dictates a type 4 – PCOS classification. Use of the 20‐follicle threshold is utilized only when the patient is examined with an endovaginal ultrasound transducer with a high frequency bandwidth of at least 8 MHz. , It is recognized that the precision in determining the anatomic location and the mechanism of pathogenesis is somewhat aspirational and will vary to a degree by the disorder and the resources available to the clinician. Further discussion of the detection, characterization, and management of ovulatory disorders is beyond the spectrum of the present study, which is designed to provide a structure for clinical care, investigation, and education. The new system is designed for clinicians, educators, and investigators, including those involved in basic, translational, clinical, and epidemiological research. Depending on the audience, educators may focus only on the four primary categories or add the detail afforded by the second GAIN‐FIT‐PIE stratification. To be categorized by the system, the individual or patient must be identified as having an ovulatory disorder. Several potential clinical “entry points” are based on suspicion or knowledge about the presence of an ovulatory disorder that range from delayed menarche to infrequent or irregular menstruation through to presentation with primary or secondary infertility or hirsutism or other features or findings associated with PCOS. The term “ovulatory disorder” is not synonymous with the term “anovulation.” Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory “cycle” manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contribute to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. 
Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 months consecutively after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large‐scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed “frequent” while those whose cycle length is more than 38 days “infrequent,” a term designed to replace oligomenorrhea. , , , , Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder. Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's “normal” cycle length may not be established until the sixth year after menarche. , , During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days , but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere. , However, it should be remembered that while common, and even “normal,” the individual's experience with this transition can be disruptive at a vulnerable time in their social, psychological, and physical development. A somewhat similar experience exists at the opposite end of the reproductive age spectrum, beyond the age of 45 years, as women enter what has been called the menopausal transition, where cycle length typically becomes more infrequent or irregular before culminating in amenorrhea as ovarian secretion of estradiol declines and ultimately ceases. However, this experience is perhaps even less orderly than that of the post‐menarcheal period, as there may be highly variable endocrine changes resulting in unpredictable impacts on menstrual function . Again, what is common, and often portrayed as “normal”, can be highly disruptive, particularly when coupled with other symptoms. Women who present with infertility may have accompanying menstrual symptoms typical of ovulatory disorders. However, women with cyclically normal onset of menstrual bleeding may not be ovulating, or at least not ovulating regularly, as the frequency of single‐cycle anovulation in the context of normal regular cycles is in the range of 3.7%–26.7%. , , Consequently, further evaluation beyond a detailed history will be necessary to identify those with ovulatory disorders. The optimal way to assess for ovulation and, by extension, confirm ovulatory disorders may vary according to the clinical circumstance. 
The menstrual history of regular, predictable cycles between 24 and 38 days remains a helpful tool, and reflects the overall experience better than evaluation of endocrine or imaging parameters from a single cycle does. While patients and clinicians have traditionally used measurement of basal body temperature, interpretation can be difficult, so this approach should be used with caution. , If available, ovulation predictor kits that measure the levels of luteinizing hormone in urine samples generally accurately reflect levels of serum luteinizing hormone and are a valuable tool for detecting ovulation in a given cycle. Simply measuring progesterone in the predicted luteal phase may provide satisfactory evidence supporting ovulatory function, particularly when the first day of the next menstrual period is known. Such an approach may be helpful in circumstances such as hirsutism, where the incidence of anovulation in women with cyclically predictable menstrual cycles is higher. There are other, less common ovulatory disorders that may require more complex evaluation to determine if they are present in a given individual. For example, identifying LUF cycles, somewhat common in infertile women, requires both confirmation of the LH surge and the performance of serial ultrasound to demonstrate failed rupture of the dominant follicle. It should be remembered that scrutiny of a single cycle may not reflect the overall experience for a given individual. FIGO Ovulatory Disorders Classification System The new system recognizes three basic strata once an ovulatory disorder has been diagnosed. The first level is categorization by one of the four primary categories as follows: Type I: Hypothalamus; Type II: Pituitary; Type III: Ovary; and Type IV: PCOS. The second level requires assignment to the known or suspected anatomically based abnormality as directed by the GAIN‐FIT‐PIE acronym. The third or tertiary level identifies a specific entity causing or contributing to the ovulatory disorder.
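To make the three strata concrete, the sketch below represents a single classification as a small data record, using the worked example given earlier in this section (a pituitary neoplasm categorized as type 2 – N). This is an illustration added here, not part of the published system; the class and field names are hypothetical.

# Illustrative sketch only: a minimal record for the three strata described above.
# The four primary category labels are taken from the text; everything else is hypothetical.
from dataclasses import dataclass

PRIMARY_CATEGORIES = {1: "Hypothalamus", 2: "Pituitary", 3: "Ovary", 4: "PCOS"}

@dataclass
class OvulatoryDisorderClassification:
    primary_type: int                      # first stratum, 1-4
    gain_fit_pie_code: str | None = None   # second stratum, e.g. "N" for neoplasm
    specific_entity: str | None = None     # third stratum, the specific causal entity

    def label(self) -> str:
        parts = [f"Type {self.primary_type} ({PRIMARY_CATEGORIES[self.primary_type]})"]
        if self.gain_fit_pie_code:
            parts.append(self.gain_fit_pie_code)
        if self.specific_entity:
            parts.append(self.specific_entity)
        return " - ".join(parts)

# The worked example from the text: a pituitary neoplasm categorizes as type 2 - N
print(OvulatoryDisorderClassification(2, "N", "pituitary neoplasm").label())
# PCOS occupies its own primary category
print(OvulatoryDisorderClassification(4).label())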
DISCUSSION AND CONCLUSION The FIGO HyPO‐P system for the classification of ovulatory disorders is submitted for consideration as a worldwide standard designed to harmonize definitions and categories in a fashion that should inform clinical care, facilitate the education of patients and trainees, and improve the ability of basic, translational, clinical, and epidemiologic research to advance our knowledge of ovulatory disorders, their diagnosis, and their management. The development has the general support of a broad spectrum of national and subspecialty societies, relevant journals, and recognized experts in the realm of ovulatory dysfunction. The lay participants agreed with the need for classification. Their comments helped refine the graphical representation and supported the rationale for a lay‐oriented explanation of ovulatory disorders presented in the context of the new system. Finally, no system should be considered permanent, so review and careful modification and revision should be carried out regularly. MGM: Chair of the Ovulatory Disorders Steering Committee (ODSC); responsible for the concept, design and management of the Delphi system; management of ODSC and stakeholder meetings, compiling and analysis of data, manuscript preparation. AHB: At large member of the ODSC; helped lead design and management of the Delphi process; analysis of data; responsible for converting results into the design of the system; manuscript preparation. SHC: Member of the ODSC; participated in the Delphi design and identification of stakeholders, and manuscript preparation. HODC: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. ID: Co‐chair of the ODSC; participated in the Delphi design and identification of stakeholders, assisted with manuscript preparation. RF: Member of the ODSC; participated in the Delphi design and identification of stakeholders and assisted with manuscript preparation. LH: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. EM: Member of the ODSC; participated in the Delphi design and identification of stakeholders, and manuscript preparation. ZVDS: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. MGM reports grant funding from AbbVie and Pharmacosmos; consulting fees from Abbvie, Myovant, American Regent, Daiichi Sankyo, Hologic Inc and Pharmacosmos as well as royalty payments from UpToDate. He serves a voluntary role as Chair of the SEUD AUB Task Force, the Past Chair of FIGO's committee on Menstrual Disorders and Related Health Impacts, and Founding and Current Chair of the Women's Health Research Collaborative. AHB reports consulting fees from NovoNordisk and is a member of the WHO's Guideline Development on Infertility and a member of the International PCOS Guideline Group. He is a Trustee of the British Fertility Society and is a Director of Balance Reproductive Health Ltd and Balance Health Ltd. HODC is current Chair, FIGO Committee on Menstrual Disorders and Related Health Impacts. She has received clinical research support for laboratory consumables and staff from Bayer AG (paid to institution) and provides consultancy advice (all paid to institution) for Bayer AG, PregLem SA, Gedeon Richter, Vifor Pharma UK Ltd, AbbVie Inc; Myovant Sciences GmbH. 
HC has received royalties from UpToDate for articles on abnormal uterine bleeding. The rest of the authors have no conflicts of interest. None. Supplementary Table 1. Linking Delphi rounds to HyPO‐P components. Appendix S1: Supporting information
Pharmacogenetic actionability and medication prescribing in people with cystic fibrosis
7d1299c8-1d3d-42f8-a3af-c03261fba64b
10087076
Pharmacology[mh]
Cystic fibrosis (CF) is an autosomal recessive disease caused by genetic mutations in the CF transmembrane conductance regulator ( CFTR ) gene. CFTR dysfunction alters normal ion and fluid transport, resulting in multi‐organ damage, which must be managed with complex and time‐consuming medication regimens beginning in infancy and continuing throughout life. Two recent studies showed that treatment burden is higher in children with CF compared to children with diabetes or asthma, and an adult with CF may manage an average of seven (range 0–20) daily therapies. , Another recent study showed that the number of unique medications prescribed to children with CF as well as healthcare costs increased with age, whereas they decreased or stayed the same in a matched non‐CF group. The medication regimen for the primary disease state alone can be extensive for a patient and their caregivers to manage. Additional medications are prescribed for comorbidities, such as CF‐related diabetes, liver disease, pain, osteopenia, and mental health. , Some of these medications are associated with genetic variants that impact medication response or adverse effects, but most medications used by people with CF (PwCF) have not been extensively evaluated for association with variants. The use of individual genetic information to guide the selection and dosing of medications (pharmacogenetics [PGx]) can help reduce the burden of treatment for PwCF by providing actionable approaches for personalized medicine. Currently, the US Food and Drug Administration (FDA) lists ~320 medications that provide biomarker‐based PGx information on their respective drug labels, but only some hospitals in the United States offer PGx testing for clinical implementation. There is currently limited information available to guide PGx dosing specifically in the CF population. The Clinical Pharmacogenetics Implementation Consortium (CPIC) was created in 2009 as a collaboration between Pharmacogenomics Knowledge Base (PharmGKB) and the National Institutes of Health (NIH) to provide freely available, evidence‐based, peer‐reviewed, and updated PGx clinical practice guidelines. The CPIC database contains PGx guidelines which provide medication background, genetic test interpretation, therapeutic recommendations, potential risks and benefits to the patient, and caveats for appropriate use and/or potential misuse of genetic test results. As evidence supporting the implementation of PGx grows, the large burden of care for PwCF may be alleviated in this population by improving precision drug and dose selection and reducing medication‐related adverse effects and treatment failures. In this study, we describe the medication use in PwCF and the potential impact of PGx‐based interventions. Study participants Informed consent was obtained from all PwCF recruited to this study with approval from the Institutional Review Board (IRB) at the University of Alabama at Birmingham (UAB, IRB‐151030001 and IRB‐300001194). All participants with a diagnosis of CF in either the adult or pediatric center were eligible for inclusion. Clinical data collection and analysis Retrospective chart review was performed for patients who received clinical care at the UAB Medical Center and Children's of Alabama between 2015 and 2020. Two consecutive years of care for each patient of medication history were gathered from each participant's electronic medical record (EMR), including any supplements or over‐the‐counter medications that were documented in the medical record. 
During this 2‐year span the following data were collected: basic demographics (age, sex, race, etc.), CF‐related complications and/or comorbidities, CFTR genotype, medication allergy alerts, and medication details for every inpatient and outpatient encounter. All CF‐associated comorbidities that were active during the 2 years of chart review were recorded. Each medication and supplement documented during this period, including dosing changes, was recorded as an individual medication exposure event (MEE). MEEs were identified from clinical notes (inpatient and outpatient encounters), pharmacist medication reconciliation records, medication administration records, and other medication history logs within the EMR. Overlap between records was assessed to ensure counts did not include multiple instances of the same medication record. For inpatient hospital stays, medication administration records were reviewed to ensure pro re nata medications prescribed via standing orders were actually administered during the stay, actual administration of all other MEEs were not confirmed due to the retrospective nature of this study. Some MEEs were classified as undesirable drug responses (UDRs; Figure ), which were defined as any undesirable response to a medication that results in or may warrant a change in prescription (and may include need for titration). Each MEE was categorized by duration of therapy as either acute (<14 days), prolonged acute (≥14 days), or chronic (maintenance therapies, typically medications intended for use >6 months to indefinitely). Only a single event was recorded for chronic medications when they were first documented unless a change to their prescription was made during the 2‐year period. Medications categorized as prolonged acute or acute had a separately recorded event for each time they were prescribed over the 2 years. Unique MEEs exclude those counts resulting from dosing changes or those medications prescribed acutely multiple times over the 2 years. The UDRs were classified in the following ways: adverse drug reaction (ADR), subtherapeutic dose, supratherapeutic dose, therapeutic drug monitoring (TDM), or treatment failure. We adopted the FDA's accepted definition of ADRs (formerly described as “side effects”) for marketed medicinal products and define them as any response to a drug which is noxious and unintended occurring at normal dosing. , ADRs were determined from documentation in the medical record, such as clinical notes and include self‐reported ADRs as well as clinical observations. MEEs were characterized as subtherapeutic or supratherapeutic when the participant's observed clinical response required a dose increase or reduction, respectively. TDM events were defined as any medication prescribed where dosing is altered based on measured drug concentrations or biochemical laboratory values (e.g., serum creatinine, liver function tests, etc.). A separate event was recorded each time a TDM medication was prescribed, but not for dose adjustments during the same clinical encounter. TDM is classified as “undesirable” because repeated blood draws and/or dosing changes are associated with each event. Treatment failure was defined as the discontinuation of a medication or substitution for an alternative medication due to the inability to achieve adequate therapeutic effects. The 286 unique medications prescribed were classified into 16 drug classifications (Table ). Classifications were adapted from those defined by the American Hospital Formulary Service. 
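As an illustration of the duration categories defined above, the following Python sketch applies the same cut-offs to a single medication exposure event. It is not the study's actual code; the function name and example values are hypothetical.

# Illustrative sketch only: categorizing a medication exposure event (MEE) by
# duration of therapy using the cut-offs defined above.
def categorize_mee_duration(days_prescribed: int, is_maintenance: bool = False) -> str:
    """Acute <14 days, prolonged acute >=14 days, chronic for maintenance therapies."""
    if is_maintenance:
        return "chronic"
    return "acute" if days_prescribed < 14 else "prolonged acute"

print(categorize_mee_duration(10))                        # 'acute'
print(categorize_mee_duration(21))                        # 'prolonged acute'
print(categorize_mee_duration(365, is_maintenance=True))  # 'chronic'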
Annotation of these medications was accomplished by using the publicly available PharmGKB database. Very Important Pharmacogenes (VIPs) are identified by PharmGKB as genes that are important in the field of PGx. Drugs in this study labeled as having dosing guidelines are those annotated in PharmGKB to have clinical dosing guidelines published by a professional society, such as CPIC. PharmGKB defines annotations for no dosing guideline (NDG) medications as follows: variant annotations are associations between a genetic variant and a drug response for a single publication, whereas clinical annotations are a combination of all the annotated evidence to support a variant/drug relationship. The frequency of genetic variants in this population that are associated with clinical guidelines were selected for assessment utilizing the CPIC database. Sample processing Whole blood was collected from all participants and DNA was isolated from the buffy coats. Genotyping analysis was performed in UAB's Heflin Center for Genomics Sciences Core using the Illumina Global Screening Array (GSA; version 2.0 or 3.0, depending on their time of enrollment). The GSA microarray includes over 14,000 genome‐wide pharmacogenomic markers, which includes over 200 markers described in the CPIC guidelines. Genotyping analysis Raw genotyping data was assessed, and single nucleotide polymorphisms (SNPs) were called using Illumina's GenomeStudio Software 2.0. Manufacturer recommended quality control measures were performed and only samples with a call rate of greater than or equal to 99.0% using the software's GenCall algorithm were included in the study. Data was exported as PLINK files using the PLINK Input Report Plug‐in version 2.1.4. PLINK files were converted to variant call format (VCF) using the open‐source PLINK version 1.9 software. The multi‐sample VCF file was then processed using the UAB Galaxy platform ( www.galaxy.genome.uab.edu ) to remove all non‐PGx variants and to prepare files for annotation by the open‐source software, Pharmacogenomics Clinical Annotation Tool (PharmCAT). Files were filtered for only the SNP positions that are annotated by the PharmCAT software ( https://github.com/PharmGKB/PharmCAT/releases/download/v0.8.0/pharmcat_positions_0.8.0.vcf ) and samples were separated into individual files. Individual VCF files were then pre‐processed following the PharmCAT VCF preparation guidelines and then entered into the PharmCAT software, as previously described. Pharmacogenes annotated with this version include: CACNA1S , CFTR , CYP2B6 , CYP2C19 , CYP2C9 , CYP2D6 , CYP3A5 , CYP4F2 , DPYD , IFNL3 , NUDT15 , RYR1 , SLCO1B1 , TPMT , UGT1A1 , and VKORC1 . CFTR annotations were excluded because all study participants have known CFTR mutations. CYP2D6 was not analyzed separately for input into PharmCAT because of incomplete and differing variant coverage between the two microarray chips. Instead, the available CYP2D6 genotype data was run through PharmCAT and diplotypes were determined based on what data was available provided by the Illumina microarrays. The diplotypes and metabolizer statuses from the PharmCAT report results were cross‐referenced with the corresponding CPIC guidelines to combine genotypes with their phenotypes and the resulting EMR priority result notations. Based on the metabolizer status, the genotype for each gene is given an EMR alert label of either Normal/Routine/Low Risk or Abnormal/Priority/High Risk according to CPIC classification. 
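For readers unfamiliar with the position filtering and per-sample splitting step described above, the sketch below shows one way such a step could be written in plain Python. It is an illustration only: the study performed this processing on the Galaxy platform and with PharmCAT's own preparation tools, and the file and function names used here are hypothetical.

# Illustrative sketch only: filtering a multi-sample VCF to a set of target positions
# and writing one single-sample VCF per subject. Not the study's pipeline.
def load_target_positions(positions_vcf: str) -> set[tuple[str, str]]:
    """Collect (CHROM, POS) pairs from a positions VCF such as the PharmCAT list."""
    targets = set()
    with open(positions_vcf) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            chrom, pos = line.split("\t")[:2]
            targets.add((chrom, pos))
    return targets

def split_filtered_vcf(multisample_vcf: str, positions_vcf: str, out_prefix: str) -> None:
    """Keep only target positions and write one single-sample VCF per subject."""
    targets = load_target_positions(positions_vcf)
    with open(multisample_vcf) as fh:
        lines = fh.readlines()
    header = [l for l in lines if l.startswith("##")]
    column_line = next(l for l in lines if l.startswith("#CHROM"))
    samples = column_line.rstrip("\n").split("\t")[9:]
    records = [l for l in lines
               if not l.startswith("#") and tuple(l.split("\t")[:2]) in targets]
    for i, sample in enumerate(samples):
        with open(f"{out_prefix}_{sample}.vcf", "w") as out:
            out.writelines(header)
            out.write("\t".join(column_line.rstrip("\n").split("\t")[:9] + [sample]) + "\n")
            for rec in records:
                fields = rec.rstrip("\n").split("\t")
                out.write("\t".join(fields[:9] + [fields[9 + i]]) + "\n")

# Hypothetical usage: split_filtered_vcf("cohort.vcf", "pharmcat_positions.vcf", "subject")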
In implementation, the prescribing provider would be alerted in the EMR when trying to prescribe a medication that might be impacted by that patient's specific genotype. Actionable genotypes were defined as any genotype/phenotype combination that resulted in an EMR Priority Result of Abnormal/Priority/High Risk. Actionability references the perceived ability to take action and make a clinical intervention based on PGx guidelines. Unless otherwise specified, initial counts of individuals with actionable genotypes were assessed with no regard to whether they had an MEE associated with that genotype. Statistical analysis All qualitative and quantitative statistical analyses were performed using GraphPad Prism version 9.3.1. Comparing adult and pediatric subgroups, Fisher's exact test was used for frequency differences in gender and race, and chi‐square test for frequency differences in CFTR genotypes. For quantitative analysis descriptive statistics and unpaired t ‐tests were calculated. Nonparametric correlation between unique acute MEEs and unique chronic MEEs was performed using Spearman rank. Mann–Whitney test was used for comparison of groups not normally distributed.
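For illustration, equivalent tests are available in open-source tools such as SciPy; the sketch below runs the same families of tests on invented numbers. It is not part of the study, which used GraphPad Prism, and the variable names and values are hypothetical.

# Illustrative sketch only: the same families of tests in Python/SciPy on made-up data.
from scipy import stats

# Fisher's exact test on a 2x2 gender-by-subgroup table
odds_ratio, p_gender = stats.fisher_exact([[25, 18], [29, 10]])

# Chi-square test for genotype frequencies across adult/pediatric subgroups
chi2, p_genotype, dof, _ = stats.chi2_contingency([[26, 14], [8, 4], [20, 10]])

# Unpaired t-test comparing the number of medication classes per participant
t_stat, p_classes = stats.ttest_ind([9, 8, 10, 7, 11], [8, 7, 9, 6, 8])

# Spearman rank correlation between unique chronic and unique acute MEE counts
rho, p_corr = stats.spearmanr([5, 8, 12, 20, 9], [3, 6, 10, 15, 7])

# Mann-Whitney U test for a comparison of groups not normally distributed
u_stat, p_mee = stats.mannwhitneyu([35, 40, 120, 22, 60], [30, 45, 80, 25, 33])

print(round(p_gender, 3), round(p_genotype, 3), round(p_corr, 3))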
CF cohort characteristics Of the 82 PwCF studied (Table ), 43 (52.4%) were women and 54 (65.9%) were greater than or equal to 18 years old. The median age was 20 years (range 3–66).
The predominant CFTR genotype in this cohort was F508del/F508del ( n = 40; 48.8%), followed by F508del/G551D ( n = 12; 14.6%), with F508del/N1303K ( n = 5; 6.1%), F508del/G542X ( n = 4; 4.9%), and the remaining grouped as Other ( n = 21; 25.6%) being comprised of rare genotypes. There were no significant differences between the adult and pediatric subgroups for the frequencies of gender, self‐reported race, or CFTR genotype. Comorbidities were captured during chart review and nine CF‐associated comorbidities seen in more than 10% of the cohort are shown in Table . Medication exposure events in PwCF The majority of medications used are well‐known to treat complications of CF, such as pulmonary infection and nutrient deficiency (Figure ). In contrast to overall cohort medication use, we were also interested in the variation in medication types for individual treatment regimens. Each participant's medication use over the 2 years was assessed separately (Figure ). Comparatively, adult participants (mean = 8.9; SD = 2.13) were prescribed medications from significantly more classes (Figure ; p = 0.011) than pediatric participants (mean = 7.7; SD = 1.89). To quantify the medication use burden, we also recorded total medication exposures, including repeated medication treatments and altered prescriptions. In this cohort ( n = 82) overall, a total of 3336 MEEs were encountered over 2 years. Of these, 70.4% ( n = 2349) were single exposures to unique medications and 29.6% ( n = 987) were dosing changes to medications. Out of the total number of MEEs, 50.2% ( n = 1673) were acutely prescribed medications, 45.6% ( n = 1524) were chronic medications, and 4.2% ( n = 139) were medications prescribed for prolonged acute use (Figure ). Over the 2‐year period, the median number of MEEs per individual was 35 (range 12–138; Figure ). There was no significant difference between the individual total MEEs for pediatric and adult subgroups ( p = 0.28; Figure ), or when evaluating only unique medication exposures and excluding any additional dosing changes ( p = 0.17; Figure ). To evaluate whether PwCF require more frequent acute treatments when they also take a larger number of chronic medications, we correlated the number of total unique chronic MEEs and total unique acute MEEs on an individual level. We found a statistically significant association (Spearman correlation: r = 0.5561, p < 0.0001; data not shown). Frequency of UDRs The UDRs accounted for 8.7% ( n = 289) of all MEEs, with most patients ( n = 61; 74.4%) experiencing at least one UDR over the 2‐year period (Figure ). TDM events were the most frequent UDR ( n = 145, 50.5%; Figure ), with a median of three events per patient (range 1–20; Figure ). A list of all medications with TDM events can be found in Table . Antibiotics contributed to 56.7% ( n = 164; Figure ) of all UDR events, with 82.9% ( n = 136) of those accounted for by TDM events, which is expected given the frequent need in this population for intravenous antibiotics that require close monitoring. After dosing changes, the next most frequent UDRs were ADRs at 13.5% ( n = 39) and treatment failures at 4.2% ( n = 12). Many of the ADRs captured during chart review included mild symptoms, such as insomnia, pruritus, headaches, development of rashes, and nausea, but also more serious reactions like anemia, thrombocytopenia, numbness, anaphylaxis, vision disturbances, hand tremors, and depression.
Most of the medications causing ADRs were antibiotics ( n = 16; 41%) and CFTR modulators ( n = 8; 20.5%). The 12 treatment failure events were mostly related to respiratory medications ( n = 6; 50%) followed by psychiatric ( n = 3; 25%), antibiotic ( n = 1; 8.3%), gastrointestinal ( n = 1; 8.3%), and antifungal ( n = 1; 8.3%) medications. Association of cohort medications with PGx variants We found only 33 out of the 286 unique medications used by this cohort had a dosing guideline in the PharmGKB database (Figure ). Of these 33 medications (Table ), only three were used by 50% or more of the cohort (Table ), but every participant in this cohort was prescribed at least one of these 33 medications. Additionally, 38.1% ( n = 109) of these unique medications had at least one association with a VIP and 50% ( n = 143) of the medications overall had at least one clinical or variant annotation. We further evaluated PGx annotations for medications with NDGs. The NDG medications with clinical annotations comprised 27.3% ( n = 78) of the medications taken by this cohort. PGx actionability based on CPIC clinical guidelines Of the 15 CPIC pharmacogenes evaluated in this study, in this cohort, 97.6% ( n = 80) had at least one actionable genotype described in CPIC guidelines, and more than 70% had three or more (Figure ). Among the pharmacogenes, actionability was highest for CYP2C19 based on genotype, medication use, and UDRs. CYP2D6 and CYP2C9 gene‐drug interactions were noted in 17.1% ( n = 14) and 14.6% ( n = 12) of cohort. No patients harbored variants in CACNA1S , NUDT15 , RYR1 , or TPMT (Figure ). Because of the high evidence supporting these guidelines, some hospitals have implemented PGx alerts in their EMRs in order to alert providers to prescription actionability. In this cohort, 61.0% ( n = 50) had an actionable genotype for CYP2C19 that would result in an EMR alert of "Abnormal/Priority/High Risk" if prescribed a medication listed in the CPIC guidelines. Of these individuals, 82% ( n = 41) were also taking at least one CYP2C19‐ affected medication, including amitriptyline, citalopram, escitalopram, lansoprazole, omeprazole, pantoprazole, sertraline, and voriconazole. When evaluated in relation to genotype, medication use, and UDRs, 22% ( n = 11) experienced an expected UDR based on CPIC's pharmacologic implications for their CYP2C19 metabolizer status and gene‐drug interaction.
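The kind of cross-check summarized above can be illustrated with a short sketch that flags prescriptions of CYP2C19-affected medications for an individual whose genotype carries a priority EMR label. The medication list is the one quoted in the preceding paragraph; the example patient and function name are hypothetical, and the sketch is not the study's code.

# Illustrative sketch only: flagging CYP2C19-affected prescriptions for individuals
# whose genotype carries an "Abnormal/Priority/High Risk" EMR label.
CYP2C19_AFFECTED = {"amitriptyline", "citalopram", "escitalopram", "lansoprazole",
                    "omeprazole", "pantoprazole", "sertraline", "voriconazole"}

def flag_cyp2c19_prescriptions(emr_label: str, prescribed: list[str]) -> list[str]:
    """Return the prescribed drugs that would trigger a priority PGx alert."""
    if emr_label != "Abnormal/Priority/High Risk":
        return []
    return sorted(set(d.lower() for d in prescribed) & CYP2C19_AFFECTED)

# Hypothetical participant on voriconazole and omeprazole with an actionable genotype
print(flag_cyp2c19_prescriptions("Abnormal/Priority/High Risk",
                                 ["Voriconazole", "Omeprazole", "Azithromycin"]))
# ['omeprazole', 'voriconazole']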
The PwCF need many medications to manage and treat the manifestations of disease. Incorporation of PGx can help optimize medication regimens by reducing the trial‐and‐error period. Ideally, the right medication for a given indication would be selected at the correct dose at initiation, leading to a single prescription eliciting the desired therapeutic response with less risk of side effects or treatment failures. We found that PwCF have undesirable medication responses that could be prevented with a priori knowledge of individual PGx data and well‐evidenced dosing guidelines. Furthermore, the majority of PwCF have one or more actionable PGx variants for medications they are already taking or may take in the future as lifespans increase. We also identified potential areas for future research, because we found that most medications used in this population have not been extensively evaluated, according to the PharmGKB database, for precision prescription using PGx. As the field of PGx continues to evolve, further study of how genetic variants in pharmacogenes may influence the medications used by this population is needed. The introduction of targeted CFTR modulators has improved outcomes for PwCF and demonstrated the therapeutic utility of targeting genetic defects. However, CFTR modulators are only one of the drug classes utilized by PwCF and optimizing concomitant medication therapy through pre‐emptive PGx screening also has the potential to improve outcomes and reduce treatment burden. Concordant with other PGx studies, we found that 97.6% of participants had at least one actionable PGx variant, with 70% having three or more. , , Based on CPIC guidelines, CYP2C9 , CYP2D6 , and CYP2C19 were the actionable genes most frequently identified in our cohort, in terms of genetic variation and medication exposure, with CYP2C19 also associated with the most UDRs. Prior to implementation, it is important that target genes are prioritized for routine PGx testing to provide clear guidance when prescribing medications typically seen for PwCF.
In alignment with the most actionable genes shown in this study, a recent review proposed a PGx test panel consisting of CYP2C9 , CYP2C19 , CYP2D6 , CYP3A4 , and CYP3A5 . Such a panel, with the addition of MT‐RNR1 for which a guideline is now published, would provide the most coverage for the most utilized medication classes and medications in PwCF. Among medication classes, antibiotics were associated with the highest number of UDRs, followed by psychiatric medications. Azithromycin, tobramycin, sulfamethoxazole‐trimethoprim, and ciprofloxacin were the most used antibiotics, each utilized by over 50% of the cohort. Since the analysis in this paper was completed, three antibiotics (tobramycin, sulfamethoxazole‐trimethoprim, and ciprofloxacin) have had a current CPIC guideline published: (1) relating MT‐RNR1 variation and aminoglycoside‐induced hearing loss, and (2) clarifying the current state of knowledge with G6PD genotypes and prescribing practices. Although no other current CPIC guidelines have been published, there are PharmGKB clinical annotations for sulfamethoxazole‐trimethoprim ( G6PD , GSTM1 , HLA‐B , HLA‐C , and NAT2 ), and ciprofloxacin ( G6PD ), and PharmGKB variant annotations for azithromycin and ABCB1 . This suggests a potential for even more actionability in this population. Additionally, numerous psychiatric medications have CPIC guidelines which could be used to tailor therapy. Although psychiatric medications were not used as frequently in our cohort, Sakon et al. demonstrated that the burden of CPIC medications in PwCF tended to increase with age, with selective serotonin reuptake inhibitors and tricyclic antidepressants initiated at a median age of 21 years (range 6–62 years) and 24.5 years (range 7–45 years), respectively. The median age of our overall cohort was 20 years (range 3–66 years), and therefore these agents may be prescribed in the future. Having PGx information available at the time of prescribing may help reduce the burden of adverse medication‐related effects and treatment failures. For example, one participant who was prescribed voriconazole had a CYP2C19 genotype that denoted a rapid metabolizer phenotype. According to CPIC guidelines, the probability of patients to achieve therapeutic serum concentrations of voriconazole is modest with standard dosing. This individual experienced multiple dose increases but was unable to achieve therapeutic serum drug concentrations and ultimately led to treatment failure. Had the prescriber had genotype information on hand at the time, the guideline would have prompted selection of an alternative agent not dependent on CYP2C19 metabolism. This would most likely have reduced the need for frequent blood draws for dose titration and resulted in faster treatment response. To understand the true medication burden in PwCF, we collected information on all medications utilized, including supplements and over‐the‐counter medications that were reported to our providers. We included every outpatient prescription and confirmed all medications actually administered while an inpatient, revealing not only that PwCF use many medications, they also experience frequent medication changes that adds to therapeutic burden. Whereas the advent of highly effective modulator therapy in greater than 90% of PwCF may reduce the need for some of these medications, we expect that most PwCF will continue to need frequent medication exposures to manage their health. 
As PwCF live longer and experience additional or new comorbidities due to the effectiveness of modulators, they will also be exposed to additional classes of medications to manage age‐related disease. Because many in this population require the use of multiple medications, and usually also have a centralized medical home in the CF care center, they are an ideal population for PGx implementation with an expectation for significant improvements in treatment outcome and medication burden. Future studies of therapeutics should focus on elucidating the variants that are involved in medications used in this population. Limitations This study is limited by the retrospective nature of the medication data acquisition and a relatively small sample size. Possibly owed to the small sample size, only 6% of this cohort were of racial/ethnic minority compared to 18% of PwCF reported in the Cystic Fibrosis Foundation Registry. This under‐representation has the potential for underestimating the true impact PGx testing could have among the CF population. There are also limitations due to the dependence on documented medications and medication responses, which may not reflect all medications taken by the participants or all adverse drug responses experienced. At our institution, differences in prescribing patterns may have contributed to any differences seen among adults and children. This study is also limited by its ability to capture all PGx variants. This genotyping array does not provide fully comprehensive coverage, especially for genes with a high degree of structural and copy number variation like CYP2D6 . Therefore, the accuracy of assigned metabolizer statuses is dependent on the frequency of analyzed variants in this population. Participants that have variants not covered by this genotyping array, could result in a different metabolizer status than the ones reported here, and may in fact reveal even greater potential actionability than we identified in this study. J.D.A., B.H.D., G.G., C.R.L., H.S., K.B., N.A.L., and J.S.G. wrote the manuscript. J.D.A. and J.S.G. designed the research. J.D.A., G.G., A.J., C.R.L., and K.P. performed the research. J.D.A.
analyzed the data. The authors gratefully acknowledge support during the conduct of this study from the Gregory Fleming James Cystic Fibrosis Research Center supported by the NIH (DK072482) and CFF (R35HL135816). J.S.G. is supported by CFF (GUIMBE18A0‐Q to J.S.G., GUIMBE20A0‐KB to J.S.G.). J.S.G., N.A.L., and B.H.D. are supported in part by the NIH (K23HL143167 to J.S.G., K24HL133373 to N.A.L., KL2TR003097 to B.H.D.). The authors declared no competing interests for this work. Figure S1 Table S1
Quality control validation for a veterinary laboratory network of six Sysmex
1a8e8892-2177-4098-b95b-843e14ab5fb0
10087143
Internal Medicine[mh]
INTRODUCTION Quality goals have been established for clinical laboratory procedures based on biological variation data or expert opinion, , , , , and analytical equipment that achieves these quality goals will generate results that are suitable for clinical decision‐making. Once analyzers have been evaluated and optimized to meet quality goals, statistical quality control (QC) aims to maintain performance within those goals. Statistical QC relies on one or more control rule applied to the results generated from regular analyses of quality control materials (QCM). For statistical QC to be effective in achieving its purpose, the selected rule(s) must be sensitive and specific for the identification of deteriorating analytical performance. That is, statistical control validation is a necessary step in the design and implementation of statistical QC rule(s). In a network of veterinary laboratory hematology analyzers that have undergone a harmonization process, statistical QC performs an additional role in maintaining harmonization and allowing for continued interchangeability of results and common reference intervals within the veterinary network. In the same way that QC validation supports the effectiveness of QC to ensure the stability of a single analyzer, QC validation is a necessary step in the design and implementation of QC rules for a network of harmonized analyzers. The sensitivity and specificity of QC are reflected by the probability of error detection ( P ed ) and probability of false rejection ( P fr ), respectively. The P ed is a measure of the frequency with which a control rule would cause analytical runs to be rejected when results contain errors beyond the inherent imprecision. Ideally, error detection should be set to 100% for medically significant errors; however, error detection at ≥90% is considered sufficient. Conversely, P fr is a measure of the frequency with which analytical runs are rejected when there is no apparent reason or issue. The goal is to have the highest possible P ed to ensure that medically important errors are not missed and a low P fr (≤5%) to reduce the waste cost and efficiency impact on patient sample volume, reagent use, and result turnaround time. This optimizes the efficiency and capability of statistical QC as a tool for the demonstration of stable laboratory system performance. The analytical performance capabilities of the analyzer inform the choice of the number of control materials and statistical rules selected, which then determines the P ed and P fr achieved. This step‐by‐step process results in a quality control validation. Laboratories using statistical QC often employ the Westgard rules, which are denoted by a shorthand notation. For example, a 1‐2s rule means that control limits encompass two standard deviations (SD) on each side of the observed mean. A 1‐3s refers to a rule that is set at ±3 SDs, and the rule is violated when one measurement exceeds ±3 SDs from the QCM mean. The numeral 2 placed in front of a rule, such as 2‐2s, indicates the rule is violated when two consecutive control measurements or two single measurements across two QCMs are outside the 2SD control limits. Six Sigma process‐improvement methodology has been applied to clinical laboratory analyses such that performance capability (bias and imprecision) relative to TE a can be represented on a numeric scale as a “sigma metric” (σ). 
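To illustrate how these pieces fit together, the sketch below computes a sigma metric with the conventional formula, sigma = (TEa - |bias|) / CV, and evaluates two of the simple rules described above against a short series of control results. It is an illustration added to this text; the QC values and the TEa are invented, and the 2-2s check uses the standard requirement that both results fall on the same side of the mean.

# Illustrative sketch only: a conventional sigma-metric calculation and two simple
# Westgard-style checks applied to made-up QCM results.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |bias|) / CV, with all terms expressed as percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def violates_1_3s(results: list[float], mean: float, sd: float) -> bool:
    """1-3s: any single control result beyond +/-3 SD of the QCM mean."""
    return any(abs(x - mean) > 3 * sd for x in results)

def violates_2_2s(results: list[float], mean: float, sd: float) -> bool:
    """2-2s: two consecutive control results beyond +/-2 SD on the same side."""
    deviations = [(x - mean) / sd for x in results]
    return any(abs(a) > 2 and abs(b) > 2 and a * b > 0
               for a, b in zip(deviations, deviations[1:]))

qc = [7.1, 7.3, 7.0, 7.6, 6.9]   # hypothetical daily WBC QCM results (x10^9/L)
print(round(sigma_metric(tea_pct=15.0, bias_pct=1.2, cv_pct=2.0), 1))   # 6.9
print(violates_1_3s(qc, mean=7.2, sd=0.15), violates_2_2s(qc, mean=7.2, sd=0.15))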
Sigma metrics and QC rules have been combined in Westgards Sigma Rules, such that sigma metrics can, in turn, determine whether a simple single QC rule or a more complicated collection of rules is required. A sigma metric ≥6 for an analytical method indicates <3.4 errors/defects per million results, defined as world‐class performance, and the implied low bias and imprecision mean that these methods are easily controlled with simple QC rules and a low number of QCM measurements. It would take a large deterioration in performance to cause a clinical error, and such a large shift could be readily detected with a simple QC rule. In fact, for measurands with sigma metrics ≥6.0, acceptable P ed and P fr can be achieved using 1 or 2 control measurements. Conversely, measurands with lower sigma metrics require a multirule and/or larger number of control measurements and may or may not be able to achieve acceptable P ed and P fr . Commercially available software called EZRules3 (Westgard QC, https://www.westgard.com/store/software.html ) allows the user to explore and select QC rules based on the total error quality goal and observed analytical performance as well as the P ed and P fr that can be achieved. It has been acknowledged that a P ed of ≥0.85 can be used for point of care testing (POCT) in veterinary medicine or for hematologic analyses, where the use of a single QCM is preferred. The use of a single hematology control material is traditional in the UK but a P ed ≥0.90 cannot be achieved with a 1‐3s rule and a single QCM data point can only achieve a P ed ≥0.85. , This paper describes a network of six harmonized Sysmex XT‐2000 i V hematology analyzers across five locations, which required a QC approach that could ensure the maintenance of individual analyzer performance and supported continued harmonization. In deciding which QC approaches to evaluate, the following preferences and constraints had to be taken into consideration. First, the use of a simple control rule (e.g., 1‐2.5s, 1‐3s) rather than a multirule and a single level of QCM was preferred due to the simplicity of training and evaluation and based on traditional laboratory practices and economics. Second, the choice of QCM was limited because a third‐party control material was not available for the Sysmex analyzer, only a manufacturer‐provided QCM. It was felt that the potential disadvantages of a single level of QCM could be mitigated since all blood smears undergo additional nonstatistical quality control via microscopy by a fully trained technician to validate the automated results prior to a pathologist's review of laboratory data and correlation with clinical findings before results are reported. The authors set out to discover whether an effective QC approach supporting network harmonization could be successfully implemented, given these constraints and preferences. This study addressed the following objectives: Determine that a higher P ed can be achieved using QC rules validated for each individual analyzer than that achieved using the manufacturer's acceptable limits for the control limits. Determine that a P ed (≥0.85) and P fr (≤0.05) can be achieved using a single QCM ( n = 1) with a simple single control rule (1‐2.5s or 1‐3s). Assess the use of sigma metric evaluation as part of the QC approach in conjunction with the validated QC rules appropriate for each analyzer, with a goal of using it to monitor when instrument servicing is needed to maintain a high level of instrument performance. 
Assess whether analyzer performance criteria established in a previous study (bias <3%, achievement of desirable biologic variation‐based goals for CV and bias, and 0.33CV I goals, with sigma metrics >5) would be confirmed in this study as useful contributions to an overall QC approach. MATERIAL AND METHODS 2.1 Hematology analyzers The data from six Sysmex XT‐2000 i V analyzers (Sysmex Corporation, Kobe, Japan) located in five laboratory locations in the UK ( n = 4) and Ireland ( n = 1) were evaluated. Analyzers were designated analyzers 1, 2, 3, 4a, 4b, and 5. This network of analyzers had previously undergone optimization and successful harmonization. Analyzer 1 was designated the reference analyzer. 2.2 Quality control material The QCM used for these evaluations was Level 2—Normal e‐CHECK (XE)‐Hematology Control (Sysmex Corporation) provided in an 8‐vial kit with one new vial used per week. Typically the lot number changed for each new kit with slight changes in the manufacturer's target means and acceptable limits. A single lot number was used for all analyzers during the study. The hematologic measurands evaluated were the white blood cell count (WBC), red blood cell count (RBC), hemoglobin concentration (HGB), hematocrit (HCT), mean cell volume (MCV), mean cell hemoglobin concentration (MCHC), reticulocyte count (RETIC), platelet count (PLT), plateletcrit (PCT), and red cell distribution width‐coefficient of variation (RDW‐CV). The QCM was analyzed by a fully trained technician according to standard operating procedures; when not in use, the QCM vial was refrigerated. 2.3 Evaluation of manufacturer's acceptable limits as control rules using EZRules3 The manufacturer's acceptable limits for the hematologic measurands were evaluated using data from 1 month of QCM results from analyzer 1. The width of the manufacturer's acceptable limits for a measurand was divided by the standard deviation (SD) for the reference lab (analyzer 1) to determine the number of SDs contained within this range. That number was divided by two to determine the SDs on each side of the target mean and was then rounded to the nearest QC rule available within EZRules3 to evaluate the performance of the control rule most closely representative of the manufacturer's acceptable limit. EZRules3 allows the manual selection of simple control rules for 1‐2s, 1‐2.5s, 1‐3s, 1‐3.5s, 1‐4s, 1‐5s, and 1‐6s. If the manufacturer's range control rule was equivalent to <±2SDs, a 1‐2s rule was used. If the manufacturer's range control rule was >6 SDs, a 1‐6s rule was used. The control rule was then assessed using EZRules3 to determine the P ed and P fr that are possible using the startup QC design, which is a program that allows the user to follow a series of prompts, including the manufacturer's target mean as the decision‐level concentration, ASVCP recommendations, and/or internal expert opinion for total allowable error (TE a ) as the chosen quality requirement, and the number of QCMs set to n = 1. For those measurands, where no ASVCP recommendation for TE a was available (PCT and RETIC number), expert opinion goals from an internal working group composed of pathologists and technicians were used (5 and 7 persons, respectively). The expected instability setting in the software was set to off. 
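The conversion described in Section 2.3 can be illustrated with a few lines of arithmetic. The sketch below is not part of the study and does not reproduce EZRules3; the acceptable limits, SD, and function name are invented, and the rounding simply selects the nearest of the simple rules listed above, with the <±2SD and >6SD boundary cases handled as described.

# Illustrative sketch only: expressing a manufacturer's acceptable range as the
# nearest simple control rule, following the arithmetic described above.
AVAILABLE_RULES = [2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0]   # 1-2s ... 1-6s

def nearest_rule(range_low: float, range_high: float, observed_sd: float) -> str:
    """Convert the acceptable-limit width into SDs per side and round to a rule."""
    sds_per_side = (range_high - range_low) / observed_sd / 2.0
    if sds_per_side < 2.0:
        chosen = 2.0                      # <+/-2 SD is treated as a 1-2s rule
    elif sds_per_side > 6.0:
        chosen = 6.0                      # >6 SD is treated as a 1-6s rule
    else:
        chosen = min(AVAILABLE_RULES, key=lambda r: abs(r - sds_per_side))
    return f"1-{chosen:g}s"

# Hypothetical HGB example: limits 13.2-15.0 g/dL with an observed SD of 0.25 g/dL
print(nearest_rule(13.2, 15.0, 0.25))   # (15.0-13.2)/0.25/2 = 3.6 SDs -> '1-3.5s'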
2.4 QC validation—evaluation of control rules based on observed individual analyzer performance The QC validation for each of the six Sysmex instruments, customized for observed performance, was determined using approximately 1 month of QC data (March 2020), as reported previously. We used EZRules3 to plot the observed imprecision and observed bias (calculated from the manufacturer's target mean) generated from daily repeated QC measurements for each analyzer and measurand to determine individual analyzer QC rules and associated P ed , P fr , and sigma metrics. The candidate rules evaluated using the manual selection option were 1‐2.5s, n = 1 and 1‐3s, n = 1. A P ed ≥0.90 can theoretically be achieved with a 1‐2.5s rule and a single QCM if performance is sufficiently good. 2.5 Review of criteria for measurands with poor performance Findings from a previous study indicated that sigma metrics >5, bias <3%, and achievement of desirable biologic variation goals for CV and bias were indicators of stable analytical performance. These criteria were reviewed and compared with the findings of this study, which confirmed that harmonization was maintained during the course of the study.
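For readers unfamiliar with how the sigma metric, P ed , and P fr relate to TE a , observed bias, and observed CV, the sketch below uses textbook normal-theory approximations for a single control measurement ( n = 1) and a simple 1-ks rule. It is an assumption-level illustration of the underlying arithmetic, not the EZRules3 implementation, and all numeric values are hypothetical.

```python
# Hedged illustration of the quantities used throughout the Methods, based on
# standard normal-theory approximations for one control measurement (n = 1)
# and a simple 1-ks rule; this is not the EZRules3 implementation.
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal cumulative distribution function

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

def critical_systematic_error(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    # Shift, in SD units, that would push about 5% of results beyond TEa.
    return sigma_metric(tea_pct, bias_pct, cv_pct) - 1.65

def p_error_detection(k: float, delta_se: float) -> float:
    # Probability that one control value falls outside +/- k SD after a mean shift of delta_se SD.
    return (1 - phi(k - delta_se)) + phi(-k - delta_se)

def p_false_rejection(k: float) -> float:
    # Probability that one control value falls outside +/- k SD when no error is present.
    return 2 * (1 - phi(k))

# Hypothetical example: TEa 20%, bias 1%, CV 2% gives sigma 9.5, so a 1-2.5s rule
# detects the critical shift with near certainty while falsely rejecting ~1% of runs.
dse = critical_systematic_error(20.0, 1.0, 2.0)
print(round(p_error_detection(2.5, dse), 3), round(p_false_rejection(2.5), 3))
```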
RESULTS 3.1 QC validation for manufacturer's acceptable limits The results of the QC validation for the manufacturer's acceptable limits for the QCM using the observed SD for analyzer 1 are summarized in Table . Acceptable criteria for P ed (>0.85) were met for only three measurands (HGB, PCT, and WBC). Acceptable P fr (≤0.05) was achieved for all 10 measurands using the available control rules that most closely represented the manufacturer's acceptable limits. 3.2 QC validation based on observed analyzer performance for six Sysmex analyzers Tables , , summarize the results of the QC validation for two control rules (1‐2.5s and 1‐3s) based on observed analyzer performance (March 2020) for the selected measurands on six Sysmex analyzers. Only three measurands did not achieve a P ed ≥0.85: RBC and PLT on analyzer 4a using a 1‐3s rule ( n = 1), and PCT using both QC rules, 1‐2.5s and 1‐3s ( n = 1). All other measurands achieved a P ed ≥0.85 with either control rule across the six analyzers. P fr ≤0.05 was achieved for all 10 measurands for each of the six Sysmex analyzers and both candidate control rules. Sigma metrics were >6 for 56/60 observations. When the sigma metric was <5.5, three measurands did not achieve a P ed >0.85 (RBC, PLT, and PCT, as above for analyzer 4a); RETIC (σ = 5.98) on analyzer 4b did achieve an acceptable P ed (>0.85). 3.3 Evaluation of QC rules The 1‐2.5s rule offered the highest P ed for all measurands for the group of Sysmex analyzers based on the observed analyzer performance and yielded a P fr of only 1% (Tables , , ).
The manufacturer's QC limits achieved acceptable P ed for only three measurands (Table ). 3.4 Sigma metric monitoring A monthly review of sigma metrics showed that three measurands performing at <5.5 sigma failed to achieve quality goals for P ed for one (RBC and PLT) or both QC rules (PCT). On further investigation, the quality goal index (QGI) demonstrated that observed imprecision was implicated for two measurands, while imprecision and bias were implicated for one measurand. The same analyzer (4a) accounted for all three measurands with poor performance, in contrast to the other analyzers, where performance was >5.5 sigma (see Tables , , ). 3.5 Review and optimization of criteria for measurands with poor performance Previous criteria established for stable analytical performance indicated that a bias <3%, achievement of desirable biologic variation‐based goals for CV and bias (0.33CVi), and sigma metrics >5 supported analytical stability and harmonization. In this study, we noted that a sigma metric >5.5 rather than 5 demonstrated analytical stability. An acceptable P ed was achieved when the observed CV and bias achieved desirable biologic variation goals, as previously reported. Imprecision limits to achieve an acceptable P ed (>0.85) were determined for RBC (see Table , analyzer 4a) using the 1‐2.5s rule: if the desirable bias based on biologic variation (1.761%) was applied, then the observed CV% had to be ≤1.45% to achieve a P ed >0.94. Similarly, for PLT on analyzer 4a (Table ), the sigma metric was 5.13, and for the 1‐2.5s rule we could determine that, if the desirable bias based on biologic variation (5.17%) was achieved, then the observed CV% had to be ≤2.60% (the observed CV was 3.72%). Therefore, the 1‐3s rule was not satisfactory, the 1‐2.5s rule was satisfactory at 0.86 P ed , and the other analyzers achieved a P ed >0.94.
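Section 3.4 attributes the sub-5.5-sigma performance to imprecision and/or bias using the quality goal index (QGI). The sketch below encodes the formulation and interpretation thresholds commonly cited in the QC literature; these are assumptions rather than values quoted from this study, and the example numbers are hypothetical.

```python
# Assumed formulation of the quality goal index (QGI) referred to in 3.4, with
# commonly cited interpretation thresholds; not quoted from this study.
def qgi(bias_pct: float, cv_pct: float) -> float:
    return abs(bias_pct) / (1.5 * cv_pct)

def qgi_interpretation(value: float) -> str:
    if value < 0.8:
        return "imprecision is the dominant problem"
    if value <= 1.2:
        return "both imprecision and bias contribute"
    return "bias (inaccuracy) is the dominant problem"

# Hypothetical values: a 2.0% bias with a 5.3% CV gives a QGI of about 0.25.
value = qgi(2.0, 5.3)
print(round(value, 2), "-", qgi_interpretation(value))
```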
DISCUSSION To the authors' knowledge, this is the first reported QC validation for a network of Sysmex analyzers used in veterinary hematology and ongoing harmonization within a laboratory system. The quality goal used to determine the P ed and P fr was TE a provided by the ASVCP or internal expert opinion (RETIC and PCT). QC validation, using the manufacturer's acceptable limits and observed analytical performance of analyzer 1, showed that only 3/10 measurands (HGB, WBC, and PCT) achieved an acceptable P ed with the recommended TE a or expert opinion in the case of PCT (Table ). Consequently, the manufacturer's acceptable limits could not be relied on to identify clinically important equipment malfunction for most measurands. The wide manufacturer's limits were not unexpected, as this has been highlighted previously, , and these limits are generally derived from groups of instruments rather than individual instruments. For the manufacturer's limits that failed to achieve an acceptable P ed , error detection was as low as >0.01, using the closest available QC rule , , of 1‐5s or 1‐6s. In comparison, the QC validation used the observed performance of individual analyzers for all 10 measurands, achieving a P ed >0.94 with the 1‐2.5s QC rule and a P ed >0.85 with the 1‐3s QC rule for the observed performance of analyzer 1, resulting in much higher error detection. We determined that a higher P ed could be achieved using QC rules validated for each individual analyzer rather than the use of the manufacturer's acceptable limits. We determined that we could achieve an acceptable P ed , >0.94 for 95% of observations and P fr ≤0.05 for all observations for our network of Sysmex analyzers.
We failed to meet the P ed criteria (>0.85, n = 1) for 3/60 (5%) of observations (analyzer 4a for RBC, PLT, and PCT) but were able to achieve the required P ed >0.85 using either the 1‐2.5s and/or 1‐3s QC rule, n = 1, for all other measurands. It is noteworthy that the performance of analyzer 4a improved following instrument service and the analyzer was then able to achieve the quality goals (data not shown). This compares with the manufacturer's limits, where only two measurands achieved a P ed >0.94 (Table ). We are satisfied that the QC validation from the observed analytical performance offers the most consistent probability of error detection ≥0.85 for the largest number of measurands and, therefore, greater confidence in quality control and the likelihood of network stability. Using the manufacturer's acceptable limits does not provide sufficient P ed for most measurands to have peace of mind regarding detection of unstable instrument performance. The probability of false rejection was within P fr criteria (<0.05) for analyzer 1 using both manufacturer's limits and observed analytical performance. Overall, the individual analyzer validation allowed us to achieve an error detection ≥0.85 and low levels of false rejection, resulting in more satisfactory QC using the 1‐2.5s rule. The 1‐2.5s rule proved to be the optimal solution for one level of control material as the P ed was >0.94 for the majority of observations, whereas a P ed of 0.88 was the highest value achieved using the 1‐3s rule at the recommended TE a . This was achieved with little variation in P fr between the rules used to assess individual analyzer validation since P fr is considered a function of the rule used and the number of QCM and is not based on analyzer performance. For the measurand (PCT) that failed to meet acceptable criteria for P ed using both QC rules (1‐2.5s and 1‐3s), it was considered whether the TE a based on expert opinion was too stringent; however, as the remaining analyzers performed optimally, declining performance was more likely. Moreover, on closer evaluation, the large CV (5.31%) and sigma metric <6 (σ = 4.56) suggest that for this measurand, the instrument performance was deteriorating and required attention, which was reflected in both QC rule failures. Sigma metrics were applied as an additional performance monitoring tool and in conjunction with QC validation. The use of sigma metrics as a performance indicator is well documented. , , Sigma metrics were >6 for all measurands for analyzer 1 when based on observed analytical performance. For measurands on other analyzers that performed at <6 sigma, the bias and/or imprecision (for RBC and PLT on analyzer 4a) was distinct from that of the measurands across the instruments that performed at >6 sigma. On closer evaluation for these measurands, we could determine limits for observed imprecision when applying a 1‐2.5s rule to achieve acceptable P ed . In reviewing the literature, we could not conclusively draw relevant comparisons due to the limited research in veterinary publications regarding imprecision goals. However, we believe that the following information is of interest to others performing QC validation using hematology analyzers. Biologic variation data were not available for PCT; however, if bias was set to 0%, then an observed CV ≤4.30% would be required to achieve an acceptable P ed and >6 sigma; the observed CV for analyzer 4a was 5.31%.
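The CV limits quoted in the preceding paragraph follow directly from the sigma-metric relationship; the sketch below is an assumption about that arithmetic rather than the authors' worksheet, and the TE a used in the example is hypothetical.

```python
# Assumed arithmetic behind the CV limits discussed above: the largest observed
# CV that keeps a measurand at or above a target sigma for a given TEa and bias.
def max_allowable_cv(tea_pct: float, bias_pct: float, sigma_target: float) -> float:
    return (tea_pct - abs(bias_pct)) / sigma_target

# Hypothetical example: with TEa = 25% and zero bias, holding 6 sigma requires an
# observed CV of roughly 4.2%, whereas an observed CV of 5.3% corresponds to <5 sigma.
print(round(max_allowable_cv(25.0, 0.0, 6.0), 2))
```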
RETIC for analyzer 4b had a sigma metric of 5.98 but could achieve an acceptable P ed and P fr for the 1‐2.5s and 1‐3s rules; however, unlike the other measurands, the observed CVs were not distinct from the other analyzers. Biologic variation data were not available for this measurand, but from our data, if bias was 0%, an observed CV ≤7.0% is required to meet the quality goal (40%). These observations demonstrate that, depending on the error budget, a low %CV with a high bias may still achieve the quality goal and vice versa, but they must meet desirable goals based on biologic variation. The use of sigma metrics in conjunction with the QC rules (Westgard Sigma Rules) highlighted analytical instability for analyzer 4a compared with the other 5 analyzers, and technical attention was required for those measurands performing at <5.5 sigma. The measurands that were easily controlled performed at >5.5 sigma. These measurands had results with desirable biases and CVs based on biologic variation and could achieve high P ed and low P fr at the specified TE a . By meeting these criteria, these measurands were controlled using a simple QC rule. This is more cost‐effective , and less labor‐intensive for the technician, offering a degree of confidence to the clinical pathologist interpreting the results in a multisite, multi‐analyzer environment. For these measurands, a 1‐2.5s rule was considered the best candidate rule for ongoing use. This validation study is a much‐needed step in a harmonization process, ensuring that our network of analyzers is comparable. We know that analytical variability exists between instruments of the same model , ; therefore, by implementing validated QC rules and monitoring analytical performance using Westgard Sigma Rules, we are controlling this aspect of variability. The benefits gained from harmonization using maximally efficient high P ed and low P fr control approaches include a reduction of network costs, uniformity of standard operating procedures, unified quality management policy, unified training and proficiency, less rerun waste, and increased uniformity of turnaround times across laboratories. Measurands more difficult to control were those <5.5 sigma, which had lower error detection (<0.85 P ed ) and higher observed CV (>3.5%) compared with better performing measurands. In most cases, these measurands failed to meet the desirable CV based on biologic variation (Tables , , ). In review of the criteria for identifying a need for analytical and technical attention, previous findings suggest that a bias >3%, failure to meet desirable biologic variation goals, and sigma metrics <5 were immediate triggers for poor performance requiring servicing. However, this study suggested some modifications to those criteria because some measurands showed suboptimal performance (PLT, RBC, and PCT for analyzer 4a) when sigma metrics were <5.5 rather than <5. For those example measurands with biological variation data available (RBC, HGB, HCT, WBC, and PLT), those that failed to meet desirable biologic variation goals with a sigma metric <5.5 warrant further investigation, as they were poorly controlled with a 1‐2.5s and/or 1‐3s rule. A more complicated multirule could be adopted in these cases, but in the interest of keeping QC simple and not too laborious for the technicians and network quality managers, our recommended approach is to monitor the sigma metric, observed bias %, and observed CV %.
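A minimal sketch of how this recommended monitoring approach could be scripted is given below. The function name, the goal values, and the example numbers are assumptions for illustration only; in practice the desirable bias and CV goals would be taken from the biologic variation data for each measurand.

```python
# Illustrative encoding (an assumption, not the authors' code) of the recommended
# monitoring approach: review the sigma metric, observed bias % and observed CV %,
# and flag a measurand for technical attention when sigma falls below 5.5 or a
# desirable biologic-variation-based goal is exceeded.
def flag_for_attention(sigma: float, bias_pct: float, cv_pct: float,
                       bias_goal_pct: float, cv_goal_pct: float) -> bool:
    return sigma < 5.5 or abs(bias_pct) > bias_goal_pct or cv_pct > cv_goal_pct

# Hypothetical values: a sigma of 5.1 with a CV above its desirable goal is flagged.
print(flag_for_attention(sigma=5.1, bias_pct=1.2, cv_pct=3.7,
                         bias_goal_pct=1.8, cv_goal_pct=2.6))  # -> True
```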
We are confident that (1) using the 1‐2.5s QC rule, (2) ensuring that the desirable biologic variation‐based quality requirements for bias and CV are met, and (3) maintaining sigma metrics >5.5 together provide an excellent indication of analytical stability sufficient to maintain harmonization. In addition, we continue to use nonstatistical measures such as a microscopic blood film evaluation by a hematologist, the consideration of clinical findings by a clinical pathologist, patient history, and a comparison/correlation of previous results. These measures are particularly important and recommended for measurands where statistical QC is not performed or does not provide high P ed . When statistical QC may not be sufficient to provide necessary quality monitoring, the addition of nonstatistical QC is recommended to help ensure that accurate and reliable results are reported. Some academic work has been published on QC validation of hematology analyzers, but this has predominantly been in human medicine. , In veterinary medicine, biochemistry analyzer QC validation has been the main focus. , Early work by Freeman and Gruenwaldt, as well as some comparative work, , , has created a good foundation for our harmonization study and emphasized that QC validation should be a requirement for veterinary laboratories, which is further supported in this study. CONCLUSIONS We recommend QC validation of individual hematology analyzer performance to achieve high P ed and low P fr rather than the use of the manufacturer's acceptable QC limits for hematology in veterinary laboratories. We validated 57/60 observations using the 1‐2.5s and/or 1‐3s QC rule; however, for optimal P ed , we applied the 1‐2.5s QC rule. This simple rule used with a single QCM is adequate for individual analyzers and for a harmonized network of analyzers if the sigma metrics remain >5.5 and the measurands meet desirable biologic variation goals for CV and bias. These standards should be applied across groups of analyzers and should aid the internal quality team, pathologists, and technicians in recognizing when analytical issues arise. The authors have indicated that they have no affiliations or financial involvement with any organization or entity with a financial interest in, or in financial competition with, the subject matter or materials discussed in this article.
Influence of eight debridement techniques on three different titanium surfaces: A laboratory study
bee5f71e-239e-4a27-a457-b45f2fc61278
10087144
Debridement[mh]
INTRODUCTION Roughened implant surfaces which have been optimized for the purposes of osseointegration provide an exceptionally favourable microenvironment for biofilm formation. , Within 5–10 years post‐implant placement, biofilm‐induced implant diseases such as mucositis and peri‐implantitis have been reported to occur in anywhere from 12% to 40% of cases. , Mucositis, the more frequently observed soft tissue disease, has been reported in some studies at rates of 46.83% of implant sites and describes the presence of inflammation confined to the soft tissues without loss of supporting bone. In contrast, peri‐implantitis results in progressive, destructive bone loss, and the implant may eventually be lost. Maintaining low levels of dental plaque biofilm is essential for preventing inflammation in the adjacent soft tissues and is critical for long‐term success. Therefore, the treatment objectives for peri‐implantitis include infection control, decontamination of implant surfaces exposed to biofilm and ensuring that the remaining surface biocompatibility allows the regeneration of lost tissues. Finally, it is essential that patients who have dental implants are educated in the necessary home care regimens and are placed on a strict recall programme for continuing maintenance of peri‐implant tissue health. , , , Multiple implant surface decontamination protocols and methods for the treatment of peri‐implantitis have been reported in the literature with varying success. However, no studies have shown one instrumentation method for the treatment of peri‐implantitis to be superior to another. It is difficult to completely decontaminate the implant surface without altering the implant surface or leaving residues behind which affect the biocompatibility of the surface. This is perhaps one reason why there is a high rate of recurrence of the disease and why retreatments may be necessary. In order to choose methods that address these challenges, it is necessary to evaluate debridement protocols across different surfaces and to assess their efficacy in removal of biofilm. In this laboratory study, we compared eight different debridement protocols which are common to clinical practice, in terms of their effectiveness in biofilm removal, on three surfaces, in the presence of a multispecies biofilm. Changes in surfaces were assessed using non‐contact optical profilometry and scanning electron microscopy, while changes in biofilm biomass were quantified. The null hypothesis was that all eight methods would be equally effective at removing biofilm and would cause similar surface alterations. MATERIALS AND METHODS 2.1 Surface treatment for titanium discs A total of 189 grade IV commercially pure titanium discs (diameter 10 mm; thickness 2 mm) were treated using three different protocols to simulate commercially available titanium implant surfaces. After preparing the surface of the discs, they were allocated randomly into the groups for debridement after biofilm growth. Group 1 (smooth) discs were polished with a Carbimet 600 grit (15 mm size) aluminium oxide paper abrasive disc (Buehler, IL, USA) at a rotational speed of 400 rpm to achieve a smooth surface. A prophylaxis rubber cup in a slow‐speed handpiece rotating at 500 rpm with pumice was used to achieve a mirror finish. Group 2 (abraded) discs were abraded with 50 μm aluminium oxide beads (Rolloblast, Renfert, Hilzingen, Germany). Each disc was treated for two seconds at a distance of 10 mm using a pressure of 3 bar, to create a matte surface.
The discs were rinsed in distilled water in an ultrasonic bath at 45–50°C for 15 min to remove any remaining embedded beads. Group 3 (SLA) discs were polished as for group 1, then abraded with 50 μm aluminium oxide beads as for group 2 and rinsed in an ultrasonic bath. Once dry, the surface was etched with Multi‐etch® for 5 min in a transparent container. The etchant was kept at a constant temperature of 50°C and was agitated with a magnetic stirrer. The samples were then rinsed in distilled water for 2 min. In order to validate surface treatment processes, 30 additional discs (10 each of smooth, abraded and SLA) were subjected to optical profilometry, elemental analysis and scanning electron microscopy (SEM) for comparison against commercial implants in the market. After preparation, discs were ultrasonically cleaned, rinsed three times in deionized water and disinfected by immersion in 70% ethanol for 10 min before being air dried and stored aseptically prior to use in culture studies. 2.1.1 Optical profilometry All 30 additional discs were inspected using non‐contact optical profilometry, with an Olympus LEXT OLs4100 system (Olympus Corporation). Each scanned area covered 259 × 259 μm, and the working distance was 350 μm. Z axis sequences of images were adjusted to allow the lower limit of the image to be in focus. A total of 229 steps were taken at 9 μm depth and captured with a 50x objective lens fitted with a Gaussian filter. Roughness measurements were collected under white light microscopy using the multi‐layer mode with the supplied Olympus LEXT software. The arithmetic mean deviation of the surface from the mean plane (Sa), the maximum height of the surface (Sz), the skewness (Ssk) (which represents the degree of bias of the roughness shape), the maximum peak height (Sp), the maximum valley depth (Sv) and the root mean square roughness (Sq) were determined. 2.1.2 SEM for disc surfaces and elemental analysis Two discs from each group of 10 were viewed with a field emission SEM (Zeiss Sigma VP, Jena, Germany) at 15 kV under high vacuum conditions, without prior sputtering. Elemental analysis was performed with an energy‐dispersive spectroscopy detector (XMax 50 Silicon Drift Detector; Oxford Instruments) under high vacuum conditions. Prepared discs with no biofilm were used as a reference for the surface features at baseline. 2.2 Biofilm formation on discs Collection of stimulated human saliva was standardized, with a single donor, and was approved by the institutional ethics committee. After inoculation with stimulated human saliva, discs were cultured with brain heart infusion (BHI; Sigma) broth medium supplemented with 5% defibrinated sheep's blood and 10% human saliva in the wells of sterile 24 well plates, without shear forces being applied. Plates were cultured for 96 h under anaerobic conditions (BD GasPak EZ Anaerobe Container System; Becton Dickinson). Water was added to the GasPak generator envelope in each anaerobic jar, causing the evolution of hydrogen gas and carbon dioxide gas. The oxygen was consumed by reacting with hydrogen using a palladium catalyst, forming water and establishing anaerobic conditions. The presence of carbon dioxide was intended to facilitate the growth of capnophilic bacteria. Anaerobic conditions were confirmed using methylene blue indicator strips. The inclusion of sheep's blood ensured the broth was rich in protein and haemoglobin products in order to encourage the growth of facultative and obligate anaerobes, resulting in a mixed species biofilm.
The plates within the anaerobic jars were kept in a static position during incubation at 37°C for 96 h. A sample size of 10 discs per treatment was used for the biofilm component of the study, based on past work on variations between debridement methods and the effect size for biofilm removal using abrasive particles. Biofilm compositional analysis was undertaken using next‐generation sequencing (NGS) at the University of Queensland Institute of Molecular Biosciences sequencing facility using the Illumina NextSeq 500 sequencing system (Illumina) to analyse 16S bacterial ribosomal RNA gene sequences, following the manufacturer's 16S metagenomics sequencing workflow, targeting the V3 and V4 variable regions of the 16S rRNA gene. The sequencing primers for the Illumina 16S metagenomics workflow were as follows: PCR1_Forward (50 bp): 5′‐TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG–3′, and PCR1_Reverse (55 bp): 5′–GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC–3′. These primer pair sequences for the V3 and V4 region create a single amplicon of approximately 460 bp. MiSeq version 3 sequencing chemistry was used with paired 300‐bp reads, and the ends of each read overlapped to generate high‐quality, full‐length reads of the V3 and V4 region. MiSeq Reporter (version 2.3) software was used for analysis, with taxonomic classification using the Greengenes database. The major bacterial species present are reported in Table . 2.3 Debridement protocols Seven titanium discs per group of each of three surface types (smooth, abraded, SLA) were subjected to one of the following protocols: (a) treatment with an air polishing system (AirNGo, Acteon) with glycine powder for 15 s, with the tip positioned 5 mm away from the disc surface at an angle of 45° and applied at a compressed air pressure of 2.4 bar; (b) as for group (a) but using sodium bicarbonate powder; (c) as for group (a) but using calcium carbonate powder; (d) treatment with a piezoelectric ultrasonic scaler unit (Satelec, Acteon, Bordeaux, France) fitted with an Acteon Implant Protect™ tip (P1‐1), used at a power setting of ‘3’ for 30 s at an angle of 45° while held freehand, with horizontal sweeping motions; (e) hand scaling with a graphite/carbon fibre scaler (Premier® Implant Scaler, model 4 L/4R, Premier Dental Products Company) for 30 s, with the scaler applied at an angle of 90° in horizontal sweeping motions; (f) hand scaling with a titanium scaler (Grade 5 titanium alloy, ImplaMate™ Universal 3–4, Nordent Instruments) for 30 s at an angle of 90° in horizontal sweeping motions; (g) treatment with a nickel–titanium brush (Hans Korea) oscillating at 900 rpm in a low‐speed rotating handpiece, applied for 30 s at an angle of 90° without water coolant; (h) treatment with 40% citric acid, applied with a sterile cotton swab (Cotton tip applicator; Livingstone) for 60 s using a horizontal rubbing motion and then rinsed with distilled water; and (i) an untreated control (biofilm without treatment) to establish baseline data on biofilm levels for the crystal violet assay. After the respective treatment, each disc with the attached biofilm was rinsed gently in distilled water for 20 seconds, with the discs held by tweezers for this final rinse. 2.3.1 Scanning electron microscopy assessment of treated discs One disc per group was selected randomly for SEM analysis.
After rinsing in distilled water, the bacterial biofilm attached to the discs was fixed in 10% buffered neutral formalin solution for 24 h, post fixed with 1% osmium tetroxide at pH 7.4 for 24 h, then rinsed with cacodylate buffer, and finally dehydrated through an ethanol series (50%, 70%, 80%, 90% and 100%; 30 minutes per concentration). Samples were dried overnight in a fume hood before being sputter coated with 10 nm of gold (model 5100 coating unit; Polaron) and examined with a field emission SEM (Zeiss Sigma VP) at 10 kV under high vacuum conditions. 2.3.2 Crystal violet assay Six discs per group were analysed for remaining biofilm following treatment, including controls. Discs were stained with 1.0 ml of 1% crystal violet for 15 minutes at room temperature (25°C) in a 24‐well plate. After rinsing three times with sterile distilled water and air drying for 30 min, 1 ml acetic acid (30%) was added to each well, and the plates incubated for 20 min at room temperature. Aliquots of 200 μl were transferred to a 96 well microtiter plate, and the absorbance measured at 570 nm with a microplate spectrophotometer (Infinite 200 PRO series, Tecan, Mannedorf, Switzerland). SEM images showing biofilm formation on discs before any treatment were recorded (Figure ). 2.4 Statistical analysis Data from the six replicate samples from the CV assay per group were collated, and results expressed as mean ± SD. Data sets were assessed for normality using the Kolmogorov–Smirnov test. Differences between groups showing a normal distribution and homogeneity of variance were assessed using a one‐way analysis of variance (ANOVA), with post hoc tests using Bonferroni correction. A threshold of p < 0.05 was set for statistical significance.
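A sketch of the statistical workflow described in 2.4 is given below. It is illustrative only: the absorbance values are hypothetical, only three of the nine groups are shown, and the normality check is noted in a comment rather than coded. It pairs a one‐way ANOVA with pairwise comparisons at a Bonferroni‐corrected alpha.

```python
# Illustrative sketch of the analysis in 2.4 (not the authors' script): one-way
# ANOVA on crystal violet absorbances across treatment groups, followed by
# pairwise t-tests judged against a Bonferroni-corrected alpha. Normality would
# be confirmed beforehand (Kolmogorov-Smirnov); all values below are hypothetical.
from itertools import combinations
from scipy import stats

groups = {
    "glycine":    [0.08, 0.07, 0.09, 0.08, 0.07, 0.08],
    "ultrasonic": [0.21, 0.19, 0.22, 0.20, 0.23, 0.21],
    "control":    [1.55, 1.60, 1.48, 1.52, 1.57, 1.59],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)                      # Bonferroni-corrected threshold
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.2g} ({'significant' if p < alpha else 'ns'})")
```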
RESULTS 3.1 Titanium surface treatments Titanium surfaces were examined with optical profilometry, SEM and elemental analysis and compared with commercial surfaces. No sample loss occurred during the study. 3.1.1 Surface parameters Data for surface topography are presented in Table . Variation within the groups was low, and all data sets followed a normal distribution, indicating that a consistent level of surface preparation had been achieved within any one group. Sa, Sq, Sz, Sp and Sv values were all significantly higher for both the abraded and SLA surfaces than for the smooth surfaces, while Sq, Sz and Sv values were all significantly higher for abraded surfaces than for SLA surfaces. 3.1.2 SEM and elemental analysis SEM images at a final magnification of commercial implants and titanium discs are shown in Figure . The reference commercial implants were Southern ITC for an abraded surface (Figure ) and Straumann Standard SLA for the SLA surface (Figure ). The topographical features of the prepared discs were broadly similar to those of the reference samples. Smooth surfaces were flat with occasional scratch marks (Figure ), while abraded surfaces had a rough irregular surface (Figure ). At higher magnifications, occasional fragments of aluminium oxide abrasive were found embedded within the surface. SLA disc surfaces showed a honeycomb‐like undulated etched surface. At a magnification of 20,000×, the SLA surface exhibited nano‐scale features. No residual aluminium oxide abrasive remained on the SLA surfaces. Elemental analysis of the smooth surface showed Ti (57.7%), oxygen (41.1%) and a trace of contaminating carbon (1.1%). The abraded surface was composed of titanium (41.8%), oxygen (43.2%) and aluminium (13.9%), with the latter reflecting the presence of some retained aluminium oxide particles. The SLA surface consisted of Ti (57.8%), oxygen (41.2%) and carbon (1.0%) and was comparable with the Straumann SLA surface (Ti 58.2%, oxygen 41.0% and carbon 0.8%). 3.2 Crystal violet biofilm assay results All debridement techniques resulted in greater than 80% reduction in biofilm compared with baseline, irrespective of the surface type (smooth, abraded, SLA). The extent of reduction was statistically significant for all treatments ( p < 0.001). Numerical results for all treatments are summarized in graphs 1–3, while the overall ranking for effectiveness is shown in Table . Glycine powder delivered through an air polishing system eliminated the most biofilm (94.83%–96.12%), and this was significantly better than all other debridement protocols, across all three surfaces. The next most effective approach was calcium carbonate powder delivered through an air polishing system. No statistical differences were observed between the remaining treatment groups. Sodium bicarbonate powder delivered with an air polishing system varied in its effectiveness across different surfaces and was ranked third most effective for SLA surfaces. Ultrasonic scalers were ranked third or fourth across all surfaces. Mechanical instruments such as brushes and hand scalers were the least effective at eliminating biofilm across all surfaces. The effect of applying citric acid was comparable to mechanical debridement instruments, in terms of biofilm removal efficacy.
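The percentage reductions reported above are derived from the crystal violet absorbances relative to the untreated control. A minimal sketch of that calculation is shown below; the OD570 values are hypothetical and are not the study's data.

```python
# Minimal sketch of the percentage-reduction calculation relative to the
# untreated control; OD570 values are hypothetical, not the study's data.
def percent_reduction(od_treated: float, od_control: float) -> float:
    return (1 - od_treated / od_control) * 100

control_od = 1.55
for name, od in {"glycine air polishing": 0.07, "titanium hand scaler": 0.28}.items():
    print(f"{name}: {percent_reduction(od, control_od):.1f}% reduction")
```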
3.3 SEM features of surfaces following debridement SEM examination revealed differences in the surface topography of the discs caused by the eight different treatments. Effects of air polishing are shown in Figure . Glycine powder was highly effective at biofilm removal and did not cause discernible surface alterations to any of the three surfaces studied. Sodium bicarbonate was effective at biofilm removal. Occasional abrasive particles were embedded into the smooth surface, but it did not scratch the surface. It did not cause discernible changes to abraded surfaces but caused some minor change to the nano‐scale roughness of the SLA surface. Calcium carbonate removed almost all biofilm across all three surfaces. Abrasive particles were embedded into the smooth surface, and there were scratch marks on the surface. There was some smoothing over of projections on abraded surfaces, with flattening of the nano‐scale projections and roughness of the SLA surface. Effects of conventional debridement instruments are shown in Figures and . The ultrasonic scaler used with a titanium scaler tip produced gross damage across all surfaces, causing grooves on the smooth surface and flattening projections on the abraded and the SLA surfaces. It did not remove all of the biofilm. Hand instruments of either carbon fibre or titanium left biofilm behind and caused considerable changes to the surface morphology, flattening off the surface projections of abraded and SLA surfaces. The Ni‐Ti brush caused gross modifications to all three surfaces (Figure ), with scratching of the smooth surface and flattening of projections, especially on the SLA surface. It did not remove all the biofilm. In the final treatment group, citric acid caused little to no surface change, but did not remove all of the biofilm (Figure ).
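For context on the roughness parameters reported in 3.1.1, the sketch below applies the standard areal-parameter arithmetic for Sa, Sq and Sz to a small hypothetical height map; this is not the Olympus LEXT implementation.

```python
# Hedged sketch of the standard areal-parameter arithmetic behind Sa, Sq and Sz;
# the small height map (in micrometres) is hypothetical, and this is not the
# Olympus LEXT implementation.
import numpy as np

heights = np.array([[0.2, -0.1, 0.4],
                    [0.0,  0.3, -0.2],
                    [-0.4, 0.1, 0.5]])

deviations = heights - heights.mean()
sa = np.mean(np.abs(deviations))         # arithmetic mean deviation from the mean plane
sq = np.sqrt(np.mean(deviations ** 2))   # root mean square roughness
sz = heights.max() - heights.min()       # maximum height, peak to valley
print(f"Sa = {sa:.3f} um, Sq = {sq:.3f} um, Sz = {sz:.3f} um")
```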
DISCUSSION 4.1 Strengths of the present study The originality of this study lies in the use of three different surfaces, which relate to commercially available implants, for testing a wide range of debridement techniques. The surface features mapped using profilometry and the elemental composition data show that the features relate to existing commercial implants, despite the surfaces treated being flat rather than curved. The present results extend those from past studies that have been limited to one implant surface type, that have used few debridement techniques, that did not have a biofilm present, or that used a single‐species biofilm rather than a multi‐species biofilm. In addition, the present study examined both surface modifications and the reduction in biofilm; past studies have often not assessed both aspects. 4.2 Limitations The present study has several limitations. The flat surfaces of the prepared discs may be less difficult to clean than the curved surfaces of implants, as they lacked macro‐scale features such as threads. Curved instruments may not have adapted well to these surfaces. The hand‐operated debridement instruments were used by an experienced clinician in a consistent manner, in line with ordinary clinical practice, but it was not possible to control and standardize the applied forces.
This is relevant to the performance of mechanical methods, where the applied pressure influences their effectiveness. The biofilm on the surface of the discs was developed under anaerobic conditions, but without shear forces being present. No mineralized biofilm was present on the discs; debridement may be more challenging in clinical situations where both soft and mineralized deposits are present. The crystal violet assay cannot assess metabolic activity and will stain both viable and dead microorganisms. Thus, it was not possible to determine whether any method had direct antimicrobial actions. Finally, whether the surface was biocompatible after debridement methods across the three surfaces remains unknown. Sousa et al. showed that both mechanical disruption (titanium brush) and a combination of mechanical and chemical agents (1% NaOCl and 0.2% chlorhexidine) that had been used for cleaning the titanium surface were ineffective for encouraging bone re‐contact in their in vitro peri‐implantitis model, possibly due to surface changes and/or a difference in the elemental composition of the titanium. 4.3 Disc surface treatments The results of this study show important differences between treatment methods in terms of biofilm removal and surface damage; hence, the null hypothesis was rejected. Most modern implant fixtures have textured surfaces produced by techniques such as abrasive particle beams, acid etching, anodic oxidizing and laser etching. These are intended to enhance the attachment of cells and thus the rate of osseointegration, but they also make debridement challenging. The surface roughness of implant surfaces can be categorized into three groups, as follows: smooth (Sa < 0.5 μm), minimally rough (Sa 0.5–1 μm) and moderately rough (Sa 1–2 μm), with most implants falling into the latter category. In terms of quantitative differences between the three surfaces, a full panel of parameters was assessed for each of the surfaces created on the titanium discs, and these were similar to published and measured values for Southern ITC implant and Straumann SLA implant surfaces. The abraded disc surface used in the present study was similar to the commercially available Southern ITC implant surface. Likewise, the SLA surface resembled the commercially available Straumann SLA surface. An SLA surface should be highly conducive to the formation of bone, and this may reduce healing time prior to loading. Typical Sa values for SLA surfaces of commercial implants are between 1 and 2 μm. Acid etching at elevated temperatures can be used to promote a thicker oxide layer, which may protect against corrosion. The SLA surface in the present study was created by abrasive blasting of the titanium discs with alumina beads followed by chemical etching, in this case with Multi‐etch®, rather than using corrosive mineral acids. Multi‐etch is a water‐based solution of ammonium persulphate (an oxidant) and sodium fluoride and has a pH near neutral (pH 6.8). The solution is easier and safer to handle than concentrated hydrofluoric acid. The SLA surface produced using Multi‐etch had an Sa of 1.30 μm and showed surface characteristics similar to the commercially available Straumann SLA surface in terms of morphology and chemical composition. The oxide layer thickness was not determined as this was not directly relevant to our study. 4.4 Debridement of surfaces Having established a mature, diverse, multi‐species biofilm (Table ) produced from whole saliva on the three types of surfaces, it was possible to compare eight different debridement protocols which may be used for the non‐surgical treatment of peri‐implantitis. Using the crystal violet assay, we compared the biomass of biofilm remaining on the discs after each treatment with the baseline. All treatment methods gave a greater than 80% reduction in biofilm biomass when compared to baseline (Figures , , ).
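As a point of clarification (the calculation is not spelled out in this excerpt), percentage reductions of this kind are conventionally derived from the optical density (OD) of the solubilized crystal violet stain relative to untreated baseline discs:

\[ \text{Reduction}\ (\%) = \frac{OD_{\text{baseline}} - OD_{\text{treated}}}{OD_{\text{baseline}}} \times 100 \]

so a value of 95% means that roughly 5% of the stainable baseline biomass remained after treatment. Because the stain binds viable and dead cells alike, this is a measure of residual biomass rather than of viability.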
Overall, this is a better result than other studies, including those using ultrasonic scalers. It is not possible to comment on whether any method had superior antimicrobial actions, although such effects could give better clinical outcomes. There is evidence for glycine having such actions, as shown in laboratory studies of the removal and recolonization of Streptococcus gordonii biofilms on abraded surfaces, where there was reduced recolonization. Similarly, another study has shown that glycine powder used in an air polishing system gave 99.9% elimination of viable Streptococcus sanguinis. 4.5 Advantages of air polishing Within the limitations of the disc model used in the present study, glycine powder delivered with an air polishing system was overall the superior approach, with the best biofilm removal from all surfaces and the least surface damage or alteration, compared with all other debridement protocols. It gave complete biofilm elimination from smooth and SLA surfaces; however, traces of biofilm remained on abraded titanium surfaces. These results confirm and lend further weight to other positive findings for the use of glycine powder, including reports that glycine powder does not impair fibroblast attachment and is biocompatible. The results of the present study also show that glycine powder used in an air polisher is a worthwhile approach for removing biofilm without causing surface changes. This is consistent with past findings which support its use for debridement of rough implant surfaces. Several in vitro studies have shown such powders to give equal or superior debridement when compared to other protocols. Of note, a recent study which assessed the elemental distribution of failed titanium SLA surface implants treated by air polishing with glycine followed by 17% EDTA application showed this combination was more conducive to restoring the original elemental distribution. With any air powder abrasive system, there is an inherent trade‐off between debridement efficacy and surface damage. Glycine has the lowest density (1.61 g/cm³) and smallest particle size (25 μm) of the three powders that were tested in the present study. As a result, it will transfer less energy when it impacts onto the surface than sodium bicarbonate or calcium carbonate. This most likely explains why glycine did not cause discernible surface changes to the titanium surfaces used in the present study. These results should be considered with caution given that flat disc surfaces were used, and these are far more accessible than the undercuts and threads of an implant fixture. When considering an air polishing approach, factors other than the powder type could affect biofilm removal and surface damage, including the target (titanium alloy hardness), surface shape, duration of exposure, air pressure, angle of the delivery tip and distance of the tip from the surface being treated. All such factors need to be considered in the clinical setting.
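To give a sense of scale for this energy argument, a rough back-of-the-envelope comparison can be made by treating the particles as spheres travelling at equal impact velocity, using the particle diameters quoted in this discussion (glycine 25 μm; calcium carbonate 55 μm) and a typical bulk density for calcium carbonate of about 2.7 g/cm³ (a textbook value, not reported in the study):

\[ E_k = \tfrac{1}{2}\,\rho\,\tfrac{4}{3}\pi r^{3}\,v^{2}, \qquad \frac{E_{\mathrm{CaCO_3}}}{E_{\mathrm{glycine}}} \approx \frac{\rho_{\mathrm{CaCO_3}}}{\rho_{\mathrm{glycine}}}\left(\frac{d_{\mathrm{CaCO_3}}}{d_{\mathrm{glycine}}}\right)^{3} \approx \frac{2.7}{1.61}\times\left(\frac{55}{25}\right)^{3} \approx 18. \]

Under these simplifying assumptions, a single calcium carbonate particle carries roughly an order of magnitude more kinetic energy than a glycine particle, which is consistent with the greater surface alteration described below.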
Following glycine, the next most desirable approach was using calcium carbonate in an air polisher. Like glycine, this gave significantly better biofilm removal than the remaining methods, for all three surfaces. Calcium carbonate powder did alter the SLA surface, reducing some of the microscopic projections, and some scratching of the smooth surface was also noted. Particle residues of both calcium carbonate and sodium bicarbonate remained after treatment, while in contrast no powder residues were seen after glycine powder treatment. This is consistent with studies showing contamination of the surface with sodium bicarbonate resulting in an altered titanium surface composition, subsequently varying the regeneration ability of the surface. In a previous study, we explored the use of the same three powder types at varying air pressures in an in vitro biofilm removal model using Southern implants with an abraded surface (grade IV titanium). In that study, where removal of ink from the surface was used as a surrogate for biofilm removal, both calcium carbonate (55 μm) and sodium bicarbonate (76 μm) performed more efficiently at a pressure of 2.5 bar than glycine powder (25 μm). Of note, glycine caused the least amount of damage to the implant surface, whereas calcium carbonate and sodium bicarbonate both produced surface alterations, such as rounding of angles and flattening of peaks, in the same pattern as seen on the flat abraded titanium discs used in the present study. A note of caution must therefore be made about using large, dense particles in air polishers, particularly at high pressures, because of the risks of altering implant fixture topography by reducing its surface area and micro‐roughness, and of scratching smooth regions such as the implant neck. It would be prudent to balance biofilm removal efficiency against implant surface damage when choosing a particle material for use in air polishing. Smaller and less dense particles such as glycine may be better for debriding areas such as threads, as they are less likely to change the topography of the surface. In support of this, in the present study, all three air polishing protocols gave slightly better biofilm removal on the SLA surface compared to the smooth surface. In contrast, all other debridement protocols showed a higher efficiency for biofilm removal on the smooth surface compared with both roughened surfaces. 4.6 Limitations of conventional instrumentation The results of the present study show the limitations of using traditional instrumentation, even when fitted with tips that are intended to be more ‘implant safe’. The ultrasonic scaler with a titanium tip, the carbon fibre and titanium hand scalers and the Ni‐Ti brush were, as a group, the least effective for biofilm elimination. Differences between these treatments were not statistically significant. Moreover, all of these approaches caused considerable surface alterations to all three titanium surfaces. This is consistent with studies that show that traditional instrumentation is not as effective for biofilm removal as air polishing methods. Areas such as threads will show more flattening and damage where they can be readily accessed, and the converse is also true. The present study used flat discs and so is more representative of easily accessible surfaces, which are more prone to damage from instruments.
There was no standardization of the applied force used with the hand instruments, which is a limiting factor for the study; however, the technique used was in line with everyday clinical practice. The results for the Ni‐Ti brush were concerning, as this gave the most surface damage to the three surfaces that were used. Some previous work suggests that titanium brushes will not affect surface roughness when applied at a low rotational speed (300 rpm) for 40 s with concurrent irrigation. In contrast, in the present study the Ni‐Ti brush was used at a higher rotational speed of 900 rpm for 30 s, but without water irrigation. The higher speed and lack of irrigant may explain the greater effects seen in the present study, with gross surface changes from such brushes. Another relevant factor is that the present study used titanium discs with a flat surface carrying a mature (4‐day‐old) biofilm (Figure ). It is possible that the biofilm may have altered the titanium oxide surface layer. 4.7 Citric acid treatment As there is little benefit in achieving a clean surface if the treated surface is not biocompatible and does not enable osseointegration, it was of interest in the present study to include one purely chemical, ‘atraumatic’ method of surface debridement. To this end, 40% citric acid was applied with a cotton swab for 60 s using a horizontal rubbing motion. This gave a level of biofilm reduction that was comparable with mechanical debridement (84.71%–90.05%), and minimal alterations to the titanium surface. It is known that citric acid, when used at concentrations of 20%–40%, can etch titanium surfaces, particularly smooth and SLA surfaces, causing pitting and corrosion of a smooth surface. The positive results seen for citric acid reflect the short treatment time used. The results for citric acid in the present study are consistent with several past studies which reported a reduction in biofilm levels with citric acid, while maintaining a biocompatible titanium surface. Further studies should explore how changes in biofilm composition affect its removal using different methods. The present study used a multispecies biofilm model based on saliva as the inoculum. This method was originally developed in our laboratory to assess microbiome changes in cariogenic biofilms. Increasing the shear forces applied during biofilm formation would generate a biofilm with different physical properties that could be more challenging to remove.
CONCLUSIONS Non‐surgical treatment for peri‐implant mucositis and peri‐implantitis aims to eliminate biofilm and to decontaminate the surface, without causing gross modifications to the surface topography. The use of glycine in an air polisher and the application of 40% citric acid both gave minimal alterations across all implant surfaces studied, with glycine being the superior method in terms of biofilm removal. Mechanical instrumentation methods, especially those using hand instruments, all caused gross surface alterations, and left biofilm remaining on the surface. Consequently, these do not present a viable approach for decontamination of implant surfaces. CLINICAL RELEVANCE No standardized protocols exist for the effective removal of biofilm in cases of peri‐implantitis.
In this laboratory study, eight commonly used implant debridement methods were trialled, with results showing that an air polisher with glycine powder was the most effective with a range of 94.84–96.53% reduction in biofilm across different implant surfaces when a mixed species biofilm was used. Conventionally used mechanical instruments (such as curettes and ultrasonic steel tips) were the most damaging to implant surfaces. These results support the use of an air polisher with glycine, for purposes of accessibility and effective decontamination without surface alteration, in the clinical setting. L.J.W. and N.M. conceptualized and supervised the study; C.T., N.M. and L.J.W. contributed to methodology; C.T. collected the data; C.T., N.M. and L.J.W. analysed the data; C.T., A.K. and L.J.W. wrote the manuscript. All authors have read and agreed to the published version of the manuscript. No conflicts of interest have been declared by the authors. This work was supported by grants from the Australian Dental Research Foundation and the Australian Periodontology Research Foundation. Author C.T. was supported by an NHMRC Dental Postgraduate Scholarship.
Awareness of Universal Design for Learning among anatomy educators in higher level institutions in the Republic of Ireland and United Kingdom
INTRODUCTION Anatomy is an essential pillar of healthcare programs and the foundation for safe and effective practice (Sugand et al., ). Healthcare professionals have voiced their concern about poor anatomy knowledge among recent healthcare graduates (Bhangu et al., ; O' Keeffe et al., ) in particular their poor anatomical competency and lack of preparedness upon entering residency programs (Fillmore et al., ), suggesting that anatomy curriculum design requires reform. In the modern curriculum, there are constraints on the amount of time allocated to the formal teaching of anatomy within healthcare programs, and more specifically time dedicated to dissection (Harrison et al., ; Jeyakumar et al., ). In the Republic of Ireland (ROI) and United Kingdom (UK), anatomy is typically taught using an in‐person didactic format complemented with practical laboratory sessions to consolidate student learning (Smith et al., ). Both of these teaching modalities have been identified as critical for students' anatomy learning experience (Farkas et al., ). Recent literature indicates that these traditional methods should not be used in isolation but rather alongside new innovative methods to enhance accessibility of learning material and to promote student engagement and interest in anatomy (Dempsey et al., ; Iwanaga, Loukas, et al., ; Lochner et al., ). For example, studies have identified that modern teaching strategies, such as gamification, virtual and augmented reality which align with multiple means of representation, result in similar levels of knowledge acquisition and collaborative activity as traditional methods (Moro et al., ; Rezende et al., ). However, these modern methods have the added bonus of stimulating student autonomy and increasing satisfaction among learners (Alfalah et al., ; Rezende et al., ) all of which aligns with the Universal Design for Learning (UDL) framework. Furthermore, research has highlighted that certain teaching strategies may not suit all students, as students differ in academic preferences and abilities (Hu et al., ; Quinn et al., ; Ruthberg et al., ). The learning styles and social and cultural background of the student population is continually changing as more learners are traveling and migrating to enter third level education (Nortvedt et al., ; Vos et al., ). Specifically, in 2017 international students comprised 12.5% of the entire third level student population in the ROI (Department of Education, ) and this number is increasing as 18.4% of all graduates in the ROI in 2019 were international students (CSO, ). Additionally, widening participation agendas promote the accessibility of third‐level education (House of Commons, ). Thus, more students from diverse socio‐economic backgrounds or with learning difficulties or disabilities are entering higher education both in the ROI and UK (GOV, ; HEA, ). Therefore, to sustain student interest and engagement in anatomy education, educators need to adapt their curriculum to the present student population and educational environment (Chan et al., ; Jeyakumar et al., ; Kahu & Nelson, ). In March 2020, all higher level institutions in the ROI and UK were required to pivot to online teaching during the global COVID‐19 pandemic (Franchi, ; Longhurst et al., ). Educators had to adapt quickly to ensure that students could continue to learn and engage with their material, albeit remotely (Evans et al., ). 
Anatomy educators became creative and innovative in their teaching methods (de Carvalho Filho et al., ; Dickinson & Gronseth, ; Evans et al., ; Harmon et al., ; Iwanaga, Loukas, et al., ; Yoo et al., ). For example, Harrell et al. incorporated online laboratory dissection videos, Flynn et al. introduced a 3‐dimensional anatomical modeling program to medical students, and Zarcone and Saverino used Leica Acquire, a virtual microanatomy application, to teach pathological anatomy. Additionally, new teaching methods such as Augmented Reality (AR) (Iwanaga, Terada, et al., ) and asynchronous and synchronous lectures (Byrnes et al., ; Harrell et al., ) were utilized. Opportunities for students to demonstrate their understanding in multiple formats such as presentations (Flynn et al., ; Keet et al., ) and online breakout rooms (Al‐Neklawy & Ismail, ) were also provided. As we emerge from the pandemic and return to on‐campus teaching there is an increasing awareness of the benefits of student engagement to allow students to express their knowledge and understanding in a format which is most appropriate and accessible for them (Singh et al., ). These are fundamental features of UDL, a learning theory first articulated in the United States (US) in 1990. Universal Design for Learning is an educational framework designed to ensure that all types of learners can participate, engage and thrive in the same learning environment (CAST, ; Fornauf et al., ). The framework was developed by the Centre for Applied Special Technology (CAST) in the US as a guide for educators to design an accessible curriculum with the aim of changing the design of the learning environment rather than changing the learner (Bedrossian, ). The authors propose that anatomy students are encouraged to become life‐long and resilient learners through an enhanced educational experience that meets their learning preferences. To help accomplish this, three guiding principles were articulated, namely multiple means of engagement, multiple means of representation and multiple means of action and expression. Each principle is divided into three guidelines and each guideline has a set of associated checkpoints (31 in total) which educators may use in the design and delivery of curriculum, in order to enhance accessibility for all students (CAST, ) (Figure ). The aim of UDL is to afford the opportunity to all students to optimize their learning capabilities while also fulfilling the learning outcomes. If there are certain learning outcomes that need to be met or assessed, then there may be limited opportunity for flexibility. The authors contend that promoting and increasing knowledge and understanding of anatomy enhances safety in anatomy practice (Smith et al., ), which is a core component of the skillset needed for a professional degree (Lewis et al., ; Simons et al., ). However, the incorporation of UDL in the design and development of assessments in vocational degree programs requires further research as there is currently little information available. CAST advocate that providing learners with multiple options to engage with the learning material will help learners achieve their potential (CAST, ). Since the inception of UDL in primary school settings, educators in Canada and the US have reported that students retained increased amounts of class content, that student communication skills were enhanced and that participation in classroom activities had increased (Katz, ; Lowrey et al., ). 
A meta‐analysis carried out by Capp reported that UDL is an effective teaching strategy for enhancing the learning experience of all students with and without disabilities in environments ranging from second level to third level education (Capp, ). However, it must be determined if these promising results and benefits, as a result of embedding UDL in the curriculum of primary and secondary education, would translate to students enrolled in higher level institutions. Black et al. reported that presenting learning material in various formats, regularly providing students with constructive feedback and assessing students using diverse methods was beneficial to learning in a cohort of students enrolled in a higher level institution, in the US, all of which aligns with UDL checkpoints (CAST, ). Murphy et al. suggested that because UDL has been implemented in an effective manner to aid students with disabilities in their transition to higher level institutions, then perhaps it could potentially be used to help students without disabilities in their transition to higher level institutions (Murphy et al., ). The UDL framework incorporates strategies like flexibility and accessibility which any student enrolled in higher level education, regardless of ability, will find beneficial (Griful‐Freixenet et al., ). The UDL educational framework offers potential for enhancing students' engagement in anatomy learning in higher level institutions (Dempsey et al., ). Many of the teaching strategies incorporated into anatomy education during the pandemic are aligned with the UDL framework, although none of the studies (Byrnes et al., ; Harmon et al., ; Harrell et al., ) specifically mention that UDL underpinned the change in approaches to teaching anatomy. Although there is a lack of published research about the explicit utilization of UDL in anatomy education in general (Dempsey et al., ), it is considered an effective tool to enhance student learning of anatomy with a recognized need for multiple means of engagement, representation, action and expression. Research by Balta et al. and Dempsey et al. shows that anatomy educators currently, and perhaps unknowingly, use various teaching methods and strategies when designing their curricula including the use of technology in its many forms (Bell et al., ; Ruthberg et al., ). Particularly, Ruthberg et al. incorporated mixed reality as an alternative to cadaveric dissection to utilize students' time more efficiently and, in turn, improve practical examination scores. They concluded that their mixed reality platform “HoloAnatomy” reduced the amount of time required for learning musculoskeletal anatomy without compromising student understanding, but they also acknowledged that not all students benefited from 3D resources and that individuals with less developed spatial abilities required more time viewing the 3D images (Ruthberg et al., ). This is an example of how implementing one teaching method in isolation will not cater for all students equally. Therefore, there is a need for a variety of teaching methods to ensure that content is accessible to all students. Bell et al. included the use of ultrasound in their curriculum to nurture an active learning environment for medical students studying the anatomy of the floor of the mouth, a teaching method which 97% of participating students ( n = 31) agreed improved their learning. 
Furthermore, research has identified active learning as an essential element in creating a productive learning environment as it fosters student participation and engagement (Felder & Brent, ) all of which are aims of UDL (CAST, ). The approach of Bell et al. to incorporate active learning to help students become independent thinkers, also aligns with the aim of UDL (CAST, ). Universal Design for Learning caters for students with diverse learning needs, such as poor spatial recognition. This can be accomplished in anatomy curricula by including teaching strategies such as three‐dimensional visualization software (Jamil et al., ; Lone et al., ) which allows students to view the learning material in different ways, which in turn helps with comprehension among learners. More specifically, recent publications have reported the use of UDL in healthcare programs. Dickinson and Gronseth described the benefits and utility of the UDL framework for surgical education during the COVID‐19 pandemic when face‐to‐face interaction was limited. Specifically, Dickinson and Gronseth used simulation, online resources, and mobile applications to aid the delivery of their curriculum to surgical residents in the US. Murphy et al. investigated, by means of an online survey, the implementation and knowledge of the tenets of UDL among occupational therapy educators in the US. They concluded that there is a need for more extensive use of UDL in preparation for teaching and continuing professional development as they propose its implementation would improve student learning and preparation for entering the discipline of occupational therapy (Murphy et al., ). Additionally, Murphy et al. highlighted throughout their study that, as the student population in higher level institutions and student preferences for learning are becoming more diverse, UDL will become the key to ensuring that all occupational therapy students can be ready for effective clinical practice. Studies analyzing healthcare education in higher level institutions indicate that when designing curricula, educators may incorporate methods which align with the UDL framework, without explicitly stating that UDL was utilized (Dempsey et al., ). Furthermore, anatomy educators may be utilizing the UDL framework but may not have measured or published the impact of its utilization in healthcare programs. For both reasons, an exploration of anatomy educators' knowledge and use of UDL is necessary and timely. The aim of this study was to determine if anatomy educators in the ROI and UK were aware of the UDL framework and to assess if, and to what extent, they have been incorporating UDL in their curriculum design and delivery for healthcare students. Furthermore, the study explored if anatomy educators have identified or measured an impact on student learning, engagement, or motivation as a result of utilizing UDL; if there is an association between teaching experience and the incorporation of UDL; and finally, if educators have identified any barriers to implementing the UDL framework in their anatomy curricula. MATERIALS AND METHODS 2.1 Study design The authors developed and administered an anonymous online questionnaire to academic anatomy educators who were teaching in the anatomy departments of higher level institutions in the ROI and UK. A list of such higher level institutions was available on the Anatomical Society's website (Anatomical Society, ). 
The Anatomical Society seeks to promote and advance research and education in all aspects of anatomical science. The email addresses of anatomy educators from the above listed higher level institutions were accessed through each of the respective institutions' websites. The online questionnaire was distributed via email from the first author (A.M.K.D.) to potential participants. The questionnaire was accessible through the survey platform Microsoft Forms (Microsoft Corp., Redmond, WA). Ethical approval was obtained from the Institutional Social Research Ethics Committee [Log 2021‐082]. 2.2 Questionnaire design The questionnaire was divided into two sections. Section one gathered demographic information such as academic position, methods of teaching and assessing anatomy and the healthcare programs on which educators taught anatomy. Section two focused on the respondents' knowledge, experience and opinion of the UDL educational framework. The questions in section two were a mixture of both open and closed questions. Open‐ended questions were included to allow participants to elaborate and expand on their responses. The questionnaire was open to participants for 12 weeks, from July 5, 2021 to October 1, 2021. Reminders were sent out fortnightly via email to all potential participants. Although the questionnaire was distributed during the COVID‐19 pandemic, none of the questions specifically mentioned the pandemic. 2.3 Data analysis All data were exported to Microsoft Excel (Microsoft Corp., Redmond, WA) and frequency tables were created for categorical variables. Data were entered manually into the Statistical Package for the Social Sciences (SPSS), version 22 (IBM Corp., Armonk, NY). Descriptive analyses were completed, including the lambda coefficient to identify associations between teaching experience and the utilization of UDL. Inductive content analysis was used to explore anatomy educators' opinions of UDL through the open‐ended questions (Elo & Kyngäs, ; Kyngäs, ). Verbatim quotes that captured the concepts articulated by a number of participants were included.
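For readers unfamiliar with the lambda coefficient, the association reported later (Section 3.4) can be reproduced outside SPSS from a contingency table of teaching experience against UDL use. The sketch below is illustrative only: the function follows the standard Goodman–Kruskal definition, and the example counts are hypothetical rather than the study data.

```python
import numpy as np

def goodman_kruskal_lambda(table: np.ndarray) -> float:
    """Goodman-Kruskal lambda, predicting the column variable from the row variable.

    `table` is a rows x columns contingency table of counts
    (rows = categories of the predictor, columns = categories of the outcome).
    Lambda is the proportional reduction in prediction error obtained by
    knowing the row category; 0 means no predictive association.
    """
    n = table.sum()
    e1 = n - table.sum(axis=0).max()   # error when always predicting the modal column
    e2 = n - table.max(axis=1).sum()   # error when predicting the modal column within each row
    return 0.0 if e1 == 0 else float(e1 - e2) / e1

# Hypothetical example: rows = bands of teaching experience,
# columns = [has used UDL, has not used UDL]
example = np.array([
    [3, 9],    # <5 years
    [5, 12],   # 5-10 years
    [4, 11],   # 10-20 years
    [3, 14],   # >20 years
])
print(round(goodman_kruskal_lambda(example), 3))  # values near 0 indicate little or no association
```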
RESULTS The overall response rate was 23% ( n = 61). The majority of anatomy educators who participated in this study were at lecturer level ( n = 29, 48%) and were located in England ( n = 39, 64%). The teaching experience of the participants ranged from less than 1 year to over 20 years. The majority of anatomy educators who taught on undergraduate programs taught medical students. Similarly, the majority of anatomy educators who taught on graduate entry programs (to students who already had an undergraduate degree in another discipline) also taught medical students. A large variety of methods were utilized to teach anatomy. Plastic models were reported to be the most utilized method ( n = 54, 89%), closely followed by imaging ( n = 52, 85%), didactic teaching ( n = 52, 85%), technology ( n = 52, 85%) and prosections ( n = 51, 84%). Teaching anatomy with microscopes was the least utilized method by anatomy educators ( n = 16, 26%). Participating educators were asked to indicate the most commonly used assessment methods for anatomy. Multiple choice questionnaires (MCQ) were the most selected method of assessment among the respondents ( n = 56, 92%), followed by short answer questions ( n = 39, 64%). Student self‐assessment was the least utilized method of assessment ( n = 15, 25%). All information from section one of the questionnaire is summarized in Table . 3.1 Awareness of UDL Of the 61 respondents, only 19 (31%) stated that they had heard of the term UDL prior to completing the questionnaire. Of these 19 educators, 15 (79%) (and thus 25% of all respondents) had incorporated elements of UDL in their teaching of anatomy. 3.2 Incorporation of UDL Respondents were asked whether the design and delivery of their anatomy curriculum aligned with any of the UDL checkpoints. Every respondent identified at least one checkpoint from each of the three UDL principles in their curriculum. The most utilized checkpoint, from the multiple means of engagement principle, was “optimize relevance, value and authenticity” ( n = 50) and the least utilized was “increase mastery‐orientated feedback” ( n = 13) (Figure ). The most utilized checkpoint from the multiple means of representation principle was “activate or supply background knowledge” ( n = 51), followed closely by “illustrate through multiple media” ( n = 50).
The least utilized checkpoints from this principle were “support decoding of text, mathematical notation and symbols” ( n = 5) and “promote understanding across languages” ( n = 8) (Figure ). The most utilized checkpoint from the multiple means of action and expression principle was “use multiple media for communication” ( n = 54) and the least utilized was “build fluencies with graduated levels of support for practice and performance” ( n = 14) (Figure ). Thirteen (21%) of the anatomy educators responded to the open‐ended question “If you have utilized elements of UDL, can you give an example of how?”. Eight of these 13 (62%) stated that they provide multiple means of representing the learning material. This is achieved by using several different formats including videos with closed captions, Microsoft PowerPoint slides with background information, digitally accessible documents, prosections, plastic models, and body painting. Five of these 13 (38%) respondents stated that they incorporate multiple means of engagement into their anatomy curricula by providing opportunities for the students to work together in pairs or small groups or by encouraging students to participate in self‐directed learning. Two of these educators (15%) provided multiple means of action and expression by integrating a variety of assessment methods into the curriculum and by supporting the students to present their learning in a manner which best suits their learning style. For example, they encouraged students to palpate and draw surface anatomy and incorporate anatomy maps or applications when presenting their learning to their peers. 3.3 Impact of UDL on student learning, engagement, or motivation Inductive content analysis was used to identify the impact of UDL on student learning, engagement and motivation, as perceived by the participants. Of the 13 anatomy educators who responded to the open‐ended question “From your experience, how do you think UDL benefits teaching and learning within an anatomy module?”, the majority ( n = 11, 85%) stated that UDL provides students with options for interacting with the learning material which in turn promotes engagement among students, empowers them to learn, creates a nurturing environment and makes it easier for students to access information. More specifically educators stated that UDL “ reduces the need for reasonable adjustment ,” “ enables different cognitive processes to process information giving a richer learning experience and leading to deeper knowledge and longer‐term retention . It offers flexibility and choice for learners but unless it is guided it can be overwhelming ,” and “ improves engagement , appeals to wider variety of learners , helps students with self‐assessment of progress and keeps instructors aware of individual learning needs of students .” Furthermore, another educator commented that “ the (anatomical) language itself can be challenging and so it is important to ensure that this is clearly explained and providing a UDL approach gives students the best opportunity to be able to understand and engage not only with the physical anatomy but also with the language ” and “ variety is key as some students will engage with some aspects of the teaching and not in others .” Twelve respondents (20%) gave their opinion on whether the incorporation of UDL into curricula had an effect on student motivation to study anatomy. Two respondents did not think that the incorporation of UDL motivates students to learn anatomy. 
Three respondents said that they were not sure of the effect of UDL on motivation but that they were optimistic that it would have positive results. Specifically, one respondent said that they were “ not sure of the effect ” but that UDL has “ the potential for enhancing the student experience and in turn widening participation and motivation .” The remaining seven educators (58%) agreed that UDL has an effect on student motivation. Four of these educators did not elaborate on whether it was a positive or negative effect but of the other three, one stated that they have seen an increase in motivation among graduate entry students, another said that “ it (UDL) gives them (students) more freedom to learn in a way that is suitable for them and allows them to explore what works for them ” and the final respondent said “ it (UDL) has made the subject more ‘fun’ and less dry . Anatomy is not something to be learned and forgotten , it is meant to be applied to healthcare .” The anatomy educators were asked “if you are not familiar with UDL, from the description of the framework, what potential, if any, do you see UDL having for teaching and learning in anatomy curricula?”. Of the educators who were not familiar with UDL (69%), their opinion of the framework was mixed. Six (14%) of these respondents reported minimal or no potential of UDL application for teaching and learning in anatomy curricula. The remaining responses in relation to the potential of UDL were positive. For example, comments included “ lots of potential , especially for enhancing the student experience and being inclusive to different groups ,” “ potential to give students a learning experience that best suits their individual needs within an intended outcome framework ,” “ it would seem that UDL encapsulates those principles that are crucial for recognizing that individuals learn differently and facilitating such individual learning ,” and “ I think there is definitely scope for us to think more about providing variety of learning experiences within anatomy teaching ”. A word map was created to graphically represent the responses of participants to the open‐ended question “what potential, if any, do you see UDL having for teaching and learning within anatomy curricula?” to emphasize the recurring opinions. The size of a particular word in the figure corresponds with how frequently it appeared in the responses of the educators (Figure ). Word maps align with the UDL framework under the multiple means of representation principle. In particular, they align with Guideline 1, Checkpoint 1.1 “Offer ways of customizing the display of information” (CAST, ). For some readers, an image illustrating the most common responses may aid interpretation of the results. 3.4 Association between teaching experience and incorporation of UDL A lambda coefficient statistical test was carried out to determine whether there was an association between respondents' teaching experience and their utilization of UDL. This resulted in a lambda score of 0.022 suggesting that there is no association between the number of years of teaching experience and the utilization of UDL among the responding anatomy educators. 3.5 Barriers to implementing the UDL framework in anatomy curricula A small number (7%) of the anatomy educators commented on the practicalities of implementing UDL in anatomy curricula. 
3.5 Barriers to implementing the UDL framework in anatomy curricula

A small number (7%) of the anatomy educators commented on the practicalities of implementing UDL in anatomy curricula. Specifically, there was reference to a staff shortage: "with an intake of over 400 students per year and very few staff, personalization is not an option" and "it (UDL) is hard to formally implement when there is a staff shortage." Another educator said that UDL "fits with best practice on teaching and learning but offers challenges in relation to assessment."
DISCUSSION

At the outset of the present study, the majority of the respondents were unaware of the UDL framework. However, once they gained some information about UDL and the associated principles and checkpoints, they realized that they had, seemingly unknowingly, been implementing elements of the framework in their curriculum design and delivery. This suggests a need to inform educators of UDL and the potential benefits for teaching and learning of anatomy to healthcare students, especially when many of the respondents recognized the potential benefits of UDL in their responses to the open‐ended questions. The present study did not investigate where educators, who were aware of the framework, had previously heard of UDL or if they received guidance on how to incorporate UDL into their curricula. Further research is needed to understand where and how educators are being made aware of the UDL framework, and whether supports are available to help guide them through the process of incorporating UDL into their anatomy curriculum design. Fifteen of the respondents stated that they had been incorporating UDL into their anatomy curriculum and all 61 respondents identified at least one checkpoint from each of the UDL principles in their curriculum.
This suggests that educators are already incorporating elements of the UDL framework, albeit potentially unknown to themselves. A scoping review analyzing the design and delivery of anatomy curricula in healthcare programs in higher level institutions revealed that when designing curricula, educators may incorporate methods which align with the UDL framework, without explicitly stating that UDL was utilized (Dempsey et al., ). However, there are still a number of checkpoints which very few respondents of this study identified in their curriculum design. For instance, only 21% of educators identified the checkpoint “increase mastery‐orientated feedback,” 31% identified the checkpoint “minimize threats and distractions” and 37% identified the checkpoint “optimize student choice and autonomy” (Figure ). Each of these checkpoints is categorized under the multiple means of engagement principle which helps foster self‐directed and motivated learning. Various studies have shown that providing students with autonomy over their own learning is vital to sustain engagement and interest in the subject matter (Alsharari & Alshurideh, ; Goodman et al., ; Hensley et al., ). Similarly, it has been shown that providing students with immediate feedback allows students to engage with the learning process (Blondeel et al., ; Young et al., ). There is no mandatory number of UDL checkpoints which educators should implement (Rao et al., ), as some checkpoints are not appropriate for all subjects. For example, under the multiple means of representation principle, the checkpoint “support decoding of text, mathematical notation and symbols” is difficult to implement within anatomy curricula and therefore it was not surprising that only 8% of the respondents identified this checkpoint in their own curriculum design. Similarly, few respondents (13%) identified the checkpoint “promote understanding across languages” (Figure ). In comparison, a high number of respondents stated that their curriculum aligned with checkpoints such as “activate or supply background knowledge” (84%), “illustrate through multiple media” (82%) and “highlight patterns, critical features, big ideas and relationships” (67%) (Figure ), all of which are from the multiple means of representation principle and have been established as influencing successful student learning across various disciplines (Manthra Prathoshni et al., ; Vieira et al., ; List et al., ; Ulfa et al., ). Specifically in anatomy education, Zafar and Zacher concluded that representing anatomy material in various formats, such as complementing the use of cadavers with AR, increased enjoyment and engagement among dental students. In relation to the multiple means of action and expression principle, some of the least identified checkpoints were “support planning and strategy development” (38%), “enhance capacity for monitoring progress” (36%) and “vary the methods for response and navigation” (36%) (Figure ). However, studies have identified that varying the way students navigate and participate in the learning environment, and guiding student goal‐setting helps nurture strategic and goal‐directed anatomy learners (Donkin & Rasmussen, ; Eleazer & Scopa Kelso, ; Grønlien et al., ; Hernandez et al., ). 
Incorporating each of these guidelines would provide learners with the opportunity to express their knowledge in a manner which is most appropriate to them, and to track their progress, which in turn would allow them to identify areas where they may be struggling, or indeed excelling, all of which helps learners to become strategic and goal driven (CAST, ). Edyburn and Edyburn describe the essential practices required to help any educator implement UDL in a manner which provides meaningful support for the diverse student population so that every learner can be successful. The authors emphasize that there is a requirement to understand the philosophy of UDL, but additionally that there is a need to bridge the gap between knowing about and implementing UDL. The authors propose the utilization of UDL for the design and delivery of anatomy curricula, as UDL encapsulates a number of pedagogical theories, such as self‐determination theory (SDT) (Deci & Ryan, ; Hu & Zhang, ), generative learning theory (GLT) (Brod, ; Wittrock, ) and cognitive flexibility theory (CFT) (Spiro et al., ) into one framework. Universal Design for Learning also aligns with the FAIR principles postulated by Harden and Laidlaw . Therefore, educators may look to one framework for guidance, when designing their curriculum delivery, rather than multiple different theories. The present study illustrates the knowledge and perceived impact of UDL among anatomy educators in the ROI and UK. At the outset of this study, many of the respondents stated that they were not aware of the UDL framework, but at the end of the questionnaire they stated that they believed they were implementing elements of the framework in their curriculum design and delivery. These anatomy educators utilized various accessible and inclusive teaching methods and, in a number of cases, noticed improvements in student motivation and engagement as a result. The questionnaire provided educators with a very basic description of UDL. Perhaps with more detailed information they would be able to identify the areas in their curriculum that could be updated or modified. Teaching methods that align with the UDL framework have been reported to have a positive impact on learning among students with a variety of learning preferences across an array of programs in a higher level institution in the US (Black et al., ). The authors concluded that representing learning material in various formats, regularly providing students with constructive feedback, and assessing students using diverse methods was beneficial to student learning, all of which aligns with UDL checkpoints (CAST, ). Murphy et al. suggested that since UDL has been effectively implemented in aiding the transition of students with disabilities to higher level institutions, then perhaps it could potentially be used to help students without disabilities in their transition to higher level institutions (Murphy et al., ). Furthermore, the UDL framework incorporates strategies like flexibility and accessibility to learning material which any higher level education student, regardless of ability would find beneficial (Griful‐Freixenet et al., ). The respondents' teaching experience ranged from less than 1 year to more than 20 years. There was no association between experience and the utilization of UDL, indicating that the educators' awareness of UDL and its implementation in anatomy curricula is not reliant on the extent of teaching experience. 
Inductive content analysis was used to explore the participating anatomy educators' opinions of UDL. From this analysis it became clear that, according to the respondents, the main barrier to incorporating the UDL principles into anatomy curricula is a staff shortage. Staff shortage in anatomy programs has been documented (Kramer et al., ; Wilson et al., ). Thus, there may be reluctance to implement a new teaching framework in anatomy curricula because of the increased workload that typically accompanies curricular change. However, respondents may not yet be aware of recent publications guiding educators to incorporate UDL into their curriculum design and delivery (Cotán et al., ; Edyburn, ; Luke, ; Xie & Rice, ). Arguably, once anatomy educators are more knowledgeable about UDL, they will have an increased understanding of how easily they can tailor their curricula to accommodate all students, without the major workload they may have originally thought was required. Lee and Griffin highlight that not all educators will be comfortable implementing UDL in their curriculum design and delivery right away. Rather, they will require time to become confident with potentially new teaching strategies. Incorporation of UDL into assessment is highlighted as a barrier by one respondent who stated that UDL "fits with best practice on teaching and learning but offers challenges in relation to assessment." This could be addressed by the suggestions compiled by CAST to help educators implement UDL strategies in the design and delivery of their assessments. They propose that educators consider which actions are relevant to the information being assessed and which actions can be supported or varied in order to obtain an accurate account of what each individual has learned (CAST, ). The authors propose including a variety of question styles within an end‐of‐year assessment paper, such as multiple choice questions (MCQs), extended match questions, short answer questions, labelling of diagrams, true or false questions, and essays. For continuous assessment, educators could allow the students to decide how they want to demonstrate their knowledge, either through essay, PowerPoint presentation, or pre‐recorded video, while adhering to the learning outcomes of the module. Providing all the students with the same options removes the perception of unfairness, as they all have the same opportunity to choose which format they prefer to be examined in. Specific examples of how anatomy educators may incorporate UDL into the design and delivery of their anatomy curricula include: highlighting the relevance of the information, its clinical significance and any notable anatomical variations; and varying the level of demand posed to students by including both straightforward, simple questions and higher‐order questions. Incorporation of these examples of UDL would thus serve to challenge the high performing students to sustain engagement, while also catering for the weaker student so that they do not become overwhelmed and discouraged.
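As a concrete, purely illustrative sketch of the assessment‐variety suggestions above (none of the outcome names or formats below come from the study or from CAST materials), an anatomy module's assessment blueprint could be represented and checked as follows:

```python
# Hypothetical sketch of an "assessment blueprint": every learning outcome is
# covered by more than one question style, and continuous-assessment tasks
# offer a choice of submission format.
from dataclasses import dataclass, field

@dataclass
class OutcomePlan:
    outcome: str                # module learning outcome being assessed
    question_styles: list[str]  # formats used in the end-of-year paper
    submission_formats: list[str] = field(
        default_factory=lambda: ["essay", "slides", "recorded video"]
    )

blueprint = [
    OutcomePlan("Describe the brachial plexus", ["MCQ", "diagram labelling", "short answer"]),
    OutcomePlan("Relate surface anatomy to underlying structures", ["extended match", "essay"]),
    OutcomePlan("Explain clinically relevant anatomical variations", ["short answer", "true/false"]),
]

for plan in blueprint:
    # Flag any outcome that relies on a single response format.
    if len(plan.question_styles) < 2:
        print(f"Only one question style covers: {plan.outcome}")
    print(f"{plan.outcome}: {', '.join(plan.question_styles)} | "
          f"choice of {', '.join(plan.submission_formats)}")
```

The closing check simply flags any outcome that relies on a single response format, mirroring the multiple means of action and expression principle.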
Other suggestions of how UDL could be incorporated into anatomy education include: providing immediate feedback via self‐assessed practice examinations or game‐based learning platforms such as Kahoot! during lectures or tutorials; displaying information in a variety of formats including text, images, video, animations and voice‐over narration; and allowing students to choose, when appropriate, how to complete an assignment, whether as a PowerPoint presentation, a video presentation or a written essay.

The COVID‐19 pandemic provided a unique opportunity for educators in higher level institutions to implement UDL more extensively in anatomy curricula, as educators had to adjust and adapt their curriculum design and delivery rapidly. The multiple ways in which anatomy can be taught both successfully and inclusively were highlighted when educators were forced to teach anatomy remotely (Byrnes et al., ; Goldman et al., ; Patra et al., ). Longhurst et al. investigated the adaptations made specifically to anatomy education in the ROI and UK in response to the COVID‐19 pandemic. It was reported that the most commonly expressed concern among anatomy educators from the participating higher level institutions (n = 14) was in relation to the time investment required to develop new resources to replace traditional lectures and practical classes. This concern overlaps with the concerns expressed by the anatomy educators participating in this study with regards to incorporating UDL into their curriculum design. Furthermore, 36% of the respondents in the Longhurst et al. study highlighted that there was a reduction in student engagement since the start of the pandemic. Engagement is one of the three main principles of the UDL framework (CAST, ). In Australia and New Zealand, Pather et al. concluded that flexibility and adaptability were essential for the continuity of anatomy education programs during the COVID‐19 pandemic, both of which are aims of UDL. Perhaps if educators were more aware of UDL and the ways in which it can be incorporated into curriculum design and delivery, it may have eased some of the burden and stress among educators, in relation to teaching, during the COVID‐19 pandemic.

The study is not without limitations. The response rate was 23%, and therefore the results cannot claim to be representative of all anatomy educators in the ROI and UK. However, the demographic profile of the respondents was varied, which reduces the risk of a skewed sample. Furthermore, only educators whose email addresses were accessible through their institutions' websites were contacted to participate. Although this study was carried out during the COVID‐19 pandemic, the questionnaire did not specifically mention the pandemic. Therefore, it cannot be stated for certain whether the teaching strategies utilized by the respondents were always being used, or whether they were new strategies utilized in response to a global pandemic.

In conclusion, this study highlights that anatomy educators from higher level institutions in the ROI and UK are implementing teaching strategies which align with the UDL framework. However, the majority of respondents were not aware that a specific name can be used to collectively identify the teaching methods used. There is still a lack of information on the benefits of the explicit utilization of UDL in anatomy curricula of healthcare programs in higher level institutions. Furthermore, the authors conclude that it would be beneficial to introduce the UDL framework to anatomy educators in the ROI and UK. The potential positive impact of the explicit utilization of UDL on healthcare students' learning, engagement, motivation, and experience in higher level education is evident.
The optimal method of distributing this information to educators requires consideration and research.
Validity and Reliability of Four Parent/Patient–Reported Outcome Measures for Juvenile Idiopathic Arthritis Remote Monitoring
52bd6ff5-675a-4898-93aa-1bfd6ba2d9b0
10087383
Internal Medicine[mh]
In recent years, the assessment of parent/child–reported outcomes in pediatric rheumatic diseases has gained increasing importance ( , , ). These measures reflect the parent's and child's perception of the disease course and the effectiveness of therapeutic interventions. The integration of these perspectives in clinical assessment may facilitate concordance with physicians' choices and improve adherence to treatment and participation in a shared decision‐making strategy ( , , ). In addition, the use of parent/child–reported outcomes may help the physician to identify with greater accuracy the salient issues for each patient and to focus the attention on relevant matters. Thus, information obtained from the parent or the child may contribute to the success of patient care ( ). Moreover, the availability of reliable parent/child–reported outcomes could be crucial for remote monitoring of patients when in‐person clinical evaluation may be difficult or even impossible.

SIGNIFICANCE & INNOVATIONS

The integration of parent/child–reported outcomes in clinical assessment may facilitate concordance with physicians' choices and improve adherence to treatment and participation in a shared decision‐making strategy in juvenile idiopathic arthritis.
The selected measures of parent/patient assessment of pain, disease activity level, joints with active arthritis, and morning stiffness were valid and reliable tools for patient self‐monitoring.
The selected measures are ideally suited for remote assessment of disease course and could potentially be included in a patient/parent–reported disease activity score for juvenile idiopathic arthritis.

The Outcome Measures in Rheumatology (OMERACT) Juvenile Idiopathic Arthritis (JIA) Working Group has recently provided a new core set of domains to be considered for the evaluation of children with JIA. JIA patients, their parents, and parents' associations, as well as clinicians and researchers expert in pediatric rheumatology, contributed substantially to the identification and ranking of the most relevant disease domains ( , ). The consensus methods and the domain selection procedure have been described in detail elsewhere ( ). The domains may refer to physician‐reported measures, parent/child–reported outcomes, or laboratory examinations; some domains, such as the joint inflammatory signs, could be assessed by both a physician and a parent or patient. The aim of this work was to provide further evidence of validity and reliability for 4 parent/child–reported outcome measures whose domains are included in the OMERACT JIA core domain set.

Among the domains that can be assessed by a parent/patient–reported measure, those that obtained the highest ranking after consensus voting were "pain" and "joint inflammatory signs/active joints." Pain is the most relevant symptom of children with JIA ( ). Several studies have shown that pain is more prevalent in JIA than previously recognized and that a sizeable percentage of patients continue to report pain long after disease onset ( ). High levels of pain limit physical activities, disrupt school attendance, and contribute to psychosocial distress. These issues make reduction of pain a key goal of treatment, and therefore the identification of a reliable tool to measure this domain is of major importance. The evaluation of joint inflammatory signs and the count of joints with active disease is traditionally considered a physician‐reported domain.
Joint count assessment by physicians through swollen and tender joints is considered the most conventional way of detecting clinical synovitis ( ), and its importance in disease activity assessment is supported by the inclusion of joint counts in core data sets of disease activity indices such as the Juvenile Arthritis Disease Activity Score (JADAS) ( ) and the American College of Rheumatology (ACR) pediatric response criteria ( ) used in clinical trials, research, and clinical practice. Although only few data are available on self‐ or proxy‐reported joint count in JIA ( ), a recent systematic literature review in adults with rheumatoid arthritis (RA) showed that patient‐reported joint counts have a potential role in the monitoring of disease activity, with satisfactory intraobserver and interobserver reliability ( ). Another domain that was highly ranked in the process leading to the development of the OMERACT JIA core domain set is the “patient's perception of disease/overall well‐being.” Surprisingly, physicians and other stakeholders considered this domain as more important than parents and patients. The domain of a patient's perception of disease activity is traditionally measured by the patient's global assessment or well‐being scale, such as in all the JADAS versions. Overall well‐being, or global health, and the patient's perception of disease activity, however, should probably be considered as different domains, with the former being broader and probably including the latter. Conceptually “global health” includes several aspects of health outcomes, that is, also those unrelated or not directly related to disease activity ( ). The most widely adopted disease activity indices for RA include a patient self‐report measure. In the Simplified Disease Activity Index and the Clinical Disease Activity Index, this item is defined as “patient global assessment of disease activity,” whereas it is defined as “global health” in the Disease Activity Score (DAS) and in the 28‐joint DAS ( , ). A measure of parent/patient perception of disease activity is available for JIA ( ), but so far, that measure has never been incorporated in disease activity scores or in core measurement sets. Finally, we decided to include in the study a fourth domain, “stiffness,” which was also highly ranked in the OMERACT core domain set consensus process. Morning stiffness is a major symptom of active disease in children with JIA and may have a profound impact on physical function and health‐related quality of life ( , ). Assessment of morning stiffness was incorporated in the 2011 criteria for clinically inactive disease in JIA; patients can satisfy the definition of clinically inactive disease only if they have morning stiffness lasting ≤15 minutes ( ). This cutoff was based on the belief that morning stiffness ≤15 minutes may represent damage from previous active disease or may be due to reasons other than active inflammation. Further analyses have shown that the presence of morning stiffness in JIA patients classified to be in clinically inactive disease by formal definitions is associated with worse parent perception of a child's health and disease status ( ). Furthermore, morning stiffness was also a consistent predictor of worse outcome in various categories of JIA patients ( ). The aim of this study was to provide evidence of validity and reliability for 4 outcome measures assessing the parent/patient–reported domains of pain, joint inflammatory signs, patient's perception of disease, and morning stiffness. 
The selected tools are included in the Juvenile Arthritis Multidimensional Assessment Report (JAMAR), which was recently translated and cross‐culturally validated in the national language of 49 countries ( ). These tools can be considered for inclusion in a parent/patient disease activity score.

Subjects

Patients' data were obtained from a large multinational data set of subjects enrolled in the Epidemiology Treatment and Outcome of Childhood Arthritis (EPOCA) study ( ). Briefly, the EPOCA study is a survey conducted by the Pediatric Rheumatology International Trials Organization between 2011 and 2016, involving 9,081 JIA patients from 130 pediatric rheumatology centers in 49 countries, grouped into 8 geographical areas. Each participating center was asked to enroll 100 patients meeting the International League of Associations for Rheumatology (ILAR) criteria for JIA that were seen consecutively over 6 months or, if the center did not expect to see at least 100 patients within 6 months, to enroll all patients seen consecutively within the first 6 months after study start. Patients were included irrespective of their disease duration. For each visit, retrospective and physician‐reported data were collected, together with the parent/child–reported outcomes included in the JAMAR, filled by a legal guardian and, when appropriate, by the patient. Ethical approval was obtained in all countries involved in the EPOCA study.

Outcome measures

In the EPOCA study, the questionnaire was proposed for completion by a caregiver (proxy‐reported measures) and by the patient when he/she was deemed by the caring physician able to understand and respond to the questions in the JAMAR (self‐reported measures). In some instances, the questionnaire was filled only by the patient. The intensity of the child's pain was rated on a 21‐numbered circular scale corresponding to the traditional visual analog scale (VAS; 0 = no pain, 10 = extreme pain) ( ), responding to the question "How much pain has your child had because of the illness over the past week?" The question was adapted for the patient's self‐assessment. The level of the child's disease activity was also rated on a 21‐numbered circular scale (0 = no activity, 10 = maximum activity), responding to the question "Considering all the symptoms, such as pain, joint swelling, morning stiffness, fever (if due to arthritis), and skin rash (if due to arthritis), please evaluate the level of activity of your child's illness at the moment." The question was adapted for the patient's self‐assessment. The duration of morning stiffness was measured with a 5‐point Likert scale, with the following anchors: "less than 15 minutes," "15–30 minutes," "30 minutes to 1 hour," "1–2 hours," and "more than 2 hours." The assessment of morning stiffness duration was preceded by a question asking whether morning stiffness was present or absent. The proxy‐ and self‐assessment of joint inflammatory signs was obtained by asking the parent or the patient to rate the presence of pain or swelling in the following joints or joint groups, listed in a table: cervical spine, lumbo‐sacral spine, shoulders, elbows, wrists, small hand joints, hips, knees, ankles, and small foot joints. Patients or parents had to mark with an "X" the affected joint or joint group. To each joint or joint group, 1 point was given in case of monolateral involvement and 2 points in case of bilateral involvement, if applicable. The sum obtained yielded the parent/patient joint count, with a score range of 0–18.
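As a minimal sketch of this scoring rule (the joint groups follow the list above; the example markings are hypothetical, and the assumption that the two spinal regions contribute at most 1 point each is inferred from the stated 0–18 maximum):

```python
# Minimal sketch of the 0-18 parent/patient joint count described above.
# Midline regions score at most 1 point; paired regions score 1 point per
# side, capped at 2. The example markings below are hypothetical.

MIDLINE = {"cervical spine", "lumbo-sacral spine"}
PAIRED = {"shoulders", "elbows", "wrists", "small hand joints",
          "hips", "knees", "ankles", "small foot joints"}

def joint_count(marks: dict[str, int]) -> int:
    """marks maps a joint/joint group to the number of sides marked (0, 1 or 2)."""
    total = 0
    for region, sides in marks.items():
        if region in MIDLINE:
            total += min(sides, 1)   # monolateral only: 0 or 1 point
        elif region in PAIRED:
            total += min(sides, 2)   # 1 point per side, capped at 2
    return total                     # theoretical range 0-18 (2*1 + 8*2)

example = {"cervical spine": 1, "knees": 2, "wrists": 1, "small hand joints": 2}
print(joint_count(example))  # -> 6
```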
Validity

Criterion validity of the tested measures was assessed by examining the correlation of the 4 tested measures with physician‐reported measures, an acute phase reactant (erythrocyte sedimentation rate [ESR]), and composite disease activity scores. Physician measures included the physician global assessment (PhGA) on a 0–10 scale, the number of joints with active arthritis, swollen joint count, tender joint count, and the number of joints with limitation on motion. Composite scores included the clinical JADAS in 10 joints (cJADAS10). The cJADAS10 is given by the sum of the PhGA, the parent/patient assessment of well‐being on a 0–10 VAS, and the number of joints with active arthritis cut at 10. For each analysis, the correlations of the well‐being VAS with physician‐reported measures and ESR were also presented, as a reference. Correlations of the well‐being VAS with the composite scores were not considered, the former being part of the latter. To further assess the validity of the tools, correlations of the parents' and patients' measures with the cJADAS10 were also computed after grouping patients by ILAR category and by geographic area (northern Europe, western Europe, southern Europe, eastern Europe, North America, Latin America, Africa and Middle East, and southeast Asia). Correlations of parents' measures were also analyzed grouped by family socioeconomic status (subjectively rated by the attending physician as low, average, or high), and by education level (elementary or lower, high school, or degree) of the parent completing the questionnaire. Finally, correlations of patients' measures were analyzed after grouping subjects into 4 age groups: "6–10 years," "11–13 years," "14–18 years," and ">18 years." Correlations were computed using Spearman's rank correlation method. Correlations were considered high if >0.7, moderate from 0.4–0.7, and low if <0.4 ( ). We expected that correlations of the tested tools would be higher with those measures more closely related to disease activity, such as the number of joints with active arthritis or the PhGA. Moreover, we expected that correlations would be higher with the composite score, because it includes a parent/child–reported outcome.

Reliability

When both the parent's and the patient's evaluations were available at the same visit, the Spearman's correlation (95% confidence interval) between the parent's and the child's rating of the 4 tested measures was calculated to demonstrate the interrater reliability of the tools. To assess test–retest reliability, a randomly selected subset of subjects was asked to complete the JAMAR again 7–14 days after the first time. In this subset of subjects, test–retest reliability of each measure was assessed with the intraclass correlation coefficient (ICC), using a 2‐way mixed‐effects model. The ICC was classified as follows: <0.2 = poor, 0.2–0.39 = fair, 0.4–0.59 = moderate, 0.6–0.79 = substantial, and ≥0.80 = almost perfect reproducibility ( ). Test–retest reliability for individual measures was further examined by the Bland‐Altman approach ( ) to test for random error of each variable. In this approach, the differences between the first and second measurement were plotted against their means. The mean difference ±1.96 × SD with its resulting interval represents the 95% limits of agreement.
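Before turning to the results, the following is a minimal sketch of the criterion‐validity computation described above, run on simulated data rather than the EPOCA data set; the variable names and the way the toy scores are generated are illustrative assumptions only.

```python
# Minimal sketch of the criterion-validity analysis on simulated data.
# cJADAS10 = physician global assessment (0-10) + well-being VAS (0-10)
#            + active joint count truncated at 10.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
phga = rng.uniform(0, 10, n)                        # physician global assessment
wellbeing = np.clip(phga + rng.normal(0, 2, n), 0, 10)
active_joints = rng.poisson(lam=phga / 2)           # crude dependence on activity
cjadas10 = phga + wellbeing + np.minimum(active_joints, 10)

# A parent-reported measure loosely tracking disease activity (e.g., the pain VAS).
parent_pain = np.clip(phga + rng.normal(0, 2.5, n), 0, 10)

def strength(rho: float) -> str:
    """Categories used in the text: high >0.7, moderate 0.4-0.7, low <0.4."""
    rho = abs(rho)
    return "high" if rho > 0.7 else "moderate" if rho >= 0.4 else "low"

rho, p = spearmanr(parent_pain, cjadas10)
print(f"Spearman rho = {rho:.2f} ({strength(rho)}), p = {p:.1e}")
```

Applying the same call to row subsets would reproduce the grouped analyses (by ILAR category, geographic area, socioeconomic status, or age band).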
Descriptive characteristics of patients

A total of 8,643 parents and 6,060 patients had all the evaluations available for the tested tools in the EPOCA data set. In 5,947 instances, the questionnaire was filled by the patient and a parent at the same visit. Demographic figures, disease activity parameters, and parent/child–reported outcomes of patient samples are shown in Table .

Validity correlations

In the EPOCA parents' data set, correlations of all tested measures are in the moderate range with physician‐reported measures of disease activity, with the exception of morning stiffness (ρ = 0.17–0.24), and in the poor range with the limited joint count (ρ = 0.30–0.41) and with ESR (ρ = 0.32–0.43).
Correlations of the parent/patient joint count, the disease activity scale, and the pain scale were strong with the cJADAS10 (Table ). Correlations of patient‐reported measures were similar. The level of correlation of the tested parent measures with the cJADAS10 remained stable after grouping patients by ILAR category (Figure ). Similar results were obtained for patient measures (see Supplementary Figure , available on the Arthritis Care & Research website at http://onlinelibrary.wiley.com/doi/10.1002/acr.24855 ). In the same analysis with patients grouped in 8 geographic areas, correlation levels were similar, although on average, they were higher in Latin America and slightly lower in North America (Figure for parents' measures, and for patients see Supplementary Figure , available on the Arthritis Care & Research website at http://onlinelibrary.wiley.com/doi/10.1002/acr.24855 ). In 6,287 patients in the EPOCA data set for whom these data were available, the level of correlation of the 4 measures with the cJADAS10 did not change according to the level of education of the parent completing the questionnaire (data not shown). Finally, in 7,336 subjects, correlations remained in the same category across 3 different categories of socioeconomic status (low, moderate, or high) of the patient's family (Table ). The correlations with cJADAS10 of the 4 measures obtained from patients progressively increased from the lower age group to the higher age group (Table ).

Reliability measurement

Interrater reliability

Paired data for parents and patients were available in 5,947 visits. The Spearman's correlations between the parent's and the patient's rating were 0.83 for the disease activity scale, 0.84 for the morning stiffness scale, and 0.88 for both the pain scale and the joint count. As a reference, the correlation of the well‐being scale between parent's and patient's rating was 0.80.

Test–retest reliability

After a median of 7 (interquartile range 6–7) and 7 (6–7) days from first completion, the questionnaire was filled a second time by 442 parents and 344 patients, respectively. ICCs showed almost perfect reproducibility (ICC >0.80) for all measures, with the exception of the disease activity VAS for parents' assessment (ICC = 0.78) and the well‐being VAS for parents' assessment (ICC = 0.73) (Table ). Figure presents Bland‐Altman plots for each of the 4 disease activity indices, demonstrating the mean difference between measurements with 95% limits of agreement (morning stiffness 0.05 [–1.3, 1.4], joint count 0.03 [–2.9, 3.0], VAS disease activity 0.3 [–3.1, 3.7], and VAS pain 0.3 [–2.6, 3.3]) according to the baseline value. Bland‐Altman plots for patients' measures are shown in Supplementary Figure , available on the Arthritis Care & Research website at http://onlinelibrary.wiley.com/doi/10.1002/acr.24855 .
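As a minimal sketch of how test–retest summaries of this kind can be computed, the following uses simulated first and second completions (not the study data). ICC(3,1) is used here as one common single‐measurement form of the 2‐way mixed‐effects ICC; whether this is the exact variant used by the authors is an assumption.

```python
# Minimal sketch: ICC(3,1) and Bland-Altman 95% limits of agreement on
# simulated test-retest ratings (not the study data).
import numpy as np

rng = np.random.default_rng(1)
n = 300
first = rng.uniform(0, 10, n)                            # pain VAS at first completion
second = np.clip(first + rng.normal(0, 0.8, n), 0, 10)   # repeat 7-14 days later

def icc_3_1(x: np.ndarray, y: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement."""
    data = np.column_stack([x, y])          # subjects x occasions
    n_subj, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ms_rows = ss_rows / (n_subj - 1)
    ss_cols = n_subj * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_err = ss_err / ((n_subj - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

diff = second - first
loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))
print(f"ICC(3,1) = {icc_3_1(first, second):.2f}")
print(f"Bland-Altman mean difference = {diff.mean():.2f}, "
      f"95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```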
Patient self‐assessment or parent proxy‐assessment are nowadays considered of foremost importance in the care of chronic conditions, and in particular, of JIA, with a disease course that is mostly unpredictable. Remote patient self‐assessment could foster the early recognition of disease flares, leading to timely and effective medical treatment. This study describes the assessment of validity and reliability of 4 parent/child–reported outcomes for JIA. The choice of the 4 measures to be tested was based on the updated OMERACT core domain set for studies in JIA. In fact, 3 of these measures (pain, disease activity, and joint count) refer to domains indicated as mandatory by the OMERACT workshop, whereas stiffness is considered an important, even though optional, domain. To provide adequate strength to the validation process, the criterion validity and reliability were assessed in a large sample, including >6,000 patients from several different countries. These patients are likely to be representative of the whole spectrum of JIA phenotypes, as well as cultural background, education, and socioeconomic status. Although the patient sample was skewed toward a low level of disease activity, the EPOCA study data set was large enough to include a representative number of subjects for each disease state based on recent JADAS10 thresholds ( ). All tested measures demonstrated good criterion validity, by yielding moderate correlations with the physician‐reported measures, such as PhGA and the number of joints with active arthritis, and strong correlations with the JADAS10 and cJADAS10, with the exception of morning stiffness, which remained moderately correlated with the composite disease activity scores. Correlations with cJADAS10 were similar after grouping patients by ILAR category and geographic area, suggesting that our results could be representative of different clinical settings. Furthermore, the level of correlation remained stable irrespective of the socioeconomic status of the family and the parent education level, indicating that the criterion validity of the 4 measures is not significantly affected by the social context of the family. On the other hand, the correlations with cJADAS10 of the 4 measures obtained by the patients increased in the older age group, suggesting that the higher the patient age the more reliable the parent/child–reported outcome. This finding is in line with previously reported results on the general pediatric population ( ). The 4 parent/child–reported outcomes were also found to be very reliable tools, by obtaining correlations in a strong range both in interrater and in test–retest reliability analysis.
Bland‐Altman plots showed 95% limits of agreement, with approximately ±3 for VAS pain, disease activity, and joint count, meaning that a difference of >3 could be interpreted as a real change, with a 5% risk of being wrong. Furthermore, the plots showed that differences between test–retest evaluations were more pronounced in the middle of the scales (almost all test–retest combinations outside the limits of agreement occur between 2.5 and 7.5 points), whereas scores toward the lower end of the scales tended to be reproduced more accurately. Thus, parents and children deeming themselves in remission or low disease activity could report this fact trustworthily. Also, children with at least some disease activity would probably report that fact again, if asked to re‐evaluate their disease activity, even though the exact score attributed to their disease activity might vary by ±3 points. Pain perception in children with JIA is multifactorial and results from the combination of biologic, psychological, and environmental factors ( ). Despite being the most common and distressful symptom of JIA, pain has been widely neglected in the development of outcome measures for JIA ( ). Indeed, pain assessment is not included in the Wallace criteria for clinically inactive disease ( ) or in the American College of Rheumatology Pediatric response criteria ( ), which have been used as outcome measure in all the recent trials on biotechnologic drugs in JIA. Yet pain evaluation has been included in the updated core domain set for studies in JIA by OMERACT as a mandatory domain ( ). The use of age‐appropriate, reliable, and valid tools is recommended to assess pain in children with chronic arthritis ( ). In fact, a reliable appraisal of pain in patients with JIA requires the use of well‐validated pain assessment tools that could capture the multifaceted aspects of the pain experience ( ). The 21‐numbered circular VAS has been found to be a simpler and more feasible measure for pain self‐report compared to the 100‐mm VAS ( ). Our study confirmed the good criterion validity of the pain 21‐numbered circular VAS, which yielded strong correlations with the composite scores for disease activity JADAS10 and cJADAS10 and moderate correlations with physician‐reported measures, such as the PhGA and the active joint counts. In the reliability analysis, the pain scale performed better among the 4 measures tested. Altogether, these results confirm that the 21‐numbered circle is a feasible tool for pain self‐ or proxy‐report in JIA, and its use should be encouraged both in standard clinical practice and in research settings to allow clinicians and researchers to track child pain over time. To our knowledge, only 2 studies have investigated the role of self‐ or proxy‐reported joint count in JIA ( , ). Even though both showed that patients and/or parents tended to overestimate the presence of arthritis when marking active joints on a manikin‐format joint, Dijkstra et al found a moderate agreement between the physician and the patient total joint count. In line with that, in our analysis, both parent and patient joint count yielded moderate correlation with the number of active, swollen, and tender joint counts provided by the physician, demonstrating good criterion validity. Furthermore, parents’ joint counts correlated strongly with the patient's count, and both demonstrated a very high interrater and retest reliability. 
In many instances, such as when evaluating whether treatment needs to be escalated, the exact number and location of active joints is of less importance, as long as the overall evaluation of joint activity is in agreement between parents, patients, and physicians. This result suggests that, even though parent/patient–reported joint count cannot replace the physician's joint assessment in clinical practice, it could be helpful in JIA disease activity remote monitoring. Admittedly, the tested joint count is based on a reduced and selected list of joints as it is included in the JAMAR ( ). So far, the patient's perception of the level of disease activity in JIA has been measured through the parent/child overall well‐being VAS, both in disease activity scores and in a core set of multiple criteria for the definition of different disease activity states ( , ). However, the well‐being VAS measures a broader construct than the level of disease activity, including all the aspects of the disease burden affecting the patient's health‐related quality of life. In this study, we provided evidence supporting the efficacy of a VAS specifically designed to assess the level of disease activity, as disease level is perceived by the patient or by caregivers. Notably, of the 2 most widely adopted disease activity scores for adults with RA, the DAS incorporates a patient global health tool ( ), whereas the Simplified Disease Activity Index incorporates a patient global disease activity tool ( ). Further discussion is urgently needed to identify the measure that better serves the purpose of describing the parents’ or patients’ perspective of the disease course. In the present study, the correlation of the disease activity scale with physician‐reported measures reached greater levels compared to the overall well‐being VAS. On this basis, parent and child disease activity VAS may be a suitable indicator of disease status in children with JIA, and its incorporation in the composite disease activity scores should be further investigated. Among the 4 parent/child–reported outcomes tested, morning stiffness was the one with the lower performance in the correlation analysis, although still moderately correlated with the PhGA and the JADAS10 and highly reliable. This finding may be at least in part due to the use of a 5‐point Likert scale, transformed to a 0–10 scale. Although not included in the OMERACT core‐set list of mandatory variables ( ), the duration of morning stiffness is included in the ACR provisional definition of inactive disease ( ). Recently, some discussion has been raised on the possibility of allowing a morning stiffness duration of 15 minutes in the definition of remission, as most parents do not consider their child to be in remission in the presence of morning stiffness, even of a short duration ( ). Our results should be interpreted in the light of some potential limitations. First, multiple tools are available to measure the selected domains. Our analysis was limited to the instruments included in the JAMAR. Second, test–retest reliability was assessed with a time interval of 7–14 days between the first and second assessment. We believe this time span is appropriate to assess test–retest reliability in a chronic disease like JIA on a large scale, but we did not formally assess whether the level of disease activity was the same at the 2 time points. 
Another key aspect of the evaluation of outcome measures is responsiveness to change and determining minimal clinically important differences, which requires longitudinal data analysis. In conclusion, we have provided further evidence of validity and reliability of 4 parent/child-reported outcome measures, whose referring domains are included in the OMERACT JIA core domain set. By documenting these key measurement properties, we have shown that these measures are valid instruments for patient/parents' evaluation of disease activity in JIA and are, therefore, potentially applicable not only in a research setting but also in standard clinical care. In particular, these parent/child-reported outcomes are ideally suited to be included in a parent/patient-reported disease activity score for remote monitoring of patients. All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be submitted for publication. Dr. Consolaro had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study conception and design: van Dijkhuizen, Ridella, Naddei, Trincianti, Ruperto, Ravelli, Consolaro. Acquisition of data: Avrusin, Mazzoni, Sutera, Ayaz, Penades, Constantin, Herlin, Oliveira, Rygg, Sanner, Susic, Sztajnbok, Varbanova. Analysis and interpretation of data: van Dijkhuizen, Ridella, Consolaro.
Disclosure Form
Supplementary figure I Comparison of Spearman correlations of morning stiffness duration, active joint count, level of disease activity and level of pain assessed by patients with the clinical Juvenile Arthritis Disease Activity Score 10 among the ILAR categories of JIA
Supplementary figure II Comparison of Spearman correlations of morning stiffness duration, active joint count, level of disease activity and level of pain assessed by patients with the clinical Juvenile Arthritis Disease Activity Score 10 grouped by geographic areas
Supplementary figure III Agreement between scores obtained by the morning stiffness duration, patient assessment of joint count, level of disease activity and level of pain measures at first and second assessment illustrated by Bland-Altman plots. Interval between first and second assessment was 7 (6;7) days
Association between lack of dental service utilisation and caregiver‐reported caries in Australian Indigenous children: A national survey
ea332c46-d84c-479f-a07d-aadacf14345a
10087467
Dental[mh]
Reduced dental service utilisation is broadly associated with poorer oral health, and untreated dental caries can cause discomfort, pain, poor sleep, irritability and hospitalisation. Australian Indigenous children experience higher risks of developing dental caries, have greater unmet oral health needs, and face substantial barriers to accessing dental services compared to non-Indigenous people. Few studies have investigated the association between dental service utilisation and dental caries, or examined the reasons for reduced dental service utilisation in Australian Indigenous children. The lack of dental service utilisation was associated with an increased likelihood of caregiver-reported dental caries and teeth removed due to dental caries. The shortage of dental treatment providers and geographical remoteness were the main barriers to accessing dental services amongst Australian Indigenous children. These findings suitably represent Australian Indigenous children as a whole population, as the Longitudinal Study of Indigenous Children (LSIC) is the largest national study of Australian Indigenous children to date. Ethical considerations The LSIC received ethical approval from the Australian Government Department of Health and Ageing Departmental Ethics Committee for the study. This study obtained permission to use the LSIC data set from the National Centre for Longitudinal Data (NCLD) Access for analysis under The University of Queensland's organisational licence. Study population and sampling This study is a secondary analysis of data from the Longitudinal Study of Indigenous Children (LSIC), which was established in 2008 through a partnership between the Department of Social Services (DSS), the Footprints in Time Steering Committee, and the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS). LSIC is a nationally representative longitudinal study that collects information about the environment in which Australian Indigenous children grow up, and the social, economic, educational and family issues that impact their development and well-being. The first data set (Wave 1) comprised a total of 1687 children and was collected in 2008. Follow-up data were collected every year thereafter, and new participants were added only in Wave 2. A cluster sampling technique was used to select geographic sites. Data were collected from primary caregivers (Parent 1/P1), P1's partner or father (Parent 2/P2) and educators via self-reported questionnaires. The LSIC study sample identified geographical sites based on Aboriginal and Torres Strait Islander people concentrations and aimed to obtain 150 children from each site. Distribution of participants varied across sites and waves, due to limited population size, participant relocation and withdrawal at different stages of the study. A detailed description of the methodology has been published in the Department of Social Services LSIC data user guide. Data collection The variable of interest, self-reported access to dental services, was only reported in Waves 2, 4 and 7. Data from Wave 7 were selected for the cross-sectional analysis on the basis of recency of information and the greatest proportion of positive responses. Only information from the primary caregiver (P1) was considered. Participants were stratified into two cohorts: the Baby (B) cohort aged 6.5–8 years old; and the Child (K) Cohort aged 9.5–11 years old at point of data collection in Wave 7.
Variables To investigate the associations between reduced dental service utilisation and self‐reported caries in Australian Indigenous children, the outcome variable was self‐reported dental caries, with the main independent explanatory variable being self‐reported access to dental services reported by P1. Other independent variables were cohort, gender, Indigenous status, oral hygiene habits, daily frequency of sweet food/beverage intake, family construct, socio‐economic status and geographical remoteness. The outcome variable (carer‐reported caries) was assessed by the question ‘Has study child (SC) ever had any of the following problems with (his/her) teeth or gums – Any cavities, holes or tooth decay?’ and ‘Has SC ever had any of the following problems with (his/her) teeth or gums – tooth pulled out because of decay?’ . Responses to each question were dichotomised into two groups (yes/no). The exposure variable (reduced dental service utilisation) was assessed by the question ‘Has SC ever needed a dentist but didn't see one’ and was dichotomised into two categories (yes/no). Gender was categorised into male and female. Indigenous status of study child was grouped into three categories: Aboriginal, Torres Strait Islander, and Aboriginal and Torres Strait Islander. To assess oral hygiene habits, the question ‘SC brushes.’ was classified into frequent (twice or more daily) and infrequent (less than twice daily) toothbrushing. , As the actual value of sugar consumption could not be accurately measured directly, recommended daily intakes from the current Australian Dietary Guidelines for children were referenced, and the frequency of ‘added sugar’ intake was categorised into ‘Low’ (B: 0, K: 0–2) or ‘High’ (B: 1 or more; K: 3 or more). As naturally present sugars do not make important contribution to the development of dental caries, only food and beverages with added sugars were included as part of ‘sugar intake’. For socio‐economic status, decile of the 2006 Index of Relative Indigenous Socioeconomic Outcomes (IRISEO) scores were used and coded into three categories most disadvantaged (1st–4th deciles), moderately disadvantaged (5th–7th deciles) and least disadvantaged (8th–10th deciles). Caregiver education was assessed by the question ‘ P1 Highest completed qualification ’ and was categorised into four categories based on highest education qualification: (i) Bachelor's degree and higher, (ii) Diploma/TAFE/ Certificate, (iii) Year 12 and (iv) Year 11 or below. The study child's family construct was assessed by the question ‘Study child lives with…’ and was categorised into four categories, (i) parent and partner, (ii) lone parent, (iii) carer and partner and (iv) one carer. Data analysis The data corresponding to the dependent, independent and confounding variables was identified and downloaded from the LSIC database using IBM SPSS Statistics for Windows v26 (IBM, Armonk, NY, USA). Descriptive statistics, expressed as count (percentage), were used to summarise the characteristics of the study population. Frequency distribution of the variables of interest was reported by caregiver‐reported dental caries and caregiver‐reported reduced dental service utilisation. Logistic regression analysis was conducted to estimate the association of outcome variable (self‐reported dental caries) with the variable of interest (reduced dental service utilisation). 
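As a minimal illustration of the variable handling described above, the sketch below recodes a few of the derived measures (oral hygiene, added-sugar intake by cohort, and IRISEO-based socio-economic categories) in pandas. The data frame and column names are hypothetical stand-ins, not the LSIC extract, and the published analysis was carried out in SPSS and Stata rather than Python.

```python
import pandas as pd

# Hypothetical stand-in for the LSIC extract; column names are illustrative only
df = pd.DataFrame({
    "cohort": ["B", "K", "K", "B"],              # Baby or Child cohort
    "brushing_per_day": [2, 1, 3, 0],
    "sweet_servings_per_day": [0, 3, 1, 2],
    "iriseo_decile": [2, 6, 9, 4],
})

# Oral hygiene: frequent = twice or more daily, infrequent = less than twice daily
df["oral_hygiene"] = df["brushing_per_day"].map(lambda n: "frequent" if n >= 2 else "infrequent")

# Added-sugar intake: 'High' if >=1 serving/day in the Baby cohort, >=3 in the Child cohort
df["sugar_intake"] = df.apply(
    lambda r: "High" if r["sweet_servings_per_day"] >= (1 if r["cohort"] == "B" else 3) else "Low",
    axis=1)

# IRISEO deciles grouped into three disadvantage categories (1st-4th, 5th-7th, 8th-10th)
df["ses"] = pd.cut(df["iriseo_decile"], bins=[0, 4, 7, 10],
                   labels=["most disadvantaged", "moderately disadvantaged", "least disadvantaged"])
print(df)
```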
An unadjusted analysis was first performed, followed by a multivariable analysis (using the complete case approach) which accounted for the effects of the other independent variables. Finally, a regression analysis with multiple imputation was performed using the fully conditional specifications (FCS) approach. Missing values of diet (frequency of sweet food/beverage intake) and caregiver education were imputed as each had more than 5% missing. Number of imputations was based on the percentage of missing data of both variables (14 imputations, 13.7% missing). All study variables (dental caries, teeth removal, dental service utilisation, cohort, gender, Indigenous status, oral hygiene, family construct and socio-economic status) were employed to impute the missing values and both logistic (for diet) and ordinal logistic (for education) regressions were used for imputation modelling. To test the departure from missing at random (MAR) assumption, a weighted sensitivity analysis using the selection model approach for the outcome was used. The findings of such analysis confirmed the assumption. Regression estimates were reported as odds ratios (ORs) with 95% confidence intervals (CIs). All statistical analyses were carried out in Stata version 17.0 (StataCorp. 2021. Stata Statistical Software: Release 17. College Station, TX: StataCorp LLC.).
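The modelling step itself can be illustrated as follows. This is a hedged sketch only: the study analysis was run in Stata with multiple imputation, whereas the snippet below fits a complete-case adjusted logistic regression on simulated data and exponentiates the coefficients to obtain odds ratios with 95% confidence intervals. All variable names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set; variable names mirror the study variables but the values are simulated
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "caries": rng.integers(0, 2, n),             # caregiver-reported caries (1 = yes)
    "no_dental_service": rng.integers(0, 2, n),  # needed a dentist but did not see one (1 = yes)
    "infrequent_brushing": rng.integers(0, 2, n),
    "high_sugar": rng.integers(0, 2, n),
    "ses": rng.choice(["most", "moderate", "least"], n),
})

# Adjusted logistic regression; exponentiated coefficients give odds ratios with 95% CIs
fit = smf.logit("caries ~ no_dental_service + infrequent_brushing + high_sugar + C(ses)",
                data=df).fit(disp=False)
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
print(summary.round(2))
```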
Socio-demographic characteristics of the study population are presented in Table . The study included 1258 children, of which 734 (58.3%) and 524 (41.7%) were from the baby and child cohort, respectively. Roughly equal proportions of male (49.1%) and female participants (50.9%) were represented in the study sample. The majority of the children were identified as Aboriginal (87.1%), 6.8% as Torres Strait Islander and 6.1% as Aboriginal and Torres Strait Islander. Regarding socio-economic status, approximately three quarters of the participants fell into moderate (46.6%) and most disadvantaged (30.8%) categories on the IRISEO scale. Approximately one third (36.8%) of mothers had completed a higher education course while the remaining 57.1% had completed high school or less. Nearly half (46.1%) of the children were cared for by either a lone parent (40.1%) or a lone carer (6.1%), while the remaining children had family constructs including a parent and a partner (50.2%) or carer and their partner (3.2%). In terms of oral hygiene, 64.8% of children had performed oral hygiene less than twice daily while 34.6% performed oral hygiene twice or more daily. As for sugar consumption, the highest percentage of children (51.7%) was evident in the group with low frequency of sweet food or beverage intake in a given day. Furthermore, on dental service utilisation, slightly more than one tenth (12.3%) of parents reported their children not utilising dental services when needed. Multiple reasons for this were selected by their caregivers. In descending order, 31.4% reported having the lack of an available dentist as a barrier, 19.8% reported having transportation or distance barriers, 13.9% reported long waiting times affecting access to dental services, 5.8% reported cost as a barrier, 4.6% reported not utilising dental services as they 'felt they could cope'. The option 'others' was also explored, with 24.4% of participants selecting this option.
No participants responded to 'another carer didn't take the child', 'dislikes the service or staff', 'discrimination', 'language problems' or 'someone else dealt with the problem' as a reason for not utilising dental services when needing one (Table ). Indigenous children who did not utilise dental services when required showed a higher prevalence of caregiver-reported caries compared to those who utilised dental services when needed (49.4% and 28.9%, respectively). Similarly, Indigenous children who did not utilise dental services when needed also showed a higher prevalence of having teeth extracted due to caries compared to those who utilised dental services when needed (12.7% and 5.9%, respectively). A higher prevalence of caregiver-reported caries and teeth removed due to dental caries was also observed in four groups of children: (i) those with poor oral hygiene, (ii) those with a high frequency of sugar consumption, (iii) those from low-income backgrounds and (iv) those in lone-carer family constructs (Table ). Results from the regression analysis are presented in Table . The lack of dental service utilisation when needed was associated with an increased likelihood of caregiver-reported dental caries (unadjusted OR 2.4, 95% CI 1.5–3.8) and teeth removal due to dental caries (unadjusted OR 2.3, 95% CI 1.1–4.7), and the effects remained after adjusting for confounding factors in the model with multiple imputation (Model 3) (dental caries: adjusted OR 2.4, 95% CI 1.5–3.8; teeth removal: adjusted OR 2.1, 95% CI 1.0–4.3). In summary, an association was observed between the lack of dental service utilisation and both caregiver-reported dental caries and teeth extracted due to dental caries in Australian Indigenous children. These associations remained after adjusting for confounding factors including cohort, gender, Indigenous status, oral hygiene habits, daily frequency of sweet food/beverage intake, family construct, socio-economic status and geographical remoteness. These findings supplement the growing body of evidence on the negative impact of the lack of dental service utilisation amongst the Australian Indigenous population. These findings are congruent with other previously completed studies in which Indigenous populations were found to have higher DMFT scores. , This has been attributed to a plethora of reasons, such as geographical limitations, lack of available health-care providers, long waiting times for oral health-care services, families' financial limitations, family constructs and cultural barriers. , , Further examination of the reasons for the lack of dental service utilisation showed that the majority of respondents in the study cited the lack of an available dentist as the primary barrier to seeking dental services. Despite the Australian Government Department of Health reporting a compound annual growth rate in the dental workforce in 2019, a disproportionate allocation of dental practitioners between metropolitan regions and remote areas was noted, with half as many dental practitioners (full-time equivalent) distributed in remote and very remote areas in comparison to major cities. In addition, the geographical remoteness of Indigenous Australians also poses logistical challenges to both the construction and maintenance of required oral health-care infrastructure.
These logistical constraints, coupled with staffing shortages, geographical distance and/or equipment failure, can limit dental service provision in remote and regional areas, stressing the operating capacities of existing dental service providers and resulting in longer waiting times for treatment. Financial barriers to accessing dental services were also cited in the LSIC. The socio-economic disadvantage of the Australian Indigenous population can limit access to dental services, which exacerbates the development and implications of dental caries. Socio-economic factors such as caregiver education, household financial hardship, household overcrowding, parents' occupation and employment status have been reported to affect the capacity of Australian Indigenous children to afford private dental services. The Australian Government has implemented strategies to curb financial barriers, such as the Child Dental Benefits Schedule (CDBS), which provides benefits for dental services, and the Dental Relocation and Infrastructure Support Scheme (DRISS), which provides relocation and infrastructure support grants for dental providers. While these schemes have made private dental services more available, Australian Indigenous children still face challenges in utilising and accessing these targeted dental initiatives. , Factors considered to influence this include distance and remoteness, lack of knowledge of the services available and limited integration of culturally safe practice. It is thus important to acknowledge that, while there were no participants who responded to the discrimination or language options as barriers, a quarter of respondents indicated there were 'other' barriers to access without further elaboration. Several other studies have put forward potential reasons for reduced dental service use amongst Australian Indigenous children, including family- and cultural-level influences. Family constructs with single parents or carers have been identified as a risk factor for dental caries in children due to family stresses arising from parenting responsibilities and reduced personal resources related to separation and family conflict. Restricted finances and lack of partner support, coupled with the opportunity cost of missing work for their children's dental appointments, can subsequently hinder the ability of these children to access dental services. Cultural misconceptions, such as the notions that poor oral health is natural, that oral hygiene is non-essential and that primary teeth are less important than permanent teeth, can also impair motivation to seek dental services. In addition, Indigenous Australians also engage in problem-based dental attendance patterns. This delayed response to dental problems can subsequently lead to the need for more invasive dental treatment, compared to their non-Indigenous counterparts. One major strength of this study is that the LSIC is the largest national study of Australian Indigenous children to date, with study sites selected amongst all of Australia's states and territories, inclusive of urban, regional and remote areas. Hence, it provides a suitable representation of the Australian Indigenous child population. In addition, the LSIC also included participants across a diverse range of life circumstances, locations and cultures, which enabled a more precise depiction of Aboriginal and Torres Strait Islander peoples' lives.
Additionally, the results were adjusted for confounding factors consistently reported within the LSIC dataset, including cariogenic diets, oral hygiene, family construct, socio-economic status and caregiver education. The biggest limitation of this study is that it is a secondary analysis of the LSIC dataset. As such, the results are bounded by the constructs of the LSIC dataset, and the authors were unable to control for these limitations. One other limitation is that the study outcomes (dental caries and teeth extracted) were reported by caregivers and not evaluated clinically, possibly leading to under-reporting and underestimation of dental caries. However, it has been demonstrated that caregiver reporting is a reliable method of data collection when collection of clinical data is restricted due to cost or logistics. Another limitation is that the LSIC adopted a non-probability sampling method during the initial recruitment process. Nevertheless, it is still important to acknowledge that Aboriginal and Torres Strait Islander peoples share similar general characteristics, and that the LSIC study is not intended to be representative of all Aboriginal and Torres Strait Islander families but rather to provide a snapshot of life in a diverse range of environments. , Within the constructs of this study, the shortage of dental treatment providers and geographical remoteness emerged as the main barriers to accessing dental services amongst Australian Indigenous children. However, due to the closed-ended design of the LSIC study, not all reasons for reduced dental service utilisation could be accurately evaluated. More studies involving clinically measured outcomes for an objective assessment of oral health conditions, including caries, and open-ended questionnaires to facilitate a qualitative analysis are required to better understand the relationship between the lack of dental service utilisation and oral health outcomes in Australian Indigenous children. This study confirms that the lack of dental service utilisation when required is associated with caregiver-reported dental caries and teeth removed due to caries in Australian Indigenous children. Findings from this study could potentially contribute to future policy planning, in terms of creating effective oral health promotion and dental service utilisation programmes with an emphasis on primary prevention of caries and its adverse outcomes among Australian Indigenous children.
Feasibility of perioperative remote monitoring of patient‐generated health data in complex surgical oncology
88bff660-d7b7-4808-9e5a-149ad15c78cd
10087541
Internal Medicine[mh]
INTRODUCTION Surgeons and cancer centers are increasingly asked to provide evidence of the quality and value of their care. This paradigm shift is occurring at a time when tremendous advances in surgical techniques are taking place. The advent of minimally invasive surgical techniques has resulted in a shift in the way surgical teams care for patients postoperatively. Patients are now discharged earlier and earlier after surgery, with postoperative recovery primarily taking place at home. Postoperative complications that traditionally arise in the hospital are now developing, potentially unnoticed, at home and in the outpatient setting. Outcomes that are used to measure quality surgical oncology care are also evolving, with efforts to focus on more patient-centered variables. Historically, medical and surgical outcomes have been measured by disease- and systems-related parameters like length of hospital stay, morbidity, readmission rates, and mortality. , While important, these measures may not accurately reflect the surgical care experience from a patient's perspective. A promising approach to modernize perioperative care and improve patient-centeredness is through remote monitoring of patient-generated health data (PGHD). In the United States, the Office of the National Coordinator for Health Information Technology defines PGHDs as "health-related data created, recorded, or gathered by or from patients (or family members or other caregivers) to help address a health concern." PGHDs may include health history, treatment history, biometric data, symptoms, and lifestyle choices. Importantly, patients are responsible for capturing these data, which is distinct from traditional data generated in clinical settings. PGHDs are increasingly being used in routine cancer care as quality and value indicators, , , , with robust evidence on their impact on clinical outcomes in advanced cancer populations. Research on PGHDs and remote monitoring in surgical oncology is somewhat nascent, with the majority of the current evidence focused on electronic monitoring of symptoms. Knowledge gaps remain on PGHD's utility and impact on surgical outcomes/care decision-making, particularly for data captured remotely through wearables and other devices. Our research team had previously conducted a proof-of-concept study to assess the feasibility and acceptability of electronic symptom and functional status monitoring in major abdominal cancer surgery. We found that remote monitoring was feasible and acceptable, and exploratory analysis suggested that the number of daily steps may be associated with postoperative complications. The original proof-of-concept study did not include remote capture and monitoring of patient-generated physiologic/biometric data such as vital signs. Therefore, in the present study we conducted a second proof-of-concept feasibility trial of remote perioperative telemonitoring that combines objectively measured physiologic data (vital signs and daily steps) and electronic patient-reported outcomes (ePROs) in a complex surgical oncology and urologic oncology setting. METHODS This was a proof-of-concept trial that aimed (1) to assess the feasibility of remote monitoring of combined objective/subjective PGHDs/ePROs in surgical/urologic oncology, and (2) to explore the trajectory of the PGHDs over time, from before surgery to after surgery.
Patients eligible for participation in the study were scheduled to undergo a curative resection for urologic (kidney and bladder) or GI cancers (gastric, colorectal, and peritoneal surface malignancy, liver and pancreas), were >18 years old and English speaking. Between August and December 2020, eligible patients who met the study inclusion criteria were identified and recruited from the surgical and urologic oncology ambulatory clinics of a National Cancer Institute-designated comprehensive cancer center. The Institutional Review Board (IRB19040) approved study procedures and all participating patients provided written informed consent before enrollment. The study was registered at clinicaltrials.gov (NCT04501913) and was Health Insurance Portability and Accountability Act (HIPAA) compliant. 2.1 Remote monitoring and outcomes thresholds design Following informed consent, patients were provided with the following: (1) Food and Drug Administration (FDA)-cleared Bluetooth-enabled devices from the company mTelehealth (thermometer, digital weight scale, sphygmomanometer, pulse oximeter) for the capture of vital signs; (2) a commercially available wristband pedometer (Vivofit 4; Garmin Ltd) for tracking functional recovery (daily steps); and (3) a study tablet with the HIPAA-compliant Aetonix A Touch Away™ mobile application platform. The authors have no financial relationships with the third-party vendors and their products used in the study. The FDA-cleared devices for vital signs were paired with the study tablet, and assessments of ePROs were also captured through the tablet. Trained research staff assisted patients with setup and provided instructions on device/engagement platform use. Thresholds for all PGHDs were predetermined to guide actions based on the data that patients provided. Vital sign thresholds included: (1) weight increase/decrease of 2 kg from discharge; (2) temperature >38°C with heart rate >110 or systolic blood pressure <90 or >180, or a temperature of >38.3°C independent of other vital signs; (3) oxygen saturation <90%; (4) heart rate >110 beats per min (>120 if last heart rate at discharge was >100); (5) systolic blood pressure <90 (<85 if last systolic blood pressure at discharge was <100) or >180. The functional status threshold was based on our first proof-of-concept trial and set at a daily step count of <1500. For the ePROs (symptoms/quality of life [QOL]), thresholds were set at one or more symptoms/QOL items with a moderate to severe intensity score (4 or higher). 2.2 Outcome measures Feasibility was assessed by (1) overall accrual and attrition rates and (2) patients' ability to use the remote perioperative monitoring equipment. The study included three brief measures to assess ePROs. Symptom severity and symptom interference with activities were assessed using the MD Anderson Symptom Inventory (MDASI), a validated measure of 13 common cancer-related symptoms as rated on a 10-point scale. The EuroQol 5-dimensional descriptive system (EQ-5D-5L) was used to assess quality of life and general health status. This validated instrument evaluates the following 5 QOL variables: mobility, self-care, usual activities, pain or discomfort, and anxiety or depression. Overall health state, rated on a visual analog scale with end points labeled "best" to "worst imaginable" health state (range 0−100), was used as a final overall metric. The EQ-5D-5L instrument has been widely employed in quality-adjusted survival analyses and clinical trials.
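To make the alerting rules above concrete, the sketch below encodes the vital-sign thresholds as a simple check applied to one set of remotely captured values. It is an illustration of the threshold logic only, not the Aetonix platform's implementation; the record structure and example numbers are hypothetical.

```python
def vital_sign_alerts(current, discharge):
    """Return the study alert rules triggered by one set of remotely captured vital signs.

    `current` and `discharge` are dicts with keys weight_kg, temp_c, hr, sbp, spo2;
    `discharge` holds the last inpatient values. Thresholds follow the protocol described above.
    """
    alerts = []
    if abs(current["weight_kg"] - discharge["weight_kg"]) >= 2:
        alerts.append("weight change of 2 kg or more from discharge")
    if (current["temp_c"] > 38.0 and (current["hr"] > 110 or current["sbp"] < 90 or current["sbp"] > 180)) \
            or current["temp_c"] > 38.3:
        alerts.append("fever")
    if current["spo2"] < 90:
        alerts.append("oxygen saturation <90%")
    hr_limit = 120 if discharge["hr"] > 100 else 110
    if current["hr"] > hr_limit:
        alerts.append("heart rate above threshold")
    sbp_low = 85 if discharge["sbp"] < 100 else 90
    if current["sbp"] < sbp_low or current["sbp"] > 180:
        alerts.append("systolic blood pressure out of range")
    return alerts

# Example reading compared against the last recorded inpatient values (hypothetical numbers)
reading = {"weight_kg": 81.5, "temp_c": 38.4, "hr": 95, "sbp": 118, "spo2": 96}
at_discharge = {"weight_kg": 79.0, "temp_c": 37.0, "hr": 88, "sbp": 122, "spo2": 97}
print(vital_sign_alerts(reading, at_discharge))
```

Expressing the rule set this way makes it straightforward to audit against the protocol and to adjust if thresholds change.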
Finally, the Patient-Reported Outcomes Measurement Information System (PROMIS) General Physical and Mental Health-Short Form was used to assess physical and mental health status. The four items are scored on a 5-point Likert scale: 1 (poor); 2 (fair); 3 (good); 4 (very good); and 5 (excellent). Relevant surgical and other clinical data were obtained via the electronic health record, including surgery date, comorbidities, primary diagnosis, procedure type, surgical technique (open, laparoscopic, or robot assisted), American Society of Anesthesiologists' (ASA) classification, length of hospital stay, and readmissions. Each patient's performance status was evaluated utilizing the Eastern Cooperative Oncology Group performance scale score ranging from 0 to 5, where 0 denotes no symptoms and 5 indicates death. Postoperative complications were calculated using the Comprehensive Complication Index (CCI) based on the Clavien−Dindo classification. , 2.3 Study procedures All patients were consented at least 3−7 days before surgery. This design was included to capture preop/baseline data on all outcomes, and also provided ample time for device setup and instructions. All patients completed baseline assessments of vital signs and ePROs electronically before surgery. Patients were instructed to bring their Vivofit pedometer and the study tablet to their hospital admissions. Outcomes were collected again postoperatively at hospital discharge and at Days 2, 7, 14, and 30 postdischarge. For the hospital discharge time point, we used the last set of inpatient data recorded; this design was included so patients did not need to bring the FDA-cleared devices to the hospital. At each of the postdischarge time points, patients received a reminder through their study tablet to provide a set of vital signs and complete ePROs. Daily steps data were continuously collected 3−7 days before surgery, during hospitalization, and up to 30 days postdischarge. When PGHDs deviated from the predetermined thresholds, an alert via the Aetonix application was automatically generated within 1 min to trained Research Nurses. The alerts prompted the Research Nurses to proactively contact patients via telephone for further assessments, triage, and surgical team notification. Standard institutional triage nursing protocols were followed. Each encounter prompted by an alert was documented as an encounter note in the electronic medical records. At the final assessment time point (Day 30 postdischarge), patients completed a brief survey to assess acceptability and overall satisfaction with the monitoring. Patients provided feedback on the following: (1) use of devices and mobile application; (2) items in electronic surveys that were distressing or challenging to understand; (3) length of surveys and point of administration; and (4) items that were not covered but should be considered. After the study was completed, patients also participated in an exit interview with questions regarding feedback on useful aspects of the telemonitoring program and devices in functional recovery and communication with their surgical team. 2.4 Statistical analysis Vital signs and ePROs were wirelessly captured through the study tablet, synchronized for feedback system/alert purposes, and transferred to a study-specific REDCap database. Vivofit daily steps data were wirelessly and automatically transferred to the same REDCap database.
Data were summarized using means, medians, standard deviations, minimum and maximum for continuous data, and proportions and percentages for categorical data. For PGHD alerts, every instance of a Research Nurse–initiated telephone assessment was considered a monitoring encounter, and the total number of monitoring encounters was recorded. Associations determined in this study were exploratory. A linear regression was performed between the number of daily steps at Day 14 and postoperative complications as calculated by the CCI, and a correlation coefficient was calculated. Established instruments were scored according to standard protocols, and exploratory descriptive statistics were calculated. Exploratory analysis was performed with paired t-tests comparing scores between time points, and p values were determined accordingly. A p value less than 0.05 was considered statistically significant. Outcomes were calculated for the percentage of patients who were able to complete (1) the MDASI, (2) the EQ-5D-5L, and (3) the PROMIS4 after discharge. The percentage of patients who wore the pedometer was also assessed at each time point.
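Purely as an illustration of the exploratory analysis described above, the following sketch regresses a Comprehensive Complication Index on Day 14 step counts and runs a paired t-test between two assessment time points. The arrays are simulated and are not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-patient values for 21 patients, for illustration only
steps_day14 = rng.integers(500, 8000, size=21).astype(float)  # daily steps at Day 14 postdischarge
cci = rng.uniform(0, 60, size=21)                              # Comprehensive Complication Index

# Linear regression of CCI on Day 14 step counts, with correlation coefficient and p value
reg = stats.linregress(steps_day14, cci)
print(f"slope = {reg.slope:.4f}, r = {reg.rvalue:.2f}, p = {reg.pvalue:.3f}")

# Paired t-test comparing an ePRO score at two time points in the same patients
score_presurgery = rng.uniform(0, 6, size=21)
score_day2 = score_presurgery + rng.normal(1.0, 1.5, size=21)
t_stat, p_val = stats.ttest_rel(score_presurgery, score_day2)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")
```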
RESULTS 3.1 Sociodemographic and surgical characteristics A total of 21 patients participated in the study; their sociodemographic characteristics are presented in Table . The median age was 58, 64% were male, 55% were of white race, 73% were married and 96% lived with a spouse/partner/friends, 36% lived with children and 27% were retired. The majority (76.2%) lived >15 miles away from the medical center. For surgical characteristics (see Table ), the patients were relatively high risk, with 17 patients (77%) having either an ASA III or IV classification. Patients with ASA III are characterized as having systemic disease that is not incapacitating, and ASA IV patients have incapacitating systemic disease that is a constant threat to life. The majority of patients (68%) had minimally invasive surgery (either robotic or laparoscopic). Nine patients (43%) had combined surgeries that involved multiple organs (colorectal + genitourinary or colorectal + liver). The median length of hospital stay was 7 days (range 0−36). The 30-day readmission rate was 33%, with 7 patients returning for hospitalization. 3.2 Feasibility Feasibility was assessed by accrual, attrition, the ability of patients to use the equipment, and the staff's ability to act on alerts to threshold health care parameters. A total of 28 patients were identified and invited to participate in the study over a 4-month period (7 patients per month). The accrual rate was 78%, with 22 patients agreeing to participate and providing written informed consent. The most common reasons patients gave for declining participation were not being tech savvy, being overwhelmed/stressed, and already having pre-arranged home services.
One patient consented but subsequently withdrew due to feeling overwhelmed, yielding a final sample of 21 patients with evaluable data and an attrition rate of 4.5% (1/22). Following informed consent, 20 of the 21 patients (95%) completed the preoperative baseline PGHD assessment (vital signs and electronic surveys). At discharge, 91% (19/21) completed the assessments. At postdischarge Days 2, 7, 14, and 30, the adherence rate for remote assessment completion was 82%, 68%, 64%, and 64%, respectively. Patients' ability to complete the entirety of the study was 64%, speaking to feasibility at 30 days postdischarge. Overall, we observed high to acceptable levels of adherence with wearing the Vivofit pedometer. Before surgery, 18 of 21 patients (85.7%) wore the Vivofit pedometer; the median number of wear days was 6 (range 1−29 days). During postoperative hospitalization, 18 of 21 (85.7%) wore the pedometer; the median number of wear days while hospitalized was 7 (range 2−36 days). At Days 2, 7, 14, and 30 postdischarge, the percentage of patients who wore the pedometer was 85.7%, 85.7%, 85.7%, and 61.9%, respectively. The median number of wear days after discharge was 35 (range 3−51 days). Thus, feasibility as measured by patients' ability to use the Vivofit varied with time from discharge, as noted above, remaining at 85.7% up to 2 weeks postdischarge. Findings on patients with triggered alert types over time are depicted in Figure . The greatest number of vital sign alerts occurred on Day 2 postdischarge, with three patients having pulse oximetry‐triggered alerts. For ePROs (symptom alerts), the greatest number of patients with alerts also occurred at Day 2 postdischarge, with 18/21 (85.7%) patients generating some form of alert. A total of 200 ePRO alerts were triggered at that postdischarge Day 2 assessment time point. The most common and significant complaints related to ePRO alerts on Day 2 after discharge pertained to mobility, self‐care, participation in usual activities, and loss of appetite ( p < 0.05). It should be noted that the number of alerts generated for steps and symptoms trended down over time through Day 30.

3.3 Acceptability At the end of the pilot, 72% (15/21) of patients completed the satisfaction survey. Of those completing the survey, 80% felt the Vivofit watch was easy/extremely easy to use and 67% felt it was helpful for monitoring daily activities. Overall, 93% felt it was easy/extremely easy to use the devices to monitor blood pressure, weight, heart rate, temperature, and oxygen levels and to complete the online surveys. The majority (73%) felt the length of the surveys was just right. Patients were also queried via open‐ended exit interviews on their subjective assessment of the perioperative telemonitoring program. The team was able to conduct exit interviews in 15 patients to obtain feedback on the perioperative monitoring. Of those interviewed, 87% (13/15) felt that the vital sign devices were helpful in monitoring how they were recovering physically. When asked if the devices helped them communicate their physical needs after surgery, 73% (11/15) said yes. Patients also felt that the pedometer was helpful in monitoring how they were recovering functionally (67%), and the vast majority felt that the online surveys were helpful in monitoring their symptoms and quality of life before and after surgery. They were generally satisfied with the ability to monitor their vital signs and thus "eliminate a nurse coming" amidst the COVID‐19 pandemic.
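The accrual, attrition, and completion figures reported in the feasibility results above reduce to simple proportions of the stated counts; the short sketch below makes those denominators explicit. Only the counts given in the text are used, so the later postdischarge time points, which are reported only as percentages, are omitted.

```python
# Reproducing the feasibility arithmetic from the counts reported in the text.
approached, consented, withdrew = 28, 22, 1
evaluable = consented - withdrew                      # 21 patients with evaluable data

print(f"accrual rate:   {consented}/{approached} = {consented / approached:.1%}")
print(f"attrition rate: {withdrew}/{consented} = {withdrew / consented:.1%}")

# Completion counts explicitly reported in the text.
completed = {"preoperative baseline": 20, "hospital discharge": 19}
for timepoint, n in completed.items():
    print(f"{timepoint}: {n}/{evaluable} = {n / evaluable:.1%}")
```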
There were comments pertaining to the ease of use after overcoming the lack of familiarity with technology. Additionally, the most frequent comments were centered on the patient's ability to independently track their progress as it pertains to mobility, well‐being, and pain assessment.

3.4 Trajectory of vital signs and daily steps over time Patient‐generated health data in the form of vital signs do not comprehensively reflect the patient's perioperative experience or recovery. In this pilot study of 21 patients, we did not see a trajectory of vital signs that changed over the postdischarge period; thus, no definitive conclusions can be made from this aspect of the data. The daily steps trajectory over time is depicted in Figure . The median daily step count before surgery was 4957 (range 0−15 484); this number dropped to 178 (range 0−10 577) during hospitalization. A gradual increase in the number of daily steps was observed after Day 14 postdischarge. Overall, the number of daily steps did not reach baseline levels by Day 30 postdischarge. Exploratory analysis of potential trends in functional decline and recovery found that patients took significantly fewer steps during hospitalization than at baseline (before surgery); this trend persisted up to Day 14 postdischarge. There were no significant differences between the number of daily steps at baseline and those after Day 14 postdischarge.

3.5 ePRO trajectories over time Symptom, QOL, and physical/mental health status score trajectories are presented in Table . For the quality‐of‐life health dimensions, scored 0−5 with higher scores indicating worse problems, there was significant worsening of mobility, self‐care, and usual daily activity at Day 2 postsurgery compared with baseline before surgery ( p < 0.05). The challenges in usual activities persisted through Day 14. For the overall symptom scores, scored 0−10 with higher scores indicating worse symptoms, Day 2 was significantly worse than before surgery ( p < 0.05). For individual symptom scores, the only value that was significant at discharge and at Day 2 postdischarge was appetite loss ( p < 0.05). These results indicate that patients are most vulnerable and symptomatic at Day 2 postdischarge. This implies that Day 2 postdischarge is an opportunity to reach out to patients to intervene and address any symptoms, concerns, or questions. Scores improved from postdischarge Day 7 to Day 30 and, in many circumstances, symptoms at Day 30 after discharge were lower than at the before‐surgery time point. Overall, the symptoms that most often triggered an alert included pain, fatigue, sleep disturbance, appetite loss, and distress.

3.6 PGHDs and postoperative complications We conducted exploratory analyses to understand the potential relationships between daily steps/functional recovery and postoperative complications. In Figure , we do not see a significant relationship between the number of daily steps at Day 14 postdischarge and postoperative complications (as measured by the Comprehensive Complication Index [CCI]). These findings were the same for all of the postdischarge time points. To assess whether there was a difference in mobility based on the presence or absence of significant complications, we looked at patients' step counts dichotomized by the ultimate development of a Grade 3a or higher complication at time points before surgery, during hospitalization, and at 5‐day intervals after discharge.
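One way to implement the dichotomised comparison just described is sketched below. The step counts are invented illustrative values, and because the test used for the group comparison is not named in this excerpt, a Mann−Whitney U test is shown as one plausible, assumed choice.

```python
# Illustrative sketch of comparing daily steps between patients with and without
# a Grade 3a or higher complication; step counts are invented values and the
# Mann-Whitney U test is an assumed (not reported) choice of test.
import numpy as np
from scipy import stats

steps_no_major_complication = np.array([6200, 5400, 7100, 4800, 6900, 5800])
steps_major_complication    = np.array([4100, 3600, 4500, 2900, 3800])

u_stat, p_value = stats.mannwhitneyu(
    steps_no_major_complication,
    steps_major_complication,
    alternative="two-sided",
)
print(f"median (no major complication): {np.median(steps_no_major_complication):.0f}")
print(f"median (major complication):    {np.median(steps_major_complication):.0f}")
print(f"Mann-Whitney U={u_stat:.1f}, p={p_value:.3f}")
```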
We found significant differences between the two groups before surgery, during hospitalization, up to Day 5 after discharge, at Days 5−9, and at Days 10−14 (Figure ). The median number of daily steps before surgery in patients who had no complications or complications less than Grade 3 was 6062, versus 4166 in patients who developed a Grade 3 or higher complication ( p < 0.05). These results imply that lower preoperative mobility may be associated with a higher grade of postoperative complications. It should also be noted that, on the day before discharge, patients who developed ≥Grade 3 complications took significantly fewer steps than those who had no complications or <Grade 3 morbidity (3186 [0−11066] vs. 3655 [0−15484], p < 0.05; Figure ). Similarly, on the day before discharge, patients who were ultimately readmitted took significantly fewer steps than patients who were not readmitted (2969 [0−15484] vs. 3469 [0−11066], p < 0.05; Figure ). These results indicate that the degree of mobility on the day before discharge is associated with both more severe complications and readmissions, and these patients may warrant more careful postdischarge monitoring.
DISCUSSION In this study, we demonstrate that perioperative telemonitoring in complex surgical oncology patients is feasible, with adherence rates up to 64% at 30 days after discharge. We also demonstrate that the greatest number of alerts generated by patients occurs on postdischarge Day 2, highlighting this as a critical time point to monitor and intervene in the perioperative setting. Patients may be counseled that Day 2 postdischarge is the most significant in terms of symptoms. By Day 30 postdischarge, patients should be back to baseline as it pertains to symptoms and in some cases improved from baseline. Interestingly, we find that patient mobility on the day before discharge correlates with ≥Grade 3a complications and readmissions. This identifies a patient population that needs more careful postdischarge perioperative monitoring to mitigate readmissions and the severity of complications. As this was not a causality study, we are not able to discern whether the complication led to the lack of mobility. Additionally, mobility postdischarge is most impaired up to 2 weeks. After Day 14 postdischarge, patients steadily improve their activity, and this time frame can be used to counsel and educate patients on postdischarge expectations for mobility improvements.

More than 45 million Americans undergo surgery each year, with expenditures exceeding $500 billion (40% of national health care spending). Expenditures for cancer care, including surgery, were $127 billion in 2013; this cost is projected to increase to $158 billion in 2020. More than 60% of cancer patients undergo surgical interventions, and surgery is often used as either the sole treatment modality or in combination with radiation and/or chemotherapy. Surgical interventions account for the most cures after a cancer diagnosis. Various approaches have been utilized to mitigate these costs and address these challenges. This study did not address potential cost savings; however, this will be a critical metric in future work assessing the role of telemonitoring in this perioperative setting. In recent years, surgical care has increasingly focused on using enhanced recovery after surgery (ERAS) pathways to improve surgical outcomes. ERAS pathways are created to include standard, prescribed tasks in the perioperative care setting to shorten length of hospital stay and contain cost. Perioperative care provided through ERAS pathways should also include remote monitoring and real‐time interventions after hospital discharge and until full postsurgical recovery. The concept is to identify impending postdischarge complications before they escalate, thereby mitigating them and decreasing readmissions. In this study, we found that patients had the greatest aberrations in their vital signs, specifically their oxygenation, at 2 days after discharge.
Additionally, we found that the greatest concerns pertaining to symptomatology and quality of life occurred at Day 2 postdischarge. By 30 days, ePROs pertaining to both symptoms and quality of life had all returned to baseline and, in many categories, were actually better than baseline. This perhaps reflects symptoms alleviated by the surgery itself, or relief of the anxiety and concern about the uncertainty that may be associated with an upcoming surgery or recovery. The past 50 years have seen an explosion in biomedical knowledge, dramatic innovations in surgical procedures, and management of complex medical conditions, with ever more exciting clinical capabilities on the horizon. Yet, the American health care system is falling short on key components of quality, outcomes, and cost. The overarching imperatives for health care include the need to develop ways to manage its ever‐increasing complexity and curb ever‐escalating costs. Opportunities now exist to address these problems; these include (1) computational power that is affordable and widely available; (2) connectivity that allows information to be accessed in real time virtually anywhere; (3) human and organizational capabilities that improve the reliability and efficiency of care processes; (4) the recognition that effective care must be delivered in a patient‐centric fashion; and (5) the recognition that, regardless of incentive structures, penalties, and payment reforms, nothing about the experiences and outcomes/value of care will improve until progress is made to revolutionize the care delivery system. Telemonitoring in the perioperative period allows for this. In particular, we were able to demonstrate that this method of interacting with patients is acceptable and, in their subjective assessment, helpful in the functional, symptom, and quality‐of‐life arenas. It should be noted that, as the primary outcome measure in this study was feasibility, there was a clear decrease in adherence as the study progressed into the postdischarge period. Several factors may contribute to this. In qualitative interviews of the participants and staff, these factors included challenges with the technology, recovery from complications, and decreased motivation as recovery progressed. In future work it will be important to address these challenges with additional patient support in this perioperative window. Our findings of 30‐day adherence in the 64% range have been seen in other studies, including the recently published Post‐discharge after surgery Virtual Care with Remote Automated Monitoring‐1 (PVC‐RAM‐1) trial. Our partnership with an existing home health monitoring and digital patient engagement infrastructure (mTelehealth™) leveraged a "real world" platform, which enhanced the successful implementation of this study and future adoption into clinical surgical oncology practice. It should be noted that there are a multitude of digital patient engagement technologies. The agreement with mTelehealth was purely in the context of answering a research question, with no financial relationship by any of the authors. Our study design moved away from traditional clinic‐based care paradigms to telehealth patient engagement. Most interestingly, we were able to show in this small pilot that patients who ultimately developed complications in the postoperative setting demonstrated significantly less mobility, as measured by daily steps, on the day before discharge.
This implies that preoperative mobility is a potential predictor of outcomes and should be factored into counseling and the selection of patients planned for these complex oncologic procedures. The challenges of the United States health care system demand an innovative, transformative approach to perioperative and post‐discharge care. A great deal of further work is needed to optimize the modality, frequency, and mechanisms of comprehensive patient‐centric telemonitoring as an adjunct to traditional health care delivery.

4.1 Limitations There were several limitations of this study. First, there were only 21 analyzable patients in this pilot. The knowledge gained with this study will inform future prospective trials. Second was the potential for over‐testing, given more than 50 variables and more than 100 tests for 21 patients. In designing this pilot, patient burden was a critical consideration for ePROs and PGHD. However, the tools utilized in this study have been validated and, at the set intervals, impose a reasonable burden of up to 7−10 min. Nursing support for technical issues was added to mitigate some of this burden. Collecting vital signs, ePROs, and mobility data simultaneously is critical to inform which aspects of telemonitoring are the most valuable. These data can then be utilized in larger multicenter trials. The recently published PVC‐RAM‐1 trial failed to find a difference in survival in this context and showed that postoperative telemonitoring can be effectively carried out in about two‐thirds of complex surgical oncology patients. Our findings in this pilot were similar as they pertain to adherence for the duration of the 30 days postdischarge. Third, in evaluating the presence of Grade 3a or higher complications, we were not able to retrospectively assess risk factors in a multivariable analysis in this 21‐patient pilot. Additionally, we were not able to monitor patients 24 h a day, 7 days a week in a continuous fashion, thus potentially missing changes in vital signs that may have correlated with complications or readmissions. We were not able to complete exit interviews in all 21 patients in the study and thus may be missing important insight from the 6 remaining participants. Last, it should be acknowledged that all analyses in this pilot are exploratory and hypothesis generating.
CONCLUSIONS Telemonitoring and telehealth are perioperative modalities of care that will surely be incorporated as we improve our digital interface with patients once they leave the hospital. In this study, we demonstrate that this is a feasible and acceptable approach. Future work is needed to assess which aspects of telemonitoring, and of what duration, render the most value to patients and health systems, to optimize outcomes and minimize resource utilization.

The authors declare no conflicts of interest. The feasibility of remote perioperative telemonitoring of patient‐generated physiologic health data and patient‐reported outcomes in a high‐risk, complex general and urologic oncology surgery population is evaluated. Future studies will ascertain optimal patient selection, duration, and extent of perioperative monitoring.
The experience and role of mentorship for paediatric occupational therapists
302d3ead-445f-4c23-ae45-0c5d6991df62
10087586
Pediatrics[mh]
INTRODUCTION Paediatric occupational therapy has expanding opportunities for practice, but this growth presents challenges for how to best support novice therapists during their early transition to practice. The rollout of the National Disability Insurance Scheme (NDIS) (NDIS, ) in 2016 in Australia has significantly changed the way in which therapy is funded for people with a disability. It has led to the establishment of new services and the expansion of existing organisations. This has resulted in increasing employment of paediatric therapists within community settings and a greater demand from the workforce for supervision or mentoring from senior occupational therapists. However, increased administrative demands under the NDIS and the pressure of billable hours (Green et al., ) place significant limits on the time available for therapists to access profession‐specific support. Rural and remote practice presents additional challenges, with therapists more frequently expected to work across the lifespan, impacting opportunities to access support and build paediatric‐specific professional practice competencies (Bourke et al., ; Moran et al., ). Paediatric practice requires skills and knowledge often beyond those taught within university occupational therapy programmes. The literature on issues facing new graduate occupational therapists reflects the challenges experienced, including the application of knowledge and skills, engaging in professional reasoning, managing time and caseload, and adapting to work culture. Feeling overwhelmed and having self‐doubt were central to many of these challenges (Asseraf‐Pasin, ; Moir et al., ; Murray et al., ). Working in paediatrics requires occupational therapists to be both family‐centred and occupation focussed, while maintaining an understanding of the developmental, sensory‐motor, social and play‐based foundations supporting participation (Barfoot et al., ; Bourke‐Taylor, ). Adding to this complexity is that paediatric occupational therapists are employed across a range of settings, including health and disability services, private practice, and school settings. Evidence notes that for therapists employed in school settings, or working closely with educators of school‐aged clients, the unique nature of the knowledge required to work in partnership with teachers and families means that therapists feel poorly prepared for practice with entry‐level education alone (Brandenburger‐Shasby, ; Bucey & Provident, ; Laverdure, ; Pollock et al., ). Applying occupational therapy knowledge to work with children and their families, in complex practice settings that involve a range of stakeholders and systems, requires targeted and context‐specific support during early professional development. Learning through experience and reflection on practice is the bridge between tertiary education and workforce development (Toal‐Sullivan, ). Professional development is sought out to help build competencies and is required to maintain professional registration (Myers et al., ). Professional development can include professional supervision, attending conferences and workshops, reading and reflecting on research, and engaging in a mentoring relationship. Research related to paediatric practice has found that workshops and conferences alone are often insufficient in building competencies (Brandenburger‐Shasby, ; Bucey & Provident, ; Laverdure, ; Pollock et al., ).
Other forms of professional development, including mentoring, are seen as key to developing professional competence and confidence (Myers et al., ). Mentorship is viewed as distinct from clinical or professional supervision; supervision is a process where the supervisor is responsible for the employee's performance to ensure they meet job expectations (Occupational Therapy Australia, ; Scheerer, ). Mentorship is generally viewed as a relationship that involves a degree of intellectual and/or emotional connection, is non‐authoritarian and reciprocal in nature, and offers the mentee guidance, support, and encouragement towards professional development and improvement (Clutterbuck et al., ). A number of national occupational therapy associations worldwide label mentoring as a key professional development tool (Doyle et al., ). Over 20 years ago, the Australian Occupational Therapy Association recognised the need for mentorship, establishing Mentorlink, a programme matching mentees to mentors (Occupational Therapy Australia, ; Wilding et al., ; Wilding & Marais‐Strydom, ). The Mentorlink programme has consistently noted the high demand for paediatric mentors, with mentees often waiting long periods to be matched to a mentor (Mentorlink programme, email communication, 10th October 2019). This appears to reflect that paediatric therapists are aware of their need for support and perceive mentorship as a core professional development strategy. Access to structured mentoring has been seen as a key strategy to support clinical reasoning, while also building professional identity and integration into the workplace (Asseraf‐Pasin, ; Wainwright & McGinnis, ). A large and long‐standing body of clinical and professional reasoning research in occupational therapy has consistently reflected that one of the fundamental differences between novice and expert clinicians is their reasoning abilities (Scanlan et al., ; Unsworth, ; Unsworth & Baker, ). The literature on the role of mentoring in education, business, and nursing is plentiful (Clutterbuck et al., ). A scoping review of mentoring research in occupational therapy was conducted by Doyle et al. . Of the 20 studies that met inclusion criteria, most were non‐empirical or at lower levels of evidence. There was not a consistent definition of mentorship; however, the authors proposed the following definition: ‘mentoring is a goal‐oriented learning process which takes place in a supportive relationship.’ (p.544). Doyle and colleagues reported that mentoring included support with knowledge acquisition, translation, clinical reasoning, and goal setting. Commitment, respect, and trust supported successful mentorship, with greater satisfaction when there was a clear process or plan of how support would be provided (e.g. establishing meeting frequency and types of communication to be used). Effective mentoring develops, in mentees, a sense of belonging to the profession. Only four studies included by Doyle and colleagues examined mentoring in paediatric practice. King et al.  investigated the role of mentorship in a children's rehabilitation service over an 11‐month programme. Significant changes were found on a range of measures including self‐confidence, information provision, listening, and clinical skills. Two studies focussed solely on school‐based practice.
Bucey and Provident  evaluated outcomes of a 6‐week peer mentoring programme for school‐based therapists, with therapists reporting improved competency in targeted areas, whereas Pollock et al.  examined outcomes of a 2‐year multifaceted professional development programme, with mentorship as one of the three key strategies. Training and mentorship were highly valued in facilitating practice change. The fourth paper by Ashburner et al.  evaluated the role of co‐mentoring (also known as peer mentoring) in effective translation of knowledge to practice following a 3‐day course on Autism Spectrum Disorder. Short‐term co‐mentoring (2–3 sessions over a 2‐month period) provided psychosocial support during trialling of newly learned strategies. No studies were found on the role, experience, and/or outcomes of mentorship for therapists working in community paediatric settings. Given the significant increase in numbers of paediatric therapists working in often complex community settings, it is critical to understand what the perceived needs are when mentorship is sought in this area of practice. With the range of organisations employing paediatric therapists at all levels of experience, there will be differing expectations of client‐related or billable hours required, and time allocated for professional development. Current employment contexts are unlikely to change; thus, it is important to understand how mentoring support can be most effective within these constraints. Doyle et al.  have summarised the main elements of a mentoring relationship based on the occupational therapy literature; what is not known is the role mentorship fulfils within the Australian context for community paediatric practice. With the time and effort committed by mentors and mentees to this relationship, employers and the profession as a whole will benefit from a more targeted and evidence‐based approach to mentorship. This has potential to drive successful professional development in community paediatric practice, including disability services, private practice, and school‐based settings.

1.1 Purpose This study is part of a doctoral research study, led by the first author, that aims to understand the role of mentorship in professional development support for novice occupational therapists working in community paediatric practice. This paper reports on a qualitative study that examined the perspective of both mentors and mentees on why they enter into a mentoring relationship, the support needs that mentoring responds to, and their experience of giving/receiving mentorship. How mentoring builds capability and resilience for working with children and their families in community practice contexts is discussed along with questions this study raised about how to best support novice occupational therapists during their early career development in community paediatric practice.
METHOD

2.1 Design Interpretive Description (ID) was applied to examine the experience of mentorship in paediatric occupational therapy. ID is responsive to the experience‐based questions of professional practitioners and is designed to yield practical implications to inform practice development (Thorne, ). Hence, it was well suited to this study and the broader aims of this research programme. Importantly, ID acknowledges the contextualised experiences of participants (Ajjawi & Higgs, ; Thorne et al., ). The two groups studied in this research were paediatric occupational therapy mentors and mentees. ID was used to examine similarities and differences in the experiences of mentors and mentees. Ethical approval was received from the University Human Research Ethics Committee (2018/738).

2.2 Participants With the goal of examining mentorship in diverse community paediatric practice contexts, mentors and mentees from across Australia were invited to participate through a number of avenues, including: emails to coordinators of Occupational Therapy Paediatric Communities of Practice; a recruitment request posted on the Occupational Therapy Australia research site; a recruitment request posted on an Australian Paediatric Occupational Therapy Facebook page; and snowball recruitment, with some participants introducing the first researcher to other therapists with mentorship experience. The following inclusion criteria were applied: (1) occupational therapists currently working, or with recency of practice (in the last 12 months), in paediatrics and with experience in community settings; and (2) therapists currently in a mentoring relationship where both therapists are occupational therapists and the mentor is a more experienced therapist. Recruitment criteria did not require both partners within a mentorship relationship to be interviewed, although the recruitment did lead to one mentor and mentee in a current relationship participating in the research. Nine mentors and eight mentees were interviewed (see Table ). The frequency of mentoring sessions for each participant was variable, ranging from several times weekly to monthly, and changed over time as mentee needs changed. Sessions were provided via a combination of phone, face‐to‐face, or videoconferencing.

2.3 Data collection A semi‐structured interview guide was used to prompt participants to reflect on their engagement in mentoring. The guide was sent to participants in advance to allow time to reflect on their mentorship experiences. Interview questions were developed to be consistent with the ID approach in examining the experience of therapists and were guided by the review of the mentoring and paediatric literature. Questions included: (a) what led them to become a mentee/mentor; (b) the nature of mentorship (given/received), including what was helpful about the relationship, how mentoring supported practice development, and perceived outcomes; and (c) the structure of support, including the practicalities of managing the relationship over time. The first author conducted all interviews via teleconference, videoconference, or in person. Interviews ranged from 45 to 60 minutes. All interviews were audio recorded and transcribed verbatim.
All transcripts were de‐identified at the time of transcription. Analysis progressed in tandem with data gathering, which was stopped when data saturation was reached. Saturation was the point at which no new information was provided and repetition of ideas occurred within each group (Creswell, ).

2.4 Data analysis NVIVO 12 qualitative software was used to organise and code data. Each transcript was read in full to gain an initial impression. This was followed by an inductive approach to open coding that captured the content of what the participant shared and used the participant's words or phrases to generate initial codes. OJ coded two mentor and two mentee transcripts in NVIVO, documenting with memos to explain coding decisions. MV independently coded three mentor transcripts, and MM coded three mentee transcripts. Coding discrepancies were discussed and resolved to ensure consistency with coding from the voice and perspective of participants before collapsing any open codes into categories. OJ then proceeded with the remainder of the coding. Some data were allocated to more than one code if the participant discussed more than one concept. For example, a number of participants discussed difficulties engaging with a large number of stakeholders and often feeling they did not have the skill or experience to manage this. This response was coded both as a feature of paediatric practice and as preparedness for paediatric practice. Analysis within each group (mentors and mentees) was attended to first, with grouping, collapsing, and expanding on codes that captured the experience of mentoring, and with constant comparison within each group.
FINDINGS Analysing both mentee and mentor interview data not only provided insight into the experience of mentorship for both groups but also reflected the more specific challenges in this area of practice that led mentees to seek out a mentor. Four core themes were identified; the first clearly demonstrated the challenge of navigating the complexity of practice. Theme two related to a safe and trusting relationship, which was viewed as an essential feature of mentoring. The role of mentors in supporting clinical reasoning and theory‐to‐practice translation, and the building of resilience through mentorship support, were identified as the final two themes.

3.1 Theme 1. Navigating the complexity Most mentors expressed a desire to take on the mentoring role as they (a) understood the complexity of practice and (b) felt they had something to contribute to both the profession and the client population by providing this support. Mentors discussed their role as enabling mentees to translate graduate‐level competencies into effective work with children. They have what I believe are core competencies but then we have to teach them how to work in paediatrics because they do not have the paediatric skillset that they need … to better reflect and apply [knowledge to practice], otherwise mentoring is a waste of time. (Mentor 9) All mentors articulated that paediatric practice brings unique challenges, where often it is unclear if the client is the child, and/or the parent/carer(s), teacher or school. This challenge impacted the ability of the mentees to set client‐centred goals and know who to involve in planning and intervention. Making decisions and goal setting was complicated by a variety of factors including who is paying for therapy and the context of intervention. Mentors reported that effectively responding as a paediatric occupational therapist also requires adaptability to meet the changing needs of children as they grow, ‘you're always taking childhood development into account’ (Mentor 5). They said that this requires a sound knowledge of child development, an understanding of the demands of environments where children spend their time, and a familiarity with child and adolescent occupations. For example, Paediatrics is extremely complex … there's so many different things then that can impact a child's development and progress. All of the family issues, the things that are going on at school for them, the number of stakeholders. It's never just working with that child. (Mentor 5) Understanding how to work with kids with autism who are non‐verbal, have multiple challenges, navigating their way through discussions with families which can be quite difficult or confronting. (Mentor 3) Mentees described the challenges they experienced as novice practitioners as often not knowing how or where to begin, ‘You go into a case and there is just so much you are confronted with.’ (Mentee 8). The demands of caseload management, reduced opportunities for reflection, and challenges accessing regular support with heavy caseloads added to the complexity.
For example, It was a bunch of assessments … a bunch of information, and I've got a lot of the complex kids. So, the caseload has been really heavy and there have been a lot of administrative time (needed to manage the caseload) to try and get a handle on, even just what their goals are when parents were a little bit unsure. (Mentee 2) Mentees reflected on how these challenges reduced their confidence in providing what they felt was effective intervention. This frequently resulted in feelings of inadequacy. When I first came out and got into paediatrics I felt totally out of my depth and very much like I (was) not giving clients adequate therapy … especially [being] a young therapist without children. (Mentee 1)

3.2 Theme 2. A safe and trusting relationship Interview data strongly indicated that feelings of incompetence experienced by mentees led them to reach out for emotional support in the safe space of a mentoring relationship. Some participants discussed the mentee's sense of not providing value for money and that they should be ‘across everything’ (Mentor 6), further adding to their need to reach out for support. Mentorship helped some mentees feel less isolated when confronted with complexity. The reassurance provided by mentors when mentees were overwhelmed by practice and feeling ‘very stressed and very pressured’ (Mentor 4) helped mentees know that their feelings were valid and to be expected, given the often challenging situations they were navigating. [My mentee] talked about their emotional and their mental health. There was some alarming stuff, and it really affirmed for me the role of mentoring and helping people to be safe. (Mentor 9) I've got quite an honest, open relationship with (mentor) … she is very supportive … and really puts a lot of importance on moving forward without pressure … it feels very safe. If you do not have a decent relationship, then you are not going to really be as open or as honest about [how] you are coping. (Mentee 3) Six of the eight mentees in this study accessed mentors external to their organisation, with one mentee stating, ‘if it's someone external [to the organisation] then you don't feel judgement made about your ability and capacity’ (Mentee 8). Mentees described the support they received as non‐judgemental, which encouraged them to openly share their concerns regarding their competency.

3.3 Theme 3: Theory to practice translation Mentors in this study played a critical role in helping mentees break problems down and scaffold their clinical reasoning. Supporting clinical reasoning was seen by mentors as a key role, with mentors bringing to the relationship their years of experience, relevant theories, and wider knowledge base in paediatric practice. For example, It's really supporting them to go back to the basics of OT, go through that clinical reasoning, talk about bringing together everything that we know ‐ developmental, behavioural, attachment‐based family‐centred stuff, and then applying (these theories) to achieve these goals. (Mentor 5) Mentees welcomed support with clinical reasoning. They looked to their mentor to facilitate the integration of the information about the client, the context and the mentee's own knowledge base, with the mentor's perspectives on the situation.
The mentor's role was frequently described as a process by which the mentor probed for further information and supported the mentee to step back from the immediate details of the problem presented to help them ‘re‐build a bigger picture’, while understanding the child in their context. No participants articulated any specific approach/framework used by mentors to facilitate the identification of the necessary information to engage in this process. The process used appeared to be ad hoc. Despite this, the participants expressed that the support provided allowed problems to be reframed and helped mentees to then prioritise intervention. For example, She [mentor] was really helpful with guiding me as to what is next, opening my thoughts up to I suppose what else I could consider. Not just what's the next piece but consider other factors and how to bring them into my sessions and into my GAS [Goal Attainment Scaling] goals … I think the number one thing is that she has been able to help me narrow down a few things. (Mentee 8) 3.4 Theme 4. Resilience building Mentees perceived the development of capabilities for professional or clinical reasoning as being built on the foundation of emotional support provided by the mentor. This appeared to help mentees develop confidence in their own skill set and build resilience to cope with challenging situations. Although building resilience was not overtly stated by participants as a goal of mentorship, the language used by both groups, for example, ‘learning to be flexible’, ‘coping’ and ‘increasing capacity to adapt to challenging situations’, are all features of resilience (Robertson et al., ). You can see them go through those ‘Ah‐ha!’ moments. You can see them grow as clinicians and grow in their confidence and their abilities. You start to get those success stories and you see them really start to respect their knowledge and their practices and start to love their profession. (Mentor 3) A number of participants reflected on the importance of increasing confidence and resilience for the mentee to develop and take ownership of their own approach to practice. For example, Reaching out to [my mentor] meant that I was able to develop my own style of occupational therapy. (Mentee 1) In helping a therapist become more confident and resilient, mentor 9 reflected, ‘I think mentoring is giving our clinicians the capacity and the skillset to be highly respected by other people.’
DISCUSSION AND IMPLICATIONS To understand the potential of mentoring for professional development support, it is important first to understand the context of community paediatric practice from which mentoring is sought. Paediatric practice occurs within the wider context of the organisations employing paediatric therapists, access to profession‐specific support from more experienced practitioners, how therapy services are funded, the social and cultural climate, and the varied state and national policies that impact service provision. The participants reflected on the struggle for novice paediatric therapists to navigate their role and scope of practice, while learning to meet the demands of how services are organised and delivered in different settings with children whose needs frequently change. Mentees reached out to mentors for support that they felt they were not able to access elsewhere. Supporting mentees in a safe, non‐judgemental, and honest relationship was seen to be an essential characteristic of successful mentorship for all participants. Most mentees sought this support external to their organisation, with some expressing that this helped them to manage their perception that their colleagues might question their competence. It was unclear if accessing the support externally was also because of challenges in accessing senior occupational therapists within their workplace. The occupational therapy mentorship literature suggests that features of emotional support such as reassurance, trust, and commitment contribute to a positive mentoring experience (Doyle et al., ; King, ; King et al., ). It could be argued that clinical supervision should be providing the support offered through mentorship, and that a mentor is an ‘added benefit’. Supervision certainly is essential, particularly for clinicians new to an area of practice, and a mentor should not be seen as a replacement for a supervisor within the practice setting. A supervisor can be viewed by a therapist as a mentor (Scheerer, ); however, mentees in this study reflected that they sought out an experienced therapist to provide emotional support and to scaffold clinical reasoning, regardless of how this support was labelled. It is imperative that employers provide access to, and allocate regular time for, novice clinicians to engage with a more experienced occupational therapist, to support both their emotional wellbeing and the development of competence. The establishment of a trusting and safe emotional relationship appeared to allow mentees to develop confidence and build their resilience, providing the opportunity to problem solve and feel reassured when situations felt overwhelming and challenging. Resilience enables the capacity to cope when problems seem too complicated to solve (Robertson et al., ), and building resilience led to professional growth for mentees. In examining therapists' experiences of resilience and the role of mentoring when working with children with mental health difficulties, Lowe found mentoring (both formal and informal) was crucial in allowing a therapist to debrief and problem‐solve, with resilience seen as a key to allowing therapists to cope and move forward.
The mentoring literature in occupational therapy underscores the role of effective mentoring in supporting a sense of belonging (Doyle et al., ), and the current study reflected the role that increasing confidence and building resilience played in allowing mentees to develop their own style, reinforcing their professional role and identity as a paediatric occupational therapist. This study identified that one of the key functions of mentorship was giving/receiving support to identify and select the relevant information from a broad knowledge base, and to apply that knowledge to the priority goals and aspirations of the child in context. All participants observed that working with children and families requires skills in integrating and translating an expansive knowledge base in partnership with multiple stakeholders. They regarded this as a complex set of knowledge, skills, and behaviours that cannot be fully developed in pre‐service education. This is not unique to community paediatric settings. Bringing together the theories and concepts learnt as an occupational therapy student and applying these to practice using sound clinical reasoning is a common challenge expressed by new graduates (Bourke‐Taylor, ; King et al., ; Moir et al., ; Murray et al., ; Scanlan et al., ; Turpin & Iwama, ). Copley et al. explored how an experienced paediatric therapist made clinical decisions. They found that information was combined and then prioritised by the therapist based on each child within their contexts, with the therapist using a range of knowledge sources built up throughout years of experience. The more experienced paediatric mentors in this study drew on their extensive knowledge from working with children and families to guide mentees in their clinical reasoning. This study highlighted the role of mentors in building competence, by supporting mentees to break down problems, understand their component parts, and reframe assessment and intervention planning by drawing on the relevant knowledge and evidence to support professional reasoning, decisions, and actions. This study did not examine mentors' paediatric practice knowledge, their skills in providing mentoring (such as interpersonal or communication skills), or the capabilities of mentors to provide this support effectively. Despite the positive experiences of mentorship in this study, the assumption that a more experienced therapist will be an effective mentor of clinical and professional reasoning should be challenged (Nyanjom, ). In the absence of any guidance, mentoring support for clinical reasoning in the current study appeared to be ad hoc. Mentors lacked a coherent framework to make transparent, for their mentees, how they prioritised and selected knowledge for decision‐making. In examining the relationship of clinical reasoning to knowledge development, Higgs et al. suggest that to improve clinical reasoning, ‘education must focus on the development of adequate knowledge structures’ (p. 119). They argued that a clinician needs both opportunity and support to identify their knowledge gaps and make effective connections. A framework or tool is needed to organise occupational therapy bodies of knowledge related to paediatric practice. Such a structure would support consistency when mentoring novice clinicians to identify, select, and prioritise knowledge to address the problems presented by the child in context.
Using a knowledge‐based framework to facilitate professional reasoning would reinforce the recommendation made by Copley et al. for a reconceptualisation of the different types and sources of information required to support paediatric clinical decision‐making. This could be a critical scaffold to support novice practitioners to develop their confidence in clinical decision‐making by making these critical steps of the occupational therapy process (identifying, selecting, and prioritising knowledge) transparent. LIMITATIONS There were a number of limitations to this study. As the study aim was to investigate the current experience of mentorship in paediatric practice, therapists who had left this area of practice did not meet the inclusion criteria. This group would have added further depth to understanding whether mentorship might have a place in supporting clinicians professionally and emotionally to remain in paediatrics. In addition, the recruitment process was likely to attract participants who felt positive about mentorship, and thus mentors or mentees who had a negative or neutral experience were less likely to be part of the research. Finally, the experience and skill set of mentors to provide quality mentoring using evidence‐based practice were not measured and remain an unknown variable in this research. Future research to better understand mentor competencies is recommended. Despite these limitations, the current study has contributed new understanding of the experience of mentorship as professional development for community‐based paediatric therapists. Positive mentoring experiences, particularly when provided with appropriate emotional support, allowed mentees to unpack complex clinical issues to support decision‐making and appeared to enhance their resilience to cope with complex and challenging situations experienced in community paediatric practice contexts. The profession must advocate for novice clinicians to be given the time and opportunity to access a senior clinician who is able to fulfil the role of mentor. To scaffold clinical and professional reasoning, the development of a framework that organises evidence‐based paediatric knowledge is recommended, providing a structure or common language for both parties in the relationship. A more competent workforce will build capacity to provide intervention to children and their families, contributing to long‐term benefits for the occupational therapy profession as a whole. This study was performed in partial fulfilment of the requirements of the first author's PhD, under the supervision of the second and third authors. The authors acknowledge that each author has read and approved the contents of this article. All authors listed meet the criteria of the International Committee of Medical Journal Editors (ICMJE). This research received no specific grant from any funding agency in the public, commercial, or not‐for‐profit sectors. The authors have no conflict of interest to declare. We declare that all authors have contributed and that all authors are in agreement with the content of the manuscript. OJ: Conception and design, data acquisition, analysis and interpretation of results, drafting of manuscript, and preparation of final manuscript for submission. MV: Conception and design, interpretation of results, review of manuscript, and final draft for submission. MM: Analysis and interpretation of results.
Perspectives and Attitudes of Dutch Healthcare Professionals Regarding the Integration of Complementary Medicine in Oncology
85c47270-22ff-4288-a7c6-c14b08aed359
10087649
Internal Medicine[mh]
In 2020, the estimated number of new cancer cases was around 19.3 million worldwide. To improve their quality of life and to cope with treatment toxicities, almost half of all patients with cancer worldwide use complementary medicine (CM) alongside conventional cancer treatment. Corresponding with these global findings, around 65% of Dutch cancer patients with breast cancer and 42% with hematological cancer indicated use of complementary medicine. A questionnaire focusing on the use of both complementary and alternative medicine in pediatric oncology in the Netherlands showed that 42.4% of the children with cancer made use of such therapies. Cancer patients are more likely to use CM during an advanced state of illness and overall, the prevalence of use has been increasing over the years. Complementary medicine consists of “supportive measures that help control symptoms, enhance well-being, and contribute to overall patient care,” such as acupuncture, physiotherapy, mindfulness, nutritional supplements, and yoga. Although evidence exists that CM can be beneficial for patients with cancer, there is also a risk of harm when used in combination with conventional cancer treatment, in particular for supplements. Therefore, effective communication between patients and their healthcare providers on the use of complementary medicine is pertinent. However, a systematic review indicated that around 40% to 50% of cancer patients (range 20%-77%) do not discuss their use of CM with their healthcare provider. In the Netherlands, patients with breast cancer, hematological cancer, and pediatric cancer indicated that 29%, 38%, and 66% respectively did not discuss their use of complementary medicine with their healthcare provider. Patients explained this lack of communication by stating that they either experienced barriers for discussing these options with their provider or that they did not consider it necessary. To gain a better understanding of how improvement in communication about CM can be facilitated, it is important to be aware of the attitudes and beliefs of healthcare providers on this topic and what underlying factors influence these. Previous self-report studies show that one of the biggest hurdles that oncologists experience is not having enough knowledge on the topic. Further integration of CM into clinical practice could enhance the communication between healthcare providers and patients on the topic of CM, as well as reduce the risk of harm by improving the coordination between complementary medicine and conventional care. Within this paper, integration is broadly defined, from research and education on the topic of CM, to internally offering evidence-based CM services or having a referral network with qualified external CM providers. There is a growing interest in CM integration, due to greater awareness of the potential benefits, and an increase in demand for CM by patients with cancer. Key stakeholders of the integration of complementary medicine in conventional cancer care include healthcare providers and healthcare managers. Research indicates that healthcare managers are less familiar with CM compared to other healthcare professionals and that familiarity is a large factor for positive attitudes. From the healthcare providers’ side, the main reasons mentioned to explain why CM is not further integrated into oncological care were missing evidence, financial resources and qualifications.
In the Netherlands, mostly positive attitudes toward complementary medicine were found for diverse healthcare providers, although the majority of respondents considered their knowledge on CM to be lacking. In a study surveying the opinions of (non-)clinical Dutch healthcare professionals active in various fields, 64% of the respondents believed that the use of complementary medicine is of importance for optimal healthcare and would consider integration in their institution. To the best of our knowledge, no study has focused on the attitudes of Dutch healthcare providers and managers in oncology specifically. The purpose of this study is to assess the attitudes and beliefs of Dutch healthcare providers and managers on complementary medicine, as well as to gain a better understanding of the current status of integration of and communication about CM in oncology. In this study, a self-reporting, anonymous, online questionnaire was administered among healthcare providers and healthcare managers working in an oncological setting in the Netherlands. Respondents A convenience, volunteer sample of healthcare providers and healthcare managers working in oncology was recruited. When opening the link to the questionnaire, respondents gave explicit consent for the use of their data. Procedure A comprehensive overview of general oncology centers and all oncology outpatient clinics of hospitals in the Netherlands was created, resulting in a list of 74 oncology departments. The secretaries of these departments were approached by phone or by e-mail with the request to distribute the link to the online questionnaire among healthcare providers and healthcare managers working at their department. Secretary contact details were derived from the websites of the hospitals. After a month, a reminder was sent to all departments. In order to reach more participants, the link to the online questionnaire was also placed on the websites of the Netherlands Comprehensive Cancer Organization (IKNL), the Oncological Collaboration Foundation (SONCOS) and the Professional Association of Nurses and Caretakers in Oncology (V&VN). Furthermore, an announcement was made in the online IKNL newsletter. The link to the online questionnaire remained open for 2 months (4 May-1 July 2021). Of the 74 oncology departments that were directly approached, we received (partly) completed questionnaires from 54 departments (73%). In addition, healthcare providers or managers from 20 additional institutes responded, such as radiotherapy clinics and specialized oncology centers (eg, breast cancer). The mean number of respondents per institute was 3.3 (SD = 4.6). Only respondents who completed at least the first part of the questionnaire were included in the analyses. Questionnaire The questionnaire was adapted from previous publications and piloted among 2 healthcare providers (1 oncologist, 1 oncology nurse) and 1 healthcare manager working in an oncology department at an academic hospital in the Netherlands. Completion time during the pilot appeared to be approximately 5 minutes. The pilot resulted in minor modifications to the formulation of the questions before the final version was disseminated. The final questionnaire starts with an assessment of the demographic characteristics of respondents (sex, age category, organization, department, specialization). This was followed by a clear description of what complementary medicine entails, with examples provided, to ensure that the respondents had similar definitions of CM in mind.
The survey that followed consisted of 2 parts, with 14 questions in total. The first part relates to the status of integration. First, a general assessment was made of the degree and area of implementation of complementary medicine in the respondent’s oncology department (areas: healthcare program, research, education, organization policy, department policy, personal policy, or other). Then, respondents were queried about barriers to implementation (knowledge, experience, scientific evidence, financial sources, support of management or colleagues, or other). The subsequent four questions were focused on specific details of the integration of complementary medicine in their institution related to communication: discussing, advising, referring or offering complementary medicine. The second part aims to gain more insight into the attitudes of healthcare professionals toward this integration. With this intent, 7 statements about the integration of complementary medicine in oncology care were presented to the respondents. Data Analysis Descriptive statistics were used to summarize the demographic characteristics of respondents, the perspectives of healthcare providers on the integration of CM, and the respondents’ general attitudes and beliefs toward CM. A multivariable analysis was performed to gain a better understanding of the underlying influences for the attitudes and beliefs toward CM. Adopting the stepwise method conducted by Lee et al., chi-squared tests were performed to explore the variables relating to the attitudes and beliefs of respondents toward CM. The variables that had a P-value ≤.20 in the chi-squared tests were included in the binomial logistic regression models. An exception was made for the variables “type of healthcare providers” and their sex, due to the high level of multicollinearity (chi-squared P-value <.000). Therefore, when both variables were significant in the univariable analyses, the variable with the lowest P-value was chosen to include in the multivariable analysis. The independent variables included in the analyses were “type of healthcare provider,” “age” and “sex of respondents,” as well as whether or not CM activities have been implemented in the institution. Due to the low response rate of managers, only healthcare providers (medical specialists and nurses) were included in the model. The univariable analyses can be found in the . The nominal outcome variables included in the univariable analyses had to be transformed into dichotomous outcome variables for the multivariable logistic regression model. In line with, for example, van Vliet et al., positive attitudes and beliefs were captured by (completely) disagreeing with “healthcare providers should not engage in CM” and “my institution should not offer CM internally,” or (completely) agreeing with “CM is important as supplement to oncological treatment,” “healthcare providers should routinely inquire about patients’ CM use,” “healthcare providers should be able to advise on the effectiveness and safety of CM,” and “my institution should have a referral network of external CM providers.” All statistical analyses were performed using Stata/SE software (version 16.1). Ethical Considerations The research project “COMMON” was exempted from formal approval under the Dutch Medical Research Involving Human Subjects Act by the Arnhem-Nijmegen Medical Ethics Committee (case number 2020-6917).
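As a purely illustrative aside on the Data Analysis described above: the analysis was run in Stata/SE 16.1, but the same two-step logic (a univariable chi-squared screen at P ≤ .20, followed by a binomial logistic regression on the retained predictors) can be sketched minimally in Python. The file name and column names below are hypothetical stand-ins for the survey export and questionnaire items, not the authors' actual variables.

```python
# Illustrative sketch only (not the authors' code): the analysis reported above was
# run in Stata/SE 16.1. The file name and column names are hypothetical stand-ins.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("common_survey.csv")   # hypothetical flat export of the responses

outcome = "cm_is_important"             # 1 = (completely) agrees CM is an important supplement
candidates = ["provider_type", "age_group", "sex", "cm_implemented"]

# Step 1: univariable chi-squared screening; retain predictors with P <= .20.
retained = []
for var in candidates:
    table = pd.crosstab(df[var], df[outcome])
    chi2, p, dof, expected = chi2_contingency(table)
    if p <= 0.20:
        retained.append(var)

# The paper notes provider type and sex were highly collinear; if both pass the
# screen, only the one with the lower univariable P-value would be carried forward.

# Step 2: binomial logistic regression on the retained (categorical) predictors.
formula = outcome + " ~ " + " + ".join("C({})".format(v) for v in retained)
model = smf.logit(formula, data=df).fit()
print(model.summary())
```

The workflow is the same regardless of the software used; only the syntax differs from the Stata commands the authors ran.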
Characteristics of Respondents Of the 260 people who started the survey, a total of 209 respondents (80.4%) provided complete information up to and including part 1; perspectives on the integration of CM . Of these, 159 healthcare professionals completed the entire survey, including the questions on attitudes and beliefs toward CM (61.2%). The majority of 209 respondents were women (85.9%) and 60.5% of the respondents were 50 years or younger . Mostly nurses (76.1%) and medical specialists (20.1%) completed the survey. The 159 respondents that finished the entire survey are generally very similar in terms of demographic distribution to the 209 respondents that just filled in part 1. Perspectives on the Integration of Complementary Medicine In , an overall impression of the perspectives on the integration of complementary medicine in oncology is depicted (n = 209). Two-thirds (68.4%) of the respondents indicated that their organization has implemented complementary medicine in oncology, or envisions implementation. Most of the implemented activities were implemented at department level.
In the open question, activities mentioned were massages, tools for acupressure or colleagues specializing in counseling about complementary medicine. Of all 209 respondents, 49.3% stated they experienced 1 or more barriers to implement activities related to complementary medicine in oncology, of which knowledge was most commonly indicated to be lacking. This was followed by lack of experience, financial support and management support. Scientific evidence for the effectiveness of complementary medicine was least indicated, although it was still selected by 43.7% of respondents. In the category “other,” respondents added other lacking sources, such as time for implementation and execution, opportunities to learn from experienced colleagues, coordination with involved parties, and acceptance and commitment from colleagues. A notable 71.1% of respondents said that in their institution, patients can discuss complementary medicine with their healthcare provider. However, less than half of respondents believed that healthcare providers in their institution are able to advise patients about complementary medicine (45.3%), although 18.1% indicated this is in preparation. Moreover, only 43.3% reported the ability to refer to external complementary medicine providers or to offer CM internally (42.1%). Many did not expect a change in referral to and internal offering of complementary medicine (47.8% and 43.3%, respectively). General Attitudes and Beliefs Towards Complementary Medicine In , the general attitudes and beliefs of respondents toward complementary medicine are summarized (n = 159). In total, 86.8% (completely) agreed that complementary medicine is an important supplement to oncological treatment. Over three-quarters (76.7%) felt that healthcare providers should engage in complementary medicine for patients, but fewer people (69.1%) believed that healthcare providers should routinely inquire about the use of complementary medicine by patients. The majority (82.3%) of respondents believed that healthcare providers must be able to advise patients on the effectiveness and safety of complementary medicine, although 72.3% indicated that they would need support to discuss this topic with patients. Moreover, 19.5% thought that their institution should not offer CM internally, while 62.3% believed that their organization should have a referral network of external complementary healthcare providers. Multivariable Analysis of Attitudes and Beliefs Towards Complementary Medicine As can be seen in , being female makes it significantly more likely to (completely) agree with the belief that CM is an important supplement (OR, 5.10; 95% CI, 1.58-16.41) and to (completely) disagree with the statement that healthcare providers should not engage in CM (OR, 3.31; 95% CI, 1.17-9.37). Institutional implementation of CM also made it more likely for respondents to (completely) agree with the statement that healthcare providers should routinely inquire about patients’ CM use (OR, 2.25; 95% CI, 1.07-4.74) and to (completely) disagree with the statement that their organization should not offer CM internally (OR, 2.26; 95% CI, 1.09-4.69). Lastly, nurses were more likely than medical specialists to agree that there should be an external referral system (OR, 2.17; 95% CI, 1.02-4.60) and that CM should be offered internally (OR, 2.28; 95% CI, 1.01-5.13). The age of respondents was never significantly associated with the attitudes and beliefs of respondents. 
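For context on how the figures above are derived: an odds ratio and its 95% CI come from exponentiating a fitted log-odds coefficient and its confidence bounds. A hedged continuation of the earlier hypothetical sketch (illustrative only, not the authors' Stata output) shows that step.

```python
# Continuation of the earlier hypothetical sketch: convert the fitted log-odds
# coefficients into odds ratios with 95% confidence intervals, the form in which
# the results above are reported (e.g. OR 5.10, 95% CI 1.58-16.41).
import numpy as np
import pandas as pd

params = model.params          # log-odds coefficients from the logistic fit
conf = model.conf_int()        # 95% bounds on those coefficients (columns 0 and 1)

odds_ratios = pd.DataFrame({
    "OR": np.exp(params),
    "CI 2.5%": np.exp(conf[0]),
    "CI 97.5%": np.exp(conf[1]),
})
print(odds_ratios.round(2))
```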
This survey examined the attitudes and beliefs toward CM among a sample of Dutch healthcare professionals in oncology, as well as their perspectives on the current status of integration of CM in oncology. Over half of the respondents indicated that their institution has implemented CM in their oncology department, while 10% stated that implementation is envisioned. A direct comparison of the extent of integration to that of other countries is not possible, as such studies used different conceptualizations, methods and definitions. However, some findings are highlighted for the purpose of context. A European study stated that 47.5% of European oncology centers provided integrative oncological treatments, while an Australian study found that 25.8% of their organizations offered integrative oncology. In the United States, 60% of National Cancer Institutes listed information on integrative therapies on their websites. Around half of the respondents reported that they lacked one or more resources for CM implementation in oncology. Congruent with previous findings, barriers to implementing CM activities indicated by the study sample were mainly a lack of knowledge and experience, but also lacking financial support, support from management, and scientific evidence. The majority of respondents indicated a need for support in discussing CM use, despite the results showing that in most hospitals, patients are provided with the opportunity to discuss CM use with their healthcare providers. A survey conducted in Germany also indicated low confidence among healthcare providers in discussing CM. These limitations can hinder a healthcare provider’s ability to adequately advise their patients and to provide support, which may lead to patients making decisions on CM use that could potentially be harmful. Providing education and training on CM for healthcare professionals could be part of the answer. Moreover, the responses show that less than half of the institutions in this sample refer to external CM providers or offer CM internally to patients, which also appears to be envisioned less often than discussing and advising patients on CM use. Lack of financial support and support from management and colleagues can potentially explain this finding. Change can be driven by professionals working in the field; however, healthcare managers are key stakeholders needed to facilitate the integration of complementary medicine in conventional care.
Therefore, their attitudes, beliefs and perspectives on this topic are relevant, particularly since some of the crucial points mentioned for effective integration are having a strong strategic plan, supportive leadership and a viable operating budget. Little research has been done investigating the managers’ perspectives on CM, and thus further research into this topic is warranted. As this group of interest is relatively small, a more direct approach might be more suitable than convenience sampling. Given the limited number of responses, unfortunately, no conclusions can be drawn on the opinions of managers included in this survey. Overall, the attitudes of the respondents toward CM were positive. Most respondents agreed that CM is an important supplement to conventional cancer treatment and that healthcare providers need to engage and should have knowledge on the topic. Previous surveys in European countries also report a predominantly positive attitude among oncology professionals toward CM, although healthcare providers’ attitudes are not measured in a consistent manner across studies. Respondents seemed more neutral when it comes to the routine inquiry of CM use, external referral systems or internally offering CM. Already having implemented CM in the institution meant that respondents were significantly more likely to believe that healthcare providers should routinely inquire about CM use and that their institution should offer CM internally. This suggests that healthcare providers are more uncertain how CM should be integrated into standard care when it has not been implemented yet. Examinations of different types of implementation show a preference for a more integrative form of oncology care, where patients receive guidance and have access to all information and care at a single location. Compared to medical specialists, nurses were more likely to show a positive attitude toward referral to external CM providers and offering CM internally. This could be due to nursing being generally a more holistic and supportive role toward the patient than the more technical role of a medical specialist. Lastly, in line with previous research, the results show that female healthcare providers are significantly more positive toward complementary medicine as part of cancer treatment compared to male healthcare providers. This could reflect a greater open-mindedness of females toward CM in general as some suggest, backed up by studies showing that female patients are also more interested in CM than men. However, the high response rate of nurses (who are mainly female, see ) might have influenced these results as well, given the high multicollinearity between the type of healthcare professionals and their sex. Some other limitations of this study are important to note when interpreting the findings. The method chosen is a convenience, volunteer sample, which is a poorer reflection of the population compared to random sampling. Due to this chosen method, the sample size is relatively low with limited statistical power to detect small differences. The survey was split into 2 parts and designed to be simple to encourage response. About 80.4% of the respondents completed at least the first half of the survey and 61.2% filled it in completely, limiting the generalizability of the findings.
This might reflect self-selection bias, as it can be assumed that respondents who fully completed the survey were generally more interested in and/or positive toward the topic of CM, thus influencing the results of attitudes and beliefs. Moreover, all data were self-reported, which can thus be subject to recall bias for the questions about integration, in addition to the possibility that the respondents may not be fully aware of what their institution provides. Lastly, the concept of implementation is open to many interpretations, and it is therefore not entirely certain how the respondents perceived the questions on the integration of CM. The answers should thus merely be interpreted as a signal that complementary medicine has been receiving more consideration. The findings of this study indicate that in a sample of Dutch healthcare providers, consisting mainly of nurses, attention is being paid to the integration of CM into oncology. Overall, the attitudes of respondents toward CM were positive. The main barriers for implementing CM activities were missing knowledge, experience, financial support, and support from management. To improve the ability of healthcare providers to guide patients in their use of complementary medicine, these issues should be delved into in future research.
Connecting environmental and evolutionary microbiology for the development of new agrobiotechnological tools
b123c899-c6b4-4064-b00b-8d989c8ed6c1
10087822
Microbiology[mh]
The author declares no conflict of interest.
Risk for intellectual disability populations in inpatient forensic settings in the United Kingdom: A literature review
c821de7e-be1f-4a81-b0b3-2487594115c2
10087896
Forensic Medicine[mh]
BACKGROUND Approximately 1.5 million people in the United Kingdom (UK) (2% of the population) have an intellectual disability, defined as “neurodevelopmental disorders that begin in childhood and are characterized by intellectual difficulties as well as difficulties in conceptual, social, and practical areas of living” (American Psychiatric Association, ). This is in line with an estimated global prevalence rate of 2.2% (World Health Organisation, ). As defined in the Forensic Disability Service report (2019) forensic disability services “span the conventional boundaries between disability, mental health and the criminal justice system” (QO, ; p. 6). In the criminal justice system (CJS), people with intellectual disabilities are overrepresented compared to the general population. However, estimates vary due to how intellectual disabilities are defined and recorded (Young et al., ). In the United Kingdom, forensic intellectual disability services are responsible for the assessment, treatment, rehabilitation, and care of patients diagnosed with an intellectual disability (Mental Welfare Commission, ). When a person with a mental health disorder or intellectual disability has committed an offence that could warrant a prison sentence or detention for public safety, a court can decide to detain them under the Mental Health Act (2007) (MHA). In these circumstances, the individual receives treatment in a secure hospital, based on their assessed level of risk. Therefore, this review defines individuals with an intellectual disability, detained under the Mental Health Act, and residing in a secure hospital setting as intellectual disability forensic inpatients. Following the reported abuse of patients with an intellectual disability at Winterbourne View Hospital from February 2008 to May 2011, a review of care across England for people with challenging behaviour was undertaken. The Transforming Care report (UK Department of Health, ) highlighted a failure to implement high‐quality care and deliver outcomes in line with best practice for people with learning disabilities or autism who exhibit challenging behaviour. For example, 73% of Winterbourne patients were detained under the Mental Health Act and faced long periods of detention. Subsequent independent reviews in both England and Scotland into the inclusion of intellectual disabilities and autism within the Mental Health Act raised concerns regarding how this legislation works for people with intellectual disabilities (UK Department of Health, ) (Scottish Government, ). Notably, people with intellectual disabilities face longer periods of detention than those who do not have an intellectual disability, which poses a risk to their human rights (Chester et al., ) (Mental Welfare Commission, ). There is also significant concern that secure ward environments can be unsuitable for people with an intellectual disability and pose a risk to a patient's physical or mental health (Mental Welfare Commission, ). Patients with intellectual disabilities can have difficulty accessing appropriate and prompt support while in acute settings (Marshall‐Tate et al., ), further exacerbating the risks to physical and mental health posed by longer periods of detention. Care planning is a key mechanism by which a patient's care and treatment can be developed, documented and shared. Well‐implemented care planning provides a participatory framework for reviewing the benefits of a treatment programme and enables person‐centred care (Mental Welfare Commission, ). 
Recent consultations on the reform of the Mental Health Act stress the need for patient preferences to be placed “front and centre” (DoH, , p. 7). People with intellectual disabilities can experience difficulties in learning new information, remembering and processing information, problem‐solving, and developing coping strategies for novel situations. These discrepancies in functioning can compound to create significant barriers to communication (Drainoni et al., ). Being able to contribute meaningfully to care planning requires skills in communicating, processing and weighing up information as well as being able to articulate personal points of view (National Institute for Health and Care Excellence, ). Risk assessment and risk management are major factors in care planning for forensic patients. The Department of Health ( ) Best Practice in Managing Risk defines risk as: ‘relating to the likelihood, imminence and severity of a negative event occurring (i.e., violence, self‐injury, self‐neglect)’ (p. 15). However, research has shown that people with an intellectual disability can perceive risk and protective factors differently, affecting their understanding, communication, and decision‐making around safety (Martí‐Agustí et al., ). This can lead to risky behaviours that may complicate the management of care. Risk assessment instruments in forensic psychiatry often combine actuarial and clinical data, and stress the dynamic nature of risk as well as the importance of situational triggers so that plans and procedures can be put in place to mitigate the possibility of an adverse event occurring. Where there is an identified risk of violence, possible interventions include increased use of de‐escalation techniques, adaptations to the environment, psychological and allied health professional led therapies, and the possibility of seclusion (Long et al., ). Despite the recognition and support among practitioners that patients should be involved in their risk assessment and management (Langan & Lindow, ), there is a significant research gap regarding effective means of engaging patients with an intellectual disability in risk assessment (Markham, ). Although there is some limited evidence drawn from the forensic mental health population (Reynolds et al., ), the experiences of forensic inpatients with an intellectual disability remain under‐researched, despite the communication challenges that this population can often experience. Although there is a growing body of research evaluating risk assessment tools and risk management strategies for people with an intellectual disability in forensic services such as the Historical Clinical Risk Management‐20, Version 3 (Douglas et al., ) and Risk for Sexual Violence Protocol (RSVP) (Hart & Boer, ) (Lindsay et al., ), no literature review has been undertaken to synthesise the evidence on experiences of risk among patients with intellectual disabilities detained under the MHA in forensic inpatient settings. AIM The aim of this review was to map and appraise academic evidence on the concept of risk in the context of U.K. forensic services for patients with an intellectual disability. The following objectives underpinned the aim: To identify the forensic and health risks experienced by staff and patients with an intellectual disability in U.K. forensic services. To understand nurses' perceptions of managing risk in U.K. forensic intellectual disability services. 
To assess the extent to which patients with an intellectual disability can inform their risk assessment and management, and identify factors which may help or hinder this. METHODS The literature review was conducted between 1st May 2020 and 26th March 2021. As noted by Turner ( ), interest in forensic risk assessment started to grow around the year 2000 due to the extensive deinstitutionalisation occurring at the time. Therefore, articles included in the literature review were published between 2000 and 2021. The PICOST framework provided a structured approach for developing a research question and search strategy by considering population, intervention, comparison, outcomes, situation, study design and timeframe (Cullum et al., ). The search strategy was further refined through preliminary searches conducted in Google Scholar and Scopus, and consultation with the wider research team. Key search terms included combinations of words such as Learning disab*, Intellectual disab* and Forensic se*, Forensic unit, Forensic facilit* and Risk, Safety. The full search strategy is presented in Table . The identified search terms were combined using Boolean operators and entered into relevant medical, psychology, and social science databases. The bibliographical databases used were CINAHL, OVID, PubMed, Scopus, SocINDEX and Web of Science. When the parameters of an academic database did not facilitate the use of the full search strategy, the strategy was adapted or condensed, and deviations from the original strategy were noted. For example, CINAHL did not allow the input of the full search strategy. Therefore, the term “forensic ward” was removed as it yielded the fewest results. To further supplement database searches, reference lists in publications included for full‐text review were screened, and a free‐hand search was conducted. Results from the databases were exported into Endnote reference management software. The lead reviewer screened titles and abstracts and selected studies for potential inclusion in the review, applying pre‐identified inclusion and exclusion criteria. The criteria for inclusion of articles in this literature review included peer‐reviewed academic papers reporting on issues of risk for people with an intellectual disability, residing in low, medium and high‐secure forensic settings. Any articles that did not directly mention individuals with intellectual disabilities, were not based in adult forensic inpatient settings, or did not report on issues of risk, were not included for review. In the event a study was conducted in more than one setting, for example, a mixture of forensic and community settings, literature was included if it was possible to extrapolate findings relevant to forensic inpatient settings. We did not discriminate based on methodology, providing that the study had been subject to peer review. Expert opinion pieces and relevant grey literature were retained but not included. This provided additional context for the research team but ensured only peer‐reviewed evidence was included in the review. Papers included for review were collated in a table (Table ) according to author, date, title, publication journal, location, aims, participants, interventions, study design, findings, outcomes related to risk, and National Institute for Health and Care Excellence (NICE) quality rating. The NICE appraisal checklist provides a list of criteria for determining the quality of academic publications. 
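To make the search construction concrete, the short sketch below shows one way the Boolean combinations described above could be assembled before being adapted to each database's own syntax. It is an illustrative reconstruction only: the grouping of terms into population, setting and outcome blocks, and the exact truncation symbols, are assumptions based on the description in this section rather than the authors' actual strategy.

# Illustrative sketch (Python) of the Boolean search string described above.
# Term groupings and database adaptations are assumptions; each database
# (CINAHL, OVID, PubMed, Scopus, SocINDEX, Web of Science) applies its own
# field codes and truncation rules.
population_terms = ['"Learning disab*"', '"Intellectual disab*"']
setting_terms = ['"Forensic se*"', '"Forensic unit"', '"Forensic facilit*"', '"Forensic ward"']
outcome_terms = ['"Risk"', '"Safety"']

def or_block(terms):
    # Join synonyms with OR and wrap them in parentheses.
    return "(" + " OR ".join(terms) + ")"

def build_query(*blocks, drop=()):
    # AND together the OR-blocks, optionally dropping terms a database cannot
    # accept (e.g. "Forensic ward" was removed for CINAHL as it yielded the fewest results).
    filtered = [[t for t in block if t not in drop] for block in blocks]
    return " AND ".join(or_block(b) for b in filtered)

full_query = build_query(population_terms, setting_terms, outcome_terms)
cinahl_query = build_query(population_terms, setting_terms, outcome_terms, drop=('"Forensic ward"',))
print(full_query)
print(cinahl_query)

A string such as the one produced here would then be entered into each database's advanced search, with any deviations from the full strategy noted as described above.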
The NICE appraisal checklist was chosen as it provides a means of standardising quality ratings across a range of research designs (NICE, ). Papers are designated a quality rating based on the extent to which the NICE criteria are met: all criteria met (++), some criteria met (+) or criteria not met (−). Quality ratings are displayed in Table . Due to variance in study designs and insufficient information to enable meta‐analysis, data were synthesised using a thematic approach. A six‐stage process was used to categorise data into meaningful themes: (1) familiarisation, (2) initial coding, (3) identifying themes, (4) reviewing themes, (5) defining themes and (6) writing up (Braun & Clarke, ). RESULTS 4.1 Study characteristics Twenty‐two peer‐reviewed articles around issues of risk in U.K. inpatient forensic settings for people with an intellectual disability were deemed suitable for inclusion. Of these, seven were quantitative/experimental designs (Campbell & McCue, ; Chester et al., , ; Fitzgerald et al., ; Hogue et al., ; Morris et al., ; Novaco & Taylor, ), five were qualitative (Duperouzel & Fish, ; Lovell et al., ; Malda‐Castillo et al., ; Wright et al., ; Wood et al., ), four were cohort studies (Russell et al., ; Lindsay, Steptoe, et al., ; Lindsay, Carson, et al., ; Lindsay et al., ), two were service evaluations/audits (Alexander et al., ; Plant et al., ), two were survey designs (Fish et al., ; Mason et al., ), one was a case study (Ashworth et al., ) and one utilised the Delphi approach (Morrissey et al., ). 4.2 Findings on risk Three themes, each with several sub‐themes related to risk, emerged from the analysis of included studies: (1) Forensic risk, (2) Health and wellbeing risk and (3) Risk management. 4.3 Theme 1: Forensic risk In forensic inpatient settings, the most commonly assessed behaviours are risk of violence, risk of sexual offences towards staff, and risk of substance misuse. 4.3.1 Subtheme 1.1: Risk of violence Four studies reported on the risk of violence. Chester et al. ( ) compared the characteristics of 401 long‐stay forensic patients in England, including 66 with intellectual disabilities and 335 without, and reported that a higher proportion of patients with an intellectual disability were involved in serious physical assaults on staff and other patients in forensic settings: 47% compared to 21.5% of the non‐intellectual disability population. However, Malda‐Castillo et al.'s ( ) analysis of routinely collected information and incident reports found a higher rate, reporting that up to 70% of forensic inpatients had been involved in at least one violent incident over 12 months ( n = 138). Physical assaults were the most frequent type of incident, with 779 incidents reported over 1 year. A total of 39 sexual assaults directed towards staff were also reported, suggesting that sexual assault also poses a risk to staff, although incidents of physical violence are significantly more common (Malda‐Castillo et al., ). Physical assault, verbal abuse, harassment and psychological abuse were more likely to occur in shared spaces such as lounges, dining rooms and corridors. The authors conclude that staff working in intellectual disability secure services might have a higher risk of client‐inflicted injuries than staff working in non‐intellectual disability inpatient wards.
Physical violence leads to longer periods of detention for patients, increases the risk of both harm to and burnout among staff, and increases distress among patients (Novaco & Taylor, ). Malda‐Castillo et al. ( ) argue that, given the potentially severe impact of physical aggression, there remains a lack of knowledge of how violence develops and how intellectual disability services respond to violence. Wright et al. ( ) conducted interviews with patients ( n = 8) and nursing staff ( n = 10) to explore their attitudes towards violence and aggression in a high‐security setting, as a means of comparing views on ways to prevent violent behaviour. They noted that an 'institutionalised' physical environment exacerbates the risk of aggressive behaviour. Gender was seen to be a factor, with male patients suggesting that female nursing staff had a positive impact in reducing aggression, acting as a "calming influence" and spending more time with patients than their male counterparts did. The authors conclude by arguing that patient involvement in risk management may be one way to minimise the risk of violence in forensic settings. 4.3.2 Subtheme 1.2: Risk of substance misuse Two papers were found concerning historical substance misuse among people with intellectual disabilities in the CJS. Alcohol‐related crime and history of alcohol use were recorded for 477 participants referred to U.K. forensic intellectual disability services (Lindsay, Carson, et al., ). They found that 20.8% of inpatients had a history of alcohol misuse and 5.9% had committed an alcohol‐related crime, highlighting alcohol as a significant risk factor. Historical alcohol use may act as a precursor to psychiatric problems in adulthood and is associated with behavioural disturbances including physical aggression, sexual offences, theft and property damage (Lindsay, Carson, et al., ). Investigating substance misuse, Plant et al. ( ) conducted a retrospective baseline audit of case notes from an inpatient forensic service of 74 people with intellectual disabilities (54 males and 20 females) in the east of England. Results showed that a significant number of patients ( n = 34, 47%) had a history of harmful use of or dependence on alcohol or illegal drugs, with alcohol (41%) and cannabis (28%) being the most commonly used.
At six‐month follow‐up, following supplement treatment, 53% had sufficient or optimal levels. However, some patients remained deficient (13%) or insufficient (34%). The authors identified several risk factors for vitamin D deficiency among this population, which included being non‐ambulatory, poor dietary intake and having limited access to the outside, and consequently, poor sunlight exposure. 4.4.2 Subtheme 2.2: Mental health risk Five studies reported risks to mental health that seemed to stem from behaviours such as violence, self‐injury and self‐neglect. Chester et al. ( ) reported that significantly higher rates of self‐injury were recorded among forensic inpatients with an intellectual disability (77.3%) than the non‐intellectual disability population (61.2%). It should, however, be noted that no significant difference in levels of suicidal behaviour between the two populations was found. Forensic patients with a history of self‐injury believed that the risk of self‐injury was higher when patients were placed under restrictive measures, with limited movement and privacy. Three qualitative studies reported that feelings of restriction and loss of freedom were associated with an increased risk of harm to oneself and others (Wright et al., ; Duperouzel & Fish, ). Wood et al. ( ) offer specific examples of small restrictions that were linked to individuals experiencing negative emotions including anxiety, anger and dejection. Restrictions could include being unable to decide when to turn lights on or off, when to have a hot drink, when to go out or how to manage personal finances. 4.5 Theme 3: Risk management Seven papers related to nurses' and patients' views on risk assessment and risk management processes within intellectual disability forensic services. Lovell et al. ( ) explored nurses' perceptions of competencies for working with patients with an intellectual disability. Decision‐making around risk was viewed as integral to the nursing role. Balancing risk was an issue encountered daily and was a source of tension, with nurses fearing the consequences arising from making incorrect decisions regarding patient risk management. Nurses were anxious about and sometimes avoidant of risk, and there was ambivalence around testing new therapeutic interventions with individual patients due to fear of negative consequences for the patient in the event something went wrong. One example given was the decision to allow patients to leave the ward alone. While this may be positive for the individual's independence, nurses reported concern about whether a patient's mental state could pose a risk to themselves or others while out in the community. Team decision‐making was viewed as a key part of risk management as it was a way of insuring against repercussions in the case of adverse events arising from an incorrect decision (Lovell et al., ). The perceived focus on the 'high consequence/low frequency' end of the risk spectrum in forensic settings was explored by Higgins et al. ( ), who reported that patients were concerned that restrictive measures implemented as part of risk management might, in themselves, pose a risk as they could hinder recovery. Those with moderate intellectual disability who self‐injured reported that secure settings led to a restricted life, with little opportunity to manage day‐to‐day stresses (Duperouzel & Fish, ). In a follow‐up study, Fish et al.
( ) surveyed staff views on the introduction of a harm minimisation risk management policy in a forensic service in England. The authors argue that hoping for the cessation of harm in patients who self‐injure repetitively might be an unrealistic aim because self‐injuring is often a way of coping or surviving distress. 85% of staff in the survey ( n = 71) supported the introduction of a harm minimisation policy that taught patients about wound care, and 72% supported involving patients in care planning and risk management. Finally, Morris et al. ( ) compared the feasibility of two approaches to co‐production (MDT assisted and non‐assisted) in the completion of risk assessments and management plans in a medium secure setting ( n = 54 patients). Patients were invited to review their risk assessments and risk management plans. Thirty‐five (65%) participants rated their risk assessments and 25 (47%) completed risk management plans. Participants who rated their risk assessments separately from the MDT were significantly more likely to complete the risk management plans. This demonstrates that service users are willing to be involved in this key area of care planning.
DISCUSSION 5.1 Discussion of principal findings This review was undertaken to map and appraise the current evidence base around risk in relation to people with an intellectual disability in forensic settings. It is important to highlight that the discussion of findings pertains specifically to studies conducted in the United Kingdom. A distinction was drawn between forensic risks (risks that have a legal or criminal justice element) and health and wellbeing risks (relating to the patient's physical and mental health). The results of the review show that forensic risk and patterns of offending among inpatients with an intellectual disability have received greater attention in the literature than aspects of health and wellbeing. The academic evidence suggests that the risk of violence is a primary concern in U.K. forensic inpatient settings. Evidence was also found to suggest that a history of substance misuse could be a risk factor for aggressive behaviours and sexually inappropriate behaviour. The focus on the risk of violence is unsurprising given the significant body of work testing the efficacy of risk assessment tools with forensic inpatients with intellectual disabilities. Examples of these tools include the Historical Clinical Risk Management‐20, Version 3 (Douglas et al., ) and the Risk for Sexual Violence Protocol (RSVP) (Hart & Boer, ), both of which have been tested with inpatients with an intellectual disability. The strength of the evidence base pertaining to the efficacy of risk assessment tools may indicate a focus on the 'high consequence/low frequency' aspects of risk (Higgins et al., ), with professionals focusing on mitigating the risk of serious but infrequent events (violence, inappropriate sexual behaviour) rather than less noticeable long‐term risks (obesity, vitamin D deficiency). A notable finding was the evidence regarding risks posed to physical health by long‐term residency in a forensic setting, with just two studies highlighting the increased risk of vitamin D deficiency, obesity and diabetes, which could negatively impact long‐term health outcomes for people with an intellectual disability. Although concerns have been expressed about the suitability of forensic environments for inpatients with intellectual disabilities (Mental Welfare Commission, ), the review findings show a paucity of evidence in this area. Some qualitative evidence suggested that a perceived lack of control and overly restrictive measures were a factor in higher rates of depression, anxiety, and increased risk of self‐injury in U.K. forensic inpatient settings. Interviews conducted with non‐intellectually disabled forensic inpatients in England ( n = 18) revealed feelings of boredom, humiliation, and anger among those experiencing restrictions, but found no evidence of the anxious and depressive symptomology reported by patients with an intellectual disability (Tomlin et al., ).
This seems to indicate that if measures to reduce forensic risk are introduced, there may be negative consequences for the health and wellbeing of people with intellectual disabilities. Learning disability nurses reported difficulty in balancing the costs and benefits of risks, often relying on a multidisciplinary team approach to decision making around risk to minimise the impact of adverse events on inpatients with an intellectual disability. Multidisciplinary approaches to risk management are common in other CJS settings, including community settings, with non‐intellectually disabled forensic inpatients (Haines et al., ) and internationally (Orovwuje, ). Haines et al. ( ) reported that MDT decision making around forensic risk is shaped by the values, knowledge, and power dynamics within the MDT, with the views of service users often being side‐lined. There was one paper on the topic of using coproduction to enable patient input on issues of risk, and whether this is effective in improving the management of risk and the care provided for people with intellectual disabilities in U.K. forensic settings (Morris et al., ). When given the opportunity, the majority of patients with an intellectual disability actively engaged in risk assessments and the drafting of risk management plans, indicating a willingness among patients to discuss risk and the potential value of coproduction in facilitating this. This small evidence base aligns with wider research regarding the involvement of people with intellectual disabilities in care planning in health services. A recent review found that the experiences of adults with an intellectual disability of involvement in care planning within health services are mostly absent from the literature, with existing guidance using ambiguous and confusing language (Doody et al., ). Given the communication challenges people with an intellectual disability can face, including difficulties learning new information, remembering and processing information, problem‐solving and developing coping strategies for novel situations (Drainoni et al., ), this avenue of research may benefit from further exploration. 5.2 Strengths and weaknesses of the review Despite the overrepresentation of people with an intellectual disability in the forensic population, no previous reviews have appraised and synthesised evidence around issues of risk for forensic inpatients with an intellectual disability. The search strategy for the review was extensive and included multiple databases, screening reference lists and freehand searching. The research team had input from forensic nursing, medical, and allied health professional staff to refine the terminology within the search strategy. Due to time limitations, there were constraints to the search strategy. Only papers in the English language and published since 2000 were included. However, the research team felt this was justified given that research with the intellectually disabled forensic population started to receive major attention in the late 1990s. While relevant grey literature was retained, it was not formally included and did not form part of the results and discussion of the review. Hence, some pertinent information may have been missed due to the exclusion of grey literature. The diversity of study designs and populations also made it challenging to compare results and synthesise some findings.
In addition, the review focused on the United Kingdom as its geographical setting, so evidence from other countries and regions of the world is missing, meaning the results should be interpreted with caution. 5.3 Recommendations for education, policy, practice and research This review has identified several knowledge gaps, and recommendations are made for future research. There is some evidence to suggest that inpatients with intellectual disabilities in forensic settings face significant health risks, including a higher chance of developing obesity, diabetes, and vitamin D deficiency. There is a range of research within the general CJS population concerning the efficacy of interventions to improve health outcomes (South et al., ). Therefore, further research is required regarding the appropriateness and effectiveness of these interventions for forensic inpatients with intellectual disabilities. The risk of violence and inappropriate sexual behaviour within this population is somewhat established in the current literature, and robust violence and sexual behaviour risk assessment tools exist, although management strategies to address these need further attention. The apparent increased risk to health and wellbeing when overly restrictive measures are put in place in forensic settings needs further exploration, so that assessment and management are proportionate to the level of risk and consider the impact on individual health and wellbeing alongside forensic risk. While the rights of patients to direct their care planning and risk management are enshrined in some U.K. health policies, and evidence suggests that staff support the inclusion of forensic patients with intellectual disabilities in their risk management, further research is required to establish the effectiveness of interventions for facilitating patient‐led discussions about risk in forensic settings; coproduction methods have shown some promise in this area. Additional work is also required to assess the extent to which patient involvement in risk management helps to mitigate future risks, and whether patient input into risk management can lead to a reduction in restrictive measures.
CONCLUSION Risk is a broad and dynamic concept encompassing health, wellbeing and forensic risk. This review of risk associated with U.K. forensic inpatient settings found that individuals with an intellectual disability are at higher risk of violence, sexually risky behaviour, and self‐injury. The evidence base around health risks indicated an increased likelihood of experiencing mental health issues, sensory problems, and obesity compared to the general CJS population. Analysis of nurses' perceptions of risk assessment and management showed that balancing risks and managing patient and staff safety is a source of tension encountered daily. Furthermore, patients reported that restrictive measures and lack of freedom were significant factors in exacerbating certain mental health and wellbeing risks such as anxiety. The authors declare no conflict of interest.
Achievement of learning outcomes in non‐traditional (online) versus traditional (face‐to‐face) anatomy teaching in medical schools: A mixed method systematic review
61753620-b842-426b-9a22-e70193534190
10087909
Anatomy[mh]
INTRODUCTION Anatomy is regarded as a foundational component of medical student education, regardless of the student's future specialty or subspecialty (Davis et al., ). Throughout their preclinical and clinical educational experiences there is a repeated return to basic concepts and principles underlying the structure and function of the human body (Sbayeh et al., ). Traditionally, initial exposure to these concepts and principles has been mediated by face‐to‐face lectures and supportive learning through cadaveric dissection. Cadaver dissection remains important to many students as a way of augmenting their knowledge through direct work with cadaveric specimens (Azer & Eizenberg, ; Patel et al., ). Human anatomy courses and cadaver dissection laboratories provide a golden opportunity for medical students to attain a three‐dimensional understanding of anatomical structures and variability, and to learn and practice on a human cadaver as their first patient, which can mimic working with a live human body during their health care career. Dissection also reveals the relationships among organs and tissues, fostering familiarity with the different textures and physical characteristics of human bodies, which provides a basis for physical examination, interpreting medical imaging and performing clinical procedures (Davis et al., ; Johnson et al., ; Rizzolo et al., ). A frequent point of discussion is how to approach anatomy teaching and facilitate students' comprehension of difficult concepts and memorization of vast amounts of new information. Six techniques for anatomy education have been proposed: in‐person lectures, cadaveric dissection, inspection of prosected specimens, models, radiological and living anatomy teaching, and computer‐assisted learning (Iwanaga et al., ). Cadaver dissection, considered the gold standard for teaching anatomy (Hildebrandt, ), remains widely used. Also, over recent decades, different teaching technologies have been proposed for online anatomy teaching, either synchronously or asynchronously. These include online live lectures via Zoom, Microsoft Teams or other web‐based resources, pre‐recorded lectures, YouTube videos (Mustafa et al., ), uploaded PowerPoint presentations, dissection videos, a prosection laboratory using Blackboard Collaborate (BBC) video conferencing software (Blackboard Inc., Washington, DC) and the Netter 3D Anatomy computer model (Netter, ), three‐dimensional printing (3DP) digital models (Baguley, ), augmented reality (AR) (Azuma, ), and virtual reality (VR) (Izard et al., ; Kilteni et al., ). Over the past 30 years, anatomical education has faced a range of challenges, which has made these teaching technologies valuable supplements that can improve student knowledge and experience (Curlewis et al., ; Nicholson et al., ; Wilson et al., ). Major challenges to teaching anatomy in medical schools have been identified since the middle of the 20th century and into the 21st. The first is the lack of available space and qualified anatomy faculty, together with an increasing number of medical students, which makes it difficult to incorporate rapidly expanding medical knowledge into the curricula. This has required a reorganization of existing curricula; anatomy in particular has been under pressure to reduce teaching hours and the student load (Collins et al., ; Cottam, ; Craig et al., ; Patel et al., ). The second challenge concerns the costs of dissection labs; there is pressure to replace cadaver work.
In many medical schools, the authorities have advocated replacing dissection with other learning approaches intended to achieve identical final outcomes (McLachlan et al., ). The third major challenge is the Covid-19 pandemic, especially during the last 2 years, which has elicited unexpected and rapid changes in anatomy teaching methods and presented an opportunity for serious remodeling of medical curriculum design and the teaching of human anatomy. The fourth challenge is the decline in the number of cadavers following the Covid-19 pandemic, putting further stress on face-to-face anatomy teaching and dissection laboratories. Globally, the primary concern for cadaveric dissection during this pandemic is the sourcing and availability of bodies that can be used for dissection (Onigbinde et al., ). The International Federation of Associations of Anatomists (IFAA) acknowledges that body sourcing is usually challenging during pandemics such as Covid-19 (International Federation of Association of Anatomists, 2020). The possibility of contracting diseases can exacerbate this problem during such outbreaks (Singal et al., ). In countries that rely principally on unclaimed bodies for cadaveric dissection the problem is especially acute, as most such bodies lack medical histories by which their cause of death can be ascertained (Onigbinde et al., ). The aforementioned challenges, together with advances in technology and increased access to online learning, have put significant pressure on anatomy teaching, illustrated by the shift from face-to-face to online teaching. This pressure has become particularly notable in recent years and has been exacerbated by advances in new digital technologies such as augmented and virtual reality (Henssen et al., ). Consequently, universities worldwide have adopted different teaching approaches. For example, some have implemented curricular changes, especially since the time allotted to anatomy education in Europe, the United States, and Australia has declined considerably (Pais et al., ). They have therefore switched from a completely traditional cadaver-based curriculum toward more interactive, customized approaches that better fit the learning strategies of new generations, who appreciate the use of technologies such as augmented and virtual reality, social networks, and imaging for improved understanding (Davis et al., ; Richardson et al., ; Trelease, ). The traditional method of face-to-face anatomy teaching is being outpaced by the online teaching modality, which lacks haptic and kinesthetic learning but increases active learning (Palmer & Holt, ; Shachar & Neumann, ). Human anatomy and cadaver dissection are beneficial for the future clinical practice of medical students (Ghazanfar et al., ; Ghosh, ; Habbal, ), and online anatomy dissection courses have therefore raised concern about whether their learning outcomes are comparable with those of face-to-face teaching. One vital and objective method for assessing students' level of knowledge is an examination that measures academic performance and determines whether students have achieved a particular standard by demonstrating their anatomy knowledge and dissection skills (Shephard, ). Another tool for evaluating the effectiveness of online teaching is student satisfaction, which reflects a different perspective from objective measures (Allen et al., ; Eom et al., ; Herrington & Parker, ). Students will be satisfied and engaged in learning if online teaching facilitates their learning and increases their accomplishments and abilities.
For example, in 2013, Ke et al. identified five elements of student satisfaction: learner relevance, active learning, authentic learning, learner autonomy, and technology competence. Additionally, in 2012, Keengwe et al. argued that students' expectations influence the instructor's design of effective technology tools in online courses and are the key to understanding the satisfaction construct. The authors concluded that satisfaction was most strongly influenced by learning convenience combined with the effectiveness of e-learning tools. Similarly, Richardson and Swan concluded that students with high overall perceptions of social presence scored high in terms of perceived learning and perceived satisfaction with the instructor. Several articles have addressed the transition from traditional to online modalities in anatomy education in medical schools. For example, in 2014, Davis et al. studied 370 medical students to compare the learning outcomes between online anatomy labs and face-to-face dissection laboratories. They found that 91% disagreed that they learned more from the online labs than from traditional face-to-face anatomy laboratories. Students were adamant about having cadavers in their anatomy teaching; over 90% agreed that seeing dissected anatomy specimens is key to learning anatomy. The authors concluded that over 33% of students were dissatisfied with the anatomical knowledge achieved through online teaching. In contrast, in 2021, Yoo et al. studied 212 medical students in South Korea to examine the learning outcomes of a newly adopted approach to anatomy education. They found that 79% agreed that online learning gave them opportunities to review the recorded lecture videos repeatedly and to tailor their learning to each individual's pace, enhancing the efficacy of self-directed study. They also found that the total mean examination scores (mean ± SD), including the lecture and dissection laboratory, were significantly higher in online teaching (76.79 ± 9.47) than traditional teaching (71.33 ± 12.19). To the best of the present authors' knowledge, the effects of this shift to online teaching on the achievement of anatomical knowledge by medical students, and on their perception of and satisfaction with the courses and methods of human anatomy teaching, have not been completely evaluated. Therefore, the purpose of this systematic review was to explore the educational effectiveness of online anatomy teaching in comparison with traditional (face-to-face) teaching methods. Students' academic performances were investigated along with student satisfaction, which is a significant predictor of learning outcomes and a tool to evaluate the effectiveness of online instruction. The authors will discuss the strategies and challenges that anatomical education has faced owing to this shift and will answer and debate the following questions: Are students' academic performances, as measured by grades on exams, higher or lower using online teaching rather than traditional (face-to-face) anatomy teaching? Are students' satisfaction levels higher or lower when taught via the online modality rather than traditional (face-to-face) anatomy teaching? Can the online modality deliver the required anatomical knowledge to medical students efficiently? Can the online modality replace the traditional modality in anatomy teaching?
METHODS A systematic review was conducted during the academic year 2021–2022 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method and the current methodological literature describing the updated Joanna Briggs Institute (JBI) guidance for a mixed methods systematic review (Stern et al., ). The included records were screened, extracted and assessed by data synthesis and statistical analysis. Moreover, when available, non-narrative data were extracted and collected. Since this study did not involve any type of human material (cells, tissues, organs, patients, or other), institutional review board approval was not required and the manuscript was not eligible for ethical review. 2.1 Search strategy The search was conducted from February 2022 through April 2022. It was based on key search terms in the PubMed (US National Library of Medicine, National Institutes of Health, Bethesda, MD), Embase (Excerpta Medica database, Elsevier, Netherlands), ERIC (Education Resources Information Center, Institute of Education Sciences of the US Department of Education, Washington, DC), and Google Scholar (Google Inc., Mountain View, CA) search engines. These databases were searched for publications from 1990 to 2022. The search strategy was designed by one author (H.A.) and validated by a librarian and systematic review specialist. 2.2 Search terms The search terms used were combinations of (“anatomy teaching” OR “anatomy education”) AND (online OR virtual OR remote OR distance) AND (“medical school” OR “medical student”) AND (traditional OR “hands-on dissection” OR “in-person” OR “face to face” OR “cadaver dissection”) AND (“academic performance” OR grade OR “test scores” OR satisfaction). The aforementioned search engines were searched using those terms. 2.3 Eligibility criteria Eligibility criteria were defined (see Table ) using the PICOS approach, which specifies the population, intervention, comparator, outcomes and study designs relevant to the review. Thus, this review included articles published in English in peer-reviewed journals, comprising research articles, research reports, original research, letters and original communications from many countries around the world. The focus was on comparing online with traditional anatomy teaching in medical schools in terms of their effects on students' academic performances and student satisfaction. Review articles, conference papers and dissertations were excluded. Keywords related to the aforementioned terms were identified. The authors also searched the reference lists of articles identified through this search strategy and selected additional publications that fit the eligibility criteria and were deemed relevant. 2.4 Study outcomes The primary outcome for this review was the difference in students' academic performances, defined as exam grades or test scores on the examinations used by each medical school, between online and traditional (face-to-face) teaching methods; the secondary outcome was the difference in student satisfaction between online and traditional teaching methods. 2.5 Data extraction and synthesis One investigator (H.A.) imported all retrieved studies into the reference and citation management software EndNote 20 (EndNote, Clarivate Analytics, Philadelphia). All the studies were then imported into Covidence, a screening and data extraction tool for systematic reviews, for abstract screening, full text review and data extraction (Covidence systematic review software, ). Two reviewers (H.A. and L.X.)
independently screened all abstracts and completed full-text reviews of potentially relevant studies that fit the eligibility criteria (see Table ). After that, data were extracted from the text and numerical results reported in the included records, including the study ID (first author and publication year), title, type of study, design of study, keywords, country of the study, population, number of participating medical schools, demographic information if available, cause of the online shift, methods of online teaching, the outcomes measured in each study and how they were measured, and the numbers and sentences comparing these outcomes between online and traditional (face-to-face) teaching. Each reviewer then cross-checked all data and any disagreements between them were discussed and resolved by consensus. The extracted data were then categorized and imported to Excel (Microsoft Corp., Redmond, WA) for further examination. Afterwards, members of the team independently assigned the studies to two categories: studies of students' academic performances and studies of students' satisfaction levels. All studies were analyzed using mixed methods analysis to combine the qualitative and quantitative findings to address the overlapping or complementary questions of this review (Harden, ). The analyzed data in each category revealed the difference between outcomes in online and traditional teaching. The sample in this analysis was represented by different medical schools around the world that have taught anatomy/gross anatomy modules using online and traditional methods. 2.6 Assessment of study quality Two reviewers independently conducted a quality assessment and critical appraisal of eligible studies using the JBI Critical Appraisal Checklists for the respective study designs through https://jbi.global/critical-appraisal-tools (Joanna Briggs Institute, ). These checklists contain various questions that assess specific domains of studies to determine the potential risk of bias and can be answered with “yes,” “no,” “unclear,” or “not applicable” (see Tables , , , ). The risk of bias of individual studies was determined with the following cutoffs: low risk of bias if 70% or more of the questions scored yes, moderate risk if 50%–69% scored yes, and high risk if yes scores were below 50% (Munn et al., ). The risk of bias assessment was used to ascertain whether each study was devoid of selective outcome reporting and whether the outcomes were described adequately. Credibility and reliability were ensured by triangulation and peer debriefing.
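For illustration, the cutoff rule used in the quality assessment above can be written out explicitly. The following minimal Python sketch is not part of the review (the appraisals were done by hand against the JBI checklists); the function name and the decision to drop “not applicable” items from the denominator are assumptions made here for clarity.

def jbi_risk_of_bias(answers):
    # answers: list of JBI checklist responses such as "yes", "no", "unclear", "not applicable".
    applicable = [a for a in answers if a != "not applicable"]  # assumption: N/A items are excluded
    pct_yes = 100 * sum(a == "yes" for a in applicable) / len(applicable)
    if pct_yes >= 70:
        return "low risk"
    if pct_yes >= 50:
        return "moderate risk"
    return "high risk"

# Example: 6 of 8 applicable questions answered "yes" (75%) maps to "low risk".
print(jbi_risk_of_bias(["yes"] * 6 + ["no", "unclear"]))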
2.7 Data analysis Different online teaching methods had been implemented by different medical schools to accommodate the shift from traditional to online teaching. This shift was due either to the Covid-19 pandemic, changes in curricula to decrease anatomy teaching hours, or a lack of cadavers. It had a considerable effect on the learning outcomes of medical students and represented both a challenge and an opportunity to rethink future anatomical education in medical schools. The authors included different types of studies (see Table ) and synthesized their findings using both qualitative and quantitative methods of analysis to give greater strength to the review. When a review is restricted to the statistical combination of numerical effects it can be criticized as lacking context and explanation (Harden & Thomas, ). The studies were analyzed qualitatively by one of the authors using a comparative analysis that went back and forth between the exam grades and the narrative descriptions of the interventions and learning outcomes provided in the studies. This method defined the proportion and percentage of medical schools in which online teaching outperformed face-to-face teaching, or vice versa, in terms of academic performance and satisfaction level. The analysis then compared the percentage of medical schools with high students' academic performances in online teaching with the corresponding percentage in traditional (face-to-face) anatomy teaching. Additionally, the percentage of medical schools with high student satisfaction in online teaching was compared with the corresponding percentage in traditional (face-to-face) anatomy teaching. Two main themes were generated: the change in academic performance due to the shift from traditional anatomy teaching to online teaching, and the change in level of student satisfaction due to this shift. Data were analyzed quantitatively using the SPSS statistical package, version 25 (IBM Corp., Armonk, NY). A Kolmogorov–Smirnov test was conducted to check normality of distribution for both learning methods. Also, independent samples t tests were used to compare students' academic performances between the online and face-to-face methods, and the satisfaction levels of students between those methods. Descriptive statistics were also included for the online and face-to-face teaching methods.
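As an illustration of the quantitative workflow just described (descriptive statistics, a Kolmogorov–Smirnov check of normality, then independent samples t tests), a minimal Python sketch is given below. The review itself used SPSS version 25; the two score arrays here are invented placeholder values, not data from the included studies.

import numpy as np
from scipy import stats

# Placeholder per-school mean anatomy exam scores (invented values, for illustration only).
online = np.array([62.0, 71.5, 68.0, 80.2, 55.4, 90.1, 74.3, 66.8])
face_to_face = np.array([65.1, 70.0, 72.4, 78.9, 60.3, 85.7, 75.2, 69.0])

for label, scores in (("online", online), ("face-to-face", face_to_face)):
    # Descriptive statistics for each teaching method.
    print(f"{label}: mean = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")
    # Kolmogorov-Smirnov test against a normal distribution fitted to the sample;
    # a p-value above 0.05 gives no evidence against normality.
    ks_stat, ks_p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
    print(f"{label}: Kolmogorov-Smirnov p = {ks_p:.3f}")

# Independent samples t test comparing the two methods; a p-value above 0.05
# indicates no significant difference in academic performance.
t_stat, p_value = stats.ttest_ind(online, face_to_face, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")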
RESULTS 3.1 Study selection A total of 162 primary research studies were identified initially. Eighteen were duplicates and were removed. The remaining 144 papers were further processed. After title and keyword screening, abstract screening and full text review, 31 studies were finally agreed on by the authors and included for mixed methods qualitative and quantitative analyses. The PRISMA flow-chart was used for reporting the findings (see Figure for details). The records included in this study are listed in Table . Both authors agreed on the final number of studies included. 3.2 Features of included studies Overall, 31 studies met the inclusion requirements. These studies were of different designs: 2 randomized controlled trials (6.45%), 10 cohort studies (32.26%), 4 case control studies (12.9%), and 15 cross sectional studies (48.39%). They were published between 2006 and 2022. Thirteen were classed as original research (41.94%), six as reports (19.35%), two as letters to the editor (6.45%), one as original communication (3.23%), and six as research articles (19.35%).
Four were performed in the United States, three in India, three in Greece, two in Saudi Arabia, two in Canada, two in Turkey, and one each in Croatia, Argentina, Australia, Malta, Brazil, France, Taiwan, Pakistan, Jordan, Bahrain, Nepal, Singapore, China, South Korea, and Italy. Thirty-six medical schools were included in these studies, with 1776 students participating in total. Only two studies enrolled medical residents along with the students: Eansor et al. recruited medical and radiological residents and Fang et al. included both medical and surgical residents. Twenty studies reported demographic information about the participants and 11 did not. Twenty-six studies assessed satisfaction levels and 19 reported academic performance. The shift to online teaching was attributed to different reasons. Of those, Covid-19 was reported in 22 of the studies (70.97%), change in curriculum to decrease anatomy teaching hours in 8 (22.58%), lack of cadavers in 2 (6.45%) and natural disaster due to earthquake in 1 (3.22%). A series of online educational methods were evaluated, including: pre-recorded videos of anatomy labs and lectures, links to animations, digital cadaveric images, dissection audio-visual resources (DAVR), Zoom video-conferencing, Microsoft Teams, YouTube videos, online PowerPoint presentations, interactive stereoscopic virtual reality, 4D dynamic virtual anatomical dissection from scanned human cadavers, electronic anatomy drawing using CS software, virtual reality dissection simulation, dissection educational videos (DEV), 3D virtual models and animations through electronic anatomy software such as Complete Anatomy software, and 3D bio-digital human anatomy software. Interventions in the control group ranged from traditional learning via in-person lectures and textbooks to dissection labs (Table shows the features of the included studies). 3.3 Risk of bias assessment Risk of bias was assessed according to the JBI Critical Appraisal Checklists for randomized controlled studies, cross-sectional studies, cohort, and case control studies. It is reported in Tables , , , . The randomized controlled studies (Biasutto et al., ; Zibis et al., ) were assessed as having moderate risk of bias owing to confounding and non-blinding of participants and teachers of the intervention. In view of the nature of the intervention, blinding of students and educators during the study was not practical. In all cases, the control was students in the same year or same medical school who had not taken the anatomy course before. These two studies had low risk of reporting bias. The students were allowed to choose their participation voluntarily. For cross-sectional studies, one study was graded as moderate risk of bias owing to confounding variables and missing data for the control group (Duraes et al., ). Fourteen studies were low risk (Banovac et al., ; Cuschieri & Calleja, ; Fang et al., ; Hanafy et al., ; Khan et al., ; Khasawneh, ; Natsis et al., ; Ortadeveci et al., ; Özen et al., ; Potu et al., ; Sharma et al., ; Singal et al., ; Srinivasan, ; Totlis et al., ). The articles did not provide enough information about the characteristics of the participants or their demographic information. Two studies had selection bias owing to low sample sizes of fewer than 20 participants (Fang et al., ; Srinivasan, ).
For measurement of outcomes, two studies (Duraes et al., ; Srinivasan, ) had no quantitative data for the outcomes of the control group and were classed as having an unclear risk of reporting bias. All the cohort studies were graded as low risk of bias owing to confounding variables (Choi-Lundberg et al., ; El Sadik & Al Abdulmonem, ; Harrell et al., ; Nagaraj et al., ; Nathaniel et al., ; Stunden et al., ; Thom et al., ; Tucker & Anderson, ; Yoo et al., ; Zarcone & Saverino, ). Two studies had low risk of reporting bias owing to missing data (Harrell et al., ; Tucker & Anderson, ) as the number of students in the control group was not reported. For measurement of outcomes, two studies (Harrell et al., ; Nagaraj et al., ) lacked quantitative data for the outcomes of the control group and were classed as having an unclear risk of reporting bias. The case control studies were classed as low risk (De Faria et al., ; Eansor et al., ; Ikram & Rabbani, ; Yang et al., ). 3.4 Students' academic performance 3.4.1 Qualitative analysis Table shows that 19 studies compared students' academic performances between online and traditional (face-to-face, F2F) anatomy teaching. Nineteen medical schools were identified as having qualitative data to make the comparison: 47.37% (n = 9) of them showed higher performances in online than face-to-face teaching, 47.37% (n = 9) reported higher performances in face-to-face than online teaching, and 5.26% (n = 1) reported comparable performances in online and face-to-face teaching (Figure ). For example, in Biasutto et al. , 330 medical students attended online teaching whereas 698 attended face-to-face teaching. In online teaching, 60.41% of the students had good performances, defined as test scores equal to or higher than the mean test score; the percentage of good performances in face-to-face teaching was 82.30%. In contrast, Zarcone and Saverino compared 284 medical students attending online teaching with 249 attending face-to-face teaching. Good performances were achieved by 95.16% of the students in online teaching; the corresponding percentage in face-to-face teaching was 83.53%. Nathaniel et al. compared 102 medical students attending online teaching with 102 attending face-to-face teaching. The mean test score for online teaching (86.07) was comparable to that for the face-to-face method (86.08). 3.4.2 Quantitative analysis Table shows 16 studies with quantitative data for online and face-to-face academic performances in 19 medical schools (sample size N = 19). Reported test scores of combined institutional written and practical anatomy examinations were used as an objective measure of the level of academic performance in both methods of teaching. Table shows some descriptive statistics for the online and face-to-face teaching methods. Online teaching has a mean of 70.25 with SD 21.07, while face-to-face teaching has a mean of 71.57 with SD 15.24. To check whether these values differ significantly (α < 0.05), independent samples t tests were conducted (see Table ). Since the significance value is greater than 0.05, there were no differences in students' academic performances between the two methods of teaching. A Kolmogorov–Smirnov test showed that the data for both methods were normally distributed.
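For readers who wish to check the comparison in Section 3.4.2, the independent samples t test can be recomputed from the reported summary statistics alone. The sketch below is illustrative (the review used SPSS) and assumes equal variances and N = 19 medical schools per group, as reported above.

from scipy import stats

# Independent samples t test recomputed from the summary statistics reported in Section 3.4.2
# (mean per-school test scores; N = 19 medical schools in each group, equal variances assumed).
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=70.25, std1=21.07, nobs1=19,  # online teaching
    mean2=71.57, std2=15.24, nobs2=19,  # face-to-face teaching
    equal_var=True,
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p is far above 0.05: no significant difference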
3.5 Students' satisfaction 3.5.1 Qualitative analysis Table shows that 26 studies covering 31 medical schools compared students' satisfaction levels between online and traditional (face-to-face) anatomy teaching. Of those medical schools, 16% (n = 5) showed higher satisfaction in online than face-to-face teaching while 77.42% (n = 24) reported higher satisfaction in face-to-face than online teaching; 6.45% (n = 2) reported comparable satisfaction between the two methods (see Figure ). Satisfaction responses were subjective and collected using a five-point Likert scale questionnaire. Responses above 3 were defined as good satisfaction levels. For example, Khasawneh surveyed 2263 medical students in different medical schools who attended face-to-face teaching before the Covid-19 pandemic and online teaching during 1 year of the pandemic. The study revealed that 47.40% of medical students reported good satisfaction with online teaching and 78.12% with face-to-face teaching; 56.51% of students reported poor satisfaction with the online modality owing to the lack of high quality internet service or good electronic devices. Totlis et al. also surveyed 151 medical students who attended face-to-face teaching before the Covid-19 pandemic and online teaching during 1 year of the pandemic. They found that 56% reported good satisfaction with online teaching and 73.5% with face-to-face teaching. In contrast, Yang et al. surveyed 89 medical students, 45 attending online teaching and 44 face-to-face teaching. They found that 95.6% reported greater satisfaction with online rather than face-to-face teaching. Ortadeveci et al. surveyed 239 medical students who attended face-to-face teaching before the Covid-19 pandemic and online teaching during 1 year of the pandemic. They found that 12.1% reported good satisfaction with online teaching and 86.6% with face-to-face teaching. Also, 68.2% reported poor satisfaction with the online modality owing to technical issues and the poor quality of cadaveric images and anatomical specimens in the recorded videos. Banovac et al. found the mean satisfaction score for online and face-to-face teaching to be equal (4.04). Also, Harrell et al. found that laboratory dissections were rated as comparable in effectiveness to demonstration videos and video conference sessions. 3.5.2 Quantitative analysis Table shows eight studies providing quantitative data about student satisfaction levels with online and face-to-face teaching in eight medical schools (sample size N = 8). Satisfaction responses were subjective and collected using a five-point Likert scale questionnaire. Responses above 3 were defined as good satisfaction levels. Table shows some descriptive statistics for the online and face-to-face teaching methods. Online teaching has a mean of 3.95 with SD 0.5, while face-to-face teaching has a mean of 4.11 with SD 0.502. To check whether these values differ significantly (α < 0.05) from the neutral midpoint (3), a one sample t test was conducted with test value = 3 (see Table ). A Kolmogorov–Smirnov test showed that the data for both methods were normally distributed. From Table , since the significance value is less than the 0.05 level, the students were satisfied with both methods of learning, but satisfaction with face-to-face teaching was greater since its mean (4.11) was greater than the online mean (3.94).
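The one sample t tests in Section 3.5.2 can likewise be reconstructed from the reported means and standard deviations. The sketch below is illustrative only (the review used SPSS); it assumes two-sided tests, N = 8 medical schools per teaching method, and the Likert midpoint of 3 as the test value.

from math import sqrt
from scipy import stats

def one_sample_t(mean, sd, n, mu=3.0):
    # t statistic and two-sided p-value for a one sample t test against the test value mu,
    # computed from summary statistics only.
    t = (mean - mu) / (sd / sqrt(n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

for label, mean, sd in (("online", 3.95, 0.50), ("face-to-face", 4.11, 0.502)):
    t, p = one_sample_t(mean, sd, n=8)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")  # both p-values fall well below 0.05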
DISCUSSION This is a systematic review of learning outcomes in online and face-to-face anatomy teaching. The design and quality of the included studies were varied, ranging from cross sectional, cohort and case control series to randomized controlled trials, making systematic analysis by PICO (Population, Intervention, Comparator, Outcome) possible (see Table ). Additionally, the range of approaches in these studies added significant value to this review. These studies evaluated the learning outcomes of online and face-to-face anatomy teaching in medical schools of different backgrounds and cultures from a variety of countries throughout the world (see Table ). The characteristics of participants such as academic year, social status, age and gender across the intervention (online teaching) and comparator (face-to-face teaching) groups were not indicated in some studies (see Table ).
This may give rise to confounding factors that contribute to low or moderate risk of bias in those studies (see Tables , , , ). This review examined students' academic performances in online and traditional (face-to-face) anatomy teaching in medical schools, and student satisfaction with the two teaching methods. The analyzed records help to answer the research questions as follows: Are the students' academic performances higher or lower using online teaching rather than traditional (face-to-face) anatomy teaching? Nineteen studies showed test scores and evaluated the students' academic performances in online and face-to-face teaching. Test scores were objective and assessed by anatomy examinations specific to each medical school (see Tables and ). Ultimately, quantitative and qualitative analyses of these scores revealed no significant differences in students' academic performances between the two teaching methods. This is supported by the comparative study by Russell , a frequently cited source showing no significant differences between online and face-to-face teaching. This research included a fully indexed, comprehensive bibliography of 355 research reports, summaries and papers that document no significant differences in student performance between the alternative modes of education delivery. This finding is further supported by the work of Sussman and Dutter , which also showed no significant difference in levels of performance between face-to-face and online teaching. A meta-analysis by Ma and Nickerson further supported no significant difference in academic performance when comparing online science laboratories with face-to-face science laboratories. Collectively, the findings of these studies offer further validation that there is no significant difference in academic performance when comparing online learning and face-to-face learning. Are the students' satisfaction levels higher or lower using online teaching rather than traditional (face-to-face) teaching? Twenty-six studies evaluated the students' satisfaction responses in online and face-to-face teaching. Satisfaction responses were subjective and collected using a five-point Likert scale questionnaire. The qualitative data revealed that face-to-face teaching provides greater satisfaction than online teaching in most medical schools (see Table ). Quantitative analysis of satisfaction scores also showed greater student satisfaction with face-to-face than online teaching (see Tables and ). The authors of this review found that several factors influenced the higher student satisfaction reported for traditional (face-to-face) teaching. First, there were fewer technical issues in face-to-face teaching than in online teaching. Face-to-face teaching was perceived as more effective and accessible with fewer technical difficulties (Hanafy et al., ). In online teaching, several technical issues have been reported, including a lack of proper devices and poor internet connection quality and speed (Khasawneh, ). For example, Singal et al. found that 83% of medical students in India felt that lack of proper gadgets, high bandwidth and strong internet connections constituted a potential barrier to their digital learning. Second, face-to-face teaching provided more teacher-student interaction, which cannot be achieved by online teaching approaches such as pre-recorded videos. For example, Singal et al.
found that 65% of medical students in India agreed that they missed their traditional anatomy learning experiences such as dissection courses, face‐to‐face lectures and interaction with mentors. Also, according to Khan et al. , 32 female and 71 male medical students in India reported that lack of face‐to‐face interaction, non‐experiential learning, and adaptation to newer technology were the main barriers to online practical laboratory teaching. Third, anatomical structures were better visualized in face‐to‐face teaching than online, where the images were often of poor quality. For example, Srinivasan in Singapore found that although Zoom has basic annotation tools that a teacher can use to guide medical students around a visual display or explain a concept, it is not easy during e‐learning to perceive the 3D relationships among structures that are necessary for learning anatomy. Fourth, face‐to‐face teaching does not entail the eye strain caused by prolonged screen time in online teaching. Eye strain and headache suffered by the students during online classes interrupted their e‐learning (Sharma et al., ). These factors indicated that, compared with traditional teaching methods, online teaching in medical schools requires more planning as well as continuous and combined efforts to improve teaching quality, especially for anatomy, but could be a valuable response to unforeseen situations such as the Covid‐19 pandemic or natural disasters (Khasawneh, ). Can the online modality deliver the required anatomical knowledge to medical students efficiently? Can the online modality replace the traditional modality in anatomy teaching? Undoubtedly, human anatomy courses are described as “difficult to get and easy to forget” (Khasawneh, ). The best method for teaching anatomy therefore continues to be widely debated among medical educators. To date, no single teaching modality has been found to meet all requirements of the curriculum (Kerby et al., ). While analysis of the reviewed literature supports no significant difference in academic performance between online and face‐to‐face anatomy teaching, there is ample supportive evidence that face‐to‐face teaching cannot be replaced by online teaching. In Banovac et al. , 90% of medical students in Croatia found anatomical dissection and practical work in general to be the most important aspect of teaching, which could not be replaced by online learning. Also, in Cuschieri and Calleja , 36.05% of medical students in Malta claimed not to be sure if they had achieved their desired anatomy learning outcomes through online teaching. Additionally, they reported that the traditional dissection session offered unique opportunities, one major asset being the ability to observe and appreciate the 3D relationships of structures in cadavers and prosections. This was very difficult to illustrate in a dissection session viewed virtually, so this was not recommended for the post‐Covid‐19 era. In Duraes et al. , most medical students in France enjoyed the experience and thought the dynamic virtual dissection could be useful for anatomy learning; however, they did not think it could replace traditional dissection. Likewise, in Totlis et al. , medical students in Greece ranked online anatomy lectures and pre‐recorded anatomy lectures second in terms of effectiveness and preference. In Ortadeveci et al. , medical students in Turkey stated that cadavers and anatomical specimens in traditional anatomy education could not be replaced by distance anatomy education. 
Generally, medical students are mature learners who consider self-directed learning and online learning useful. Thus, they do their best to adapt to different conditions and alternative methods so that their exam scores and performances are not affected. However, in medicine, scores are not the ultimate goal. Strong basic knowledge prepares medical students for their clinical practice and future professions. For example, Khan et al. found that medical students in India stated that they missed cadaveric dissection and could not practice certain procedures such as percussion and auscultation properly, so they did not really understand how to perform them. Human anatomy is a foundational course that provides the first steps toward the clinical years and even to different specialties (Sbayeh et al., ; Turney, ). In addition, the interaction and communication skills during face-to-face dissection labs build professionalism and teamwork skills, and improve awareness of ethics in medicine (Palmer et al., ). Finally, this systematic review clearly revealed that academic performances in online and face-to-face teaching are comparable, but face-to-face teaching provided greater satisfaction. In addition, many medical students reported that cadaveric dissection could not be replaced by online methods, as it is still the core of anatomy teaching. It teaches a multidimensional understanding of the organization of the human body and trains students in spatial orientation and in the use of instruments, which foster their knowledge, practical skills and ultimately their clinical and surgical skills. Nevertheless, modern students find value in both online and face-to-face teaching methods, which could favor multi-modal teaching for human anatomy courses in medical schools. The online modality can deliver anatomical knowledge and improve students' performances if it is combined with face-to-face teaching (Elizondo-Omaña et al., ; Green & Whitburn, ; Roopesh Johnson et al., ), but it cannot substitute for it. Additionally, the online teaching modality frees more time for practical work and interaction in the face-to-face anatomical dissection laboratories, and for interactive sessions such as a clinical case-based approach, peer-assisted learning, model demonstrations, and student-led seminars that foster the integration of the basic sciences into clinical applications (Roopesh Johnson et al., ). These findings are beneficial for anatomy course designers and instructors. However, equivalent levels of achievement of learning outcomes in online and face-to-face teaching are important for course consistency and transferability, so many changes could be made to online anatomy teaching to improve the learning outcome levels.
4.1 Strengths and limitations of study
Online anatomy teaching is still adopted in some countries across the world post Covid-19 (Bashir et al., ; Memon et al., ). The broad detrimental effect of this on clinical practice has not yet been determined fully; further research and evaluation are needed. Major strengths of this review included the detailed search for different types of studies, mostly large-scale studies with long-term follow-up of more than 1 year and sample sizes of more than 100 medical students. Also, the data were extracted by two reviewers independently. Because of the variability among studies, we also assessed the risk of bias for the outcomes reported in the included articles.
Also, the current review included test scores as an objective measure of academic performance, which notably strengthened the comparison between the two methods of teaching. Further research could compare such objective outcomes with students' subjective perceptions. Another strength of this review was the use of mixed-methods analysis to provide a more complex basis for decision making than that currently offered by a single-method review, thereby maximizing its usefulness for policy decision makers (Stern et al., ). This review also has several limitations. The included studies mainly reported test scores from examinations specific to each school and did not use a standardized worldwide exam format. The validity of the different assessments used in the included studies could constitute a bias. Additionally, gender information was not easy to obtain in all the studies reviewed, but it is an important factor influencing the effectiveness of teaching (Bleakley, ). As more women than men enter medical schools worldwide, in time the medical workforce will comprise a majority of women doctors (Carvajal, ; Wolfe, ). This has been termed the "feminizing" of medicine (McKinstry, ). Such a feminizing of medicine can extend to medical education (Bleakley, ). Medical education currently suffers from male biases, and women are not represented adequately in medical education, although female medical students outperform male students (Bleakley, ). In future studies, the gender ratios in the experimental and control groups could be reported for analysis. Moreover, this systematic review included English publications only; important publications in other languages could have been overlooked. One more limitation was that none of the studies assessed the cost of setup and maintenance of the online teaching methods. Further research should compare the effectiveness of online anatomy education with traditional (face-to-face) education in a variety of settings and evaluate outcomes such as attitudes, adverse effects, and cost-effectiveness in medical schools.
CONCLUSION
The medical students of today are the health leaders of tomorrow. This should make medical education leaders invest more effort in identifying the best approaches to students' learning processes in order to achieve better learning outcomes and prepare them effectively for their future careers. In view of that, this review compared the levels of achievement of learning outcomes between two methods of teaching, online and face-to-face. Notable challenges have impacted anatomy education in medical schools, in addition to the huge digital transition in medical education, which has forced providers to consider online anatomy teaching as an alternative for medical students. Both the qualitative and the quantitative analyses undertaken in this study indicated that academic performance after online teaching is comparable to that after face-to-face teaching. On the other hand, students are more satisfied with face-to-face than with online teaching. Consequently, the authors conclude that online teaching can deliver anatomy knowledge but cannot replace the traditional method of teaching, especially cadaveric dissection. Thus, there is not a strict preference for one modality over the other. Online anatomy teaching enables convenience and flexibility for learning, while traditional (face-to-face) anatomy teaching enables interaction and the touching and manipulation of cadavers. A multi-modal learning approach that combines online with face-to-face teaching could therefore be efficient and successful. Online teaching can supplement face-to-face teaching as it frees more time for practical work in the traditional cadaveric dissection laboratories, and for interactive learning sessions such as case-based and problem-based learning sessions that foster the integration of the basic sciences into clinical applications. Technological difficulties, lack of student engagement, poor image quality, headache, and eye strain are among the commonest problems encountered in online teaching. In light of this, medical schools need to apply strategies to overcome the difficulties that have emerged with online teaching to improve students' satisfaction and perception.
Deriving mechanism‐based pharmacodynamic models by reducing quantitative systems pharmacology models: An application to warfarin
7b088f71-ef51-4f09-b8d3-cc20ce86e498
10088086
Pharmacology[mh]
A good understanding of the determinants of drug‐effect size is essential for optimal drug dosing on an individual level. This is of particular interest for the widely used anticoagulant warfarin, as it has a narrow therapeutic window and large interindividual variability (IIV) in drug concentration and effect. A standard measure for the effect of warfarin therapy is the international normalized ratio (INR), a normalized coagulation time. A higher‐than‐desired INR is associated with an increased risk of major bleeding events, whereas with a lower‐than‐desired INR, thromboembolic events cannot effectively be prevented. The large IIV complicates optimal individual dosing, causing more than 10‐fold differences in the dose requirement. Current approaches to dose individualization include regression‐based algorithms (predicting the maintenance dosing) , and pharmacokinetic (PK)/pharmacodynamic (PD) model‐based approaches (predicting the warfarin effect to optimize the dose). , , , , However, a large proportion of the variability observed in warfarin dose requirements is not yet explained by the identified covariates in current approaches. Therefore, identification of further covariates or better dose adaptation after early INR measurements is required, for which PK/PD model‐based approaches are better suited than regression‐based approaches. In PK/PD model‐based approaches, dose adaptation typically relies on updating the model parameters based on INR measurements. How to use biomarkers, such as concentrations of coagulation factor, to further improve the model predictions is not always apparent. In contrast, quantitative systems pharmacology (QSP) models are well suited to identify possible drug targets or useful biomarkers. For warfarin, two QSP models have previously been used to study the treatment effect on the INR. , QSP models can often be used to simulate different scenarios, for example, warfarin treatment as well as envenomation after a snake bite. In the context of analyzing clinical data, however, the complexity of QSP models prevents straightforward parameter estimation. To leverage the knowledge in large‐scale QSP models, it would be desirable to systematically derive small‐scale, mechanism‐based PD models suitable for the analysis of clinical trials in a nonlinear mixed effect or Bayesian statistical context. In this article, we extended the model‐reduction method in Knöchel et al. from pure state elimination to state and parameter elimination, simplification of reactions, and analytic solution of model parts. In addition, model reduction is performed for a diverse virtual population, accounting for the expected variability in real‐world data. To this end, we leverage concepts from parameter identifiability, reaction simplification, and robust model reduction. , The proposed model‐reduction approach maintains a user‐specified threshold on the approximation error of the response for at least 95% of the individuals in a diverse virtual population. In application to a blood coagulation QSP model, we obtained a small‐scale warfarin‐INR model that predicts the INR in terms of the product of three coagulation factors, which are indirectly inhibited by warfarin. Under random variability and genotype heterogeneity, the small‐scale warfarin/INR model maintains a prespecified approximation quality to the original QSP model. 
First, we describe the biological background and the blood coagulation QSP model, how it can be used to simulate the INR under warfarin treatment, and how we augmented the model to include variability. Then, we introduce the workflow to reduce the specific scenarios in the warfarin application and finally the general model-reduction process and how it builds on different reduction methods. The model reduction was implemented in MATLAB 2021a and is accessible from https://doi.org/10.5281/zenodo.7417886 .
Biological background
Warfarin acts by inhibiting vitamin K epoxide reductase complex 1 ( VKORC1 ), thereby decreasing the rate at which vitamin K hydroquinone (VKH 2 ) is synthesized in the vitamin K cycle. The reduction in VKH 2 decreases the synthesis of important coagulation factors (e.g., II, VII, IX, and X) and thereby the coagulability of the blood. The warfarin effect can be measured by taking a blood sample and performing a prothrombin time (PT) test. The PT test is a typical way to measure an anticoagulant effect, in which coagulation is induced artificially in a blood sample by adding a defined, high amount of tissue factor (TF). The duration until the blood coagulation starts is denoted as PT. The INR is then defined as (1) \( \mathrm{INR} = \mathrm{PT} / \mathrm{PT}_{\mathrm{ref}} \), where \( \mathrm{PT}_{\mathrm{ref}} \) denotes the PT of a control sample.
Simulating the INR with a QSP model
The starting point for our analysis was the blood coagulation QSP model from Wajima et al. (see Figure for an illustration). In the QSP model, a one-compartment PK model with oral absorption is used to simulate the warfarin concentration after multiple dosing. The QSP model does not consider the enantiomers in the racemic mixture separately. The warfarin effect on VKH 2 is modeled via a maximal effect ( E max ) model of the warfarin concentration. The PT, from which the INR is calculated by normalization, is defined by a threshold on the area under the curve (AUC) of fibrin ( F ): (2) \( \mathrm{PT} = \min\left\{ \tau \ge 0 : \int_0^{\tau} F(t)\, \mathrm{d}t \ge \delta \right\} \). The threshold δ = 1500 s·nmol/L had been determined empirically to correspond to a 30% reduction in fibrinogen in Wajima et al., which is physiologically plausible for clotting. It results in a reasonable response of \( \mathrm{PT} \approx 11 \) s for the reference parameterization in the absence of warfarin. To model the INR under warfarin therapy, the QSP model is used in two scenarios: (i) the in vivo scenario to predict the action of warfarin on the coagulation factors and (ii) the in vitro scenario to predict the PT. The state vector from the in vivo scenario at a given time, divided by three to account for dilution, serves as the initial value for the in vitro scenario; this corresponds to taking a blood sample. The different scenarios can be simulated with different sets of parameter values in the QSP model. The simulation of the INR under treatment of 4 mg daily is illustrated in Figure . The need to repeatedly simulate the QSP model in two disconnected scenarios makes the computation costly, which is especially relevant for parameter estimation.
Inclusion of IIV in the QSP model
We aim for a PD model that can describe a population's variability and approximates the QSP model also for individuals deviating from the reference (reference denoting the parameterization reported in the original QSP model). To consider the model reduction under variability, we augmented the blood coagulation QSP model to include IIV.
Because the reduced model is designed to guarantee error thresholds only for the considered population, we need to generate a diverse enough virtual population to cover a realistic variability. We first considered variability introduced by the genotypes of VKORC1 and cytochrome P450 isoenzyme 2C9 ( CYP2C9 ), by which warfarin is partly metabolized. Relative differences of the warfarin clearance parameter CL dependent on CYP2C9 genotype and of the warfarin sensitivity parameter IC 50 (the half maximal inhibitory concentration) dependent on VKORC1 genotype were adopted from a published PK/PD warfarin model ; see Supplementary Material Section for details. In addition, we considered random IIV on all parameters and initial values \( [q]_i \), independently distributed according to a log-normal distribution around the reference values \( q_{\mathrm{ref}} \): (3) \( [q]_i \sim \operatorname{LogN}\!\left( [q_{\mathrm{ref}}]_i,\ 0.4^2 \right) \). A virtual population was generated by deterministically choosing genotypes such that the allele frequencies matched those reported in Hamberg et al. for the Warfarin Genetics study and randomly sampling the parameter values according to Equation (3).
Mathematical model notation
The blood coagulation QSP model is defined by a system of ordinary differential equations (ODEs): (4) \( \frac{\mathrm{d}x(t)}{\mathrm{d}t} = f\!\left( x(t), p \right), \quad x(0) = x_0 + u \). Here, \( x(t) \in \mathbb{R}^n \) denotes the vector of state variables at time \( t \in [0, t_{\mathrm{end}}] \), and \( p \in \mathbb{R}^d \) denotes the vector of parameters. The initial value consists of the prestimulus state vector \( x_0 \) and the input/stimulus \( u \). The input is the warfarin dose history in the in vivo scenario and the addition of TF in the in vitro scenario. We later also consider the extended parameter vector \( q = (x_0, p) \in \mathbb{R}^{n+d} \). The model comprises a system of n = 62 ODEs defined by the function f ; different sets of parameter values allow the in vivo or in vitro settings to be simulated. The response of interest is defined by either (5) \( y(t) = h\!\left( x(t) \right) \) or (6) \( y = h\!\left( x(\cdot) \right) = \min\{\, t \ge 0 : x(t) \text{ fulfills some specific condition} \,\} \). Equation (5) is useful for the in vivo setting, where the function h would be the determination of the INR dependent on the current state vector x ( t ) (via the in vitro scenario). Equation (6) is useful for the in vitro setting, where the response is the PT as defined in Equation (2).
Workflow of reducing the scenarios considering IIV
Ultimately, we are interested in a model reduction for the combined scenario, as seen in Figure on the right. To this end, we first reduced the in vitro scenario separately (but with an ensemble of virtual blood samples obtained from the in vivo scenario) and then reduced the in vivo scenario with the reduced in vitro model as response h . The reduction workflow specific to the combined scenario for warfarin treatment is summarized in Figure . A virtual population including covariate-explained and random IIV was generated for the in vivo scenario as described above. We simulated with a fixed warfarin regimen of 4 mg daily for 30 days. To constrain the INR values to a clinically relevant range, we used a reduced dose of 1 mg daily for individuals for which the fixed dosing would lead to a steady-state INR above 4. From the virtual population, we generated a diverse ensemble of blood samples for the in vitro scenario by virtually sampling blood at days t = 1, 4, 11, 30. The timepoints were chosen to give a good representation of the different stages of warfarin treatment in addition to reflecting the IIV.
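For illustration, the generation of one virtual individual can be sketched in Python (a schematic only, not the MATLAB implementation used in this work; the reference values and the genotype multipliers for CL and IC50 shown below are invented placeholders, not the published covariate effects):

import numpy as np

rng = np.random.default_rng(0)

# Placeholder reference parameters (hypothetical values, not those of the QSP model)
q_ref = {"CL": 0.2, "IC50": 0.5, "V": 10.0}

# Hypothetical genotype multipliers: CL depends on CYP2C9, IC50 on VKORC1
cl_factor = {"*1/*1": 1.0, "*1/*2": 0.8, "*3/*3": 0.4}
ic50_factor = {"G/G": 1.0, "G/A": 0.8, "A/A": 0.6}

def sample_individual(cyp2c9, vkorc1, cv=0.4):
    """One virtual individual: log-normal IIV around the reference (Equation 3),
    followed by genotype-dependent scaling of CL and IC50."""
    q = {name: rng.lognormal(mean=np.log(value), sigma=cv) for name, value in q_ref.items()}
    q["CL"] *= cl_factor[cyp2c9]
    q["IC50"] *= ic50_factor[vkorc1]
    return q

# A population with one fixed genotype combination; in the actual workflow the genotypes
# are chosen deterministically to match the reported allele frequencies
population = [sample_individual("*1/*1", "G/A") for _ in range(1000)]

Here sigma=0.4 corresponds to the variance of 0.4^2 on the log scale in Equation (3).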
General automatic model-reduction procedure
The goal of the model reduction is to yield a model as simple as possible while ensuring that its response \( y_{\mathrm{red}} \) approximates the response \( y_{\mathrm{QSP}} \) of the original QSP model with a user-defined maximal approximation error. We determined the approximation quality dependent on a randomly sampled virtual population. To be more robust regarding the realization of the random parameters and to account for possible unphysiological individuals, we propose to require the error threshold to hold for only 95% of the population. Therefore, we accept a reduced model if, for at least 95% of the population, the maximal relative error is below a predefined threshold (here chosen to be 10%), that is, (7) \( Q_{0.95}\!\left( \max_{t} \frac{\left| y^{(i)}_{\mathrm{red}}(t) - y^{(i)}_{\mathrm{QSP}}(t) \right|}{\left| y^{(i)}_{\mathrm{QSP}}(t) \right|} \right) \le 0.1 \), where \( Q_{0.95} \) denotes the 95% sample quantile, and the superscript ( i ) refers to the i th individual. We have chosen the maximal relative error, but other error measures can also be used. As a postprocessing step, we suggest analyzing the individuals for which the threshold is not attained. If they appear physiologic but have an error only slightly larger than the threshold, the reduced model might still be deemed acceptable for the population. If the model is not deemed acceptable, a higher quantile can be used to ensure that the previously excluded but critical individuals are accounted for. In our model-reduction procedure, we differentiate between (i) model order reduction, in which the number of states/ODEs is reduced; and (ii) simplification of the functional form of reaction rates/ODEs; see Figure for an illustration. Both are fully automated, with the MATLAB Symbolic Toolbox used in the model simplification. Next, we describe both steps in detail.
Model order reduction
For model order reduction, we employed the method proposed in Knöchel et al. In the model order reduction, each state variable is classified as Environmental (env), that is, its dynamics are deemed unimportant and it is approximated by a constant equal to its initial value; Negligible (neg), that is, considered completely unimportant and set constant to zero; or Dynamic (dyn), that is, its dynamics are modeled by an ODE, as in the original QSP model. An iterative approach is used to determine the classification of the states, with the classifications of already assessed states constituting an intermediate reduced model that is updated after each further state is assessed. Each state variable is classified depending on the impact this would have on the approximation quality of the reduced model. If setting the state negligible or environmental meets the error criterion, the classification with the smaller error is accepted; otherwise, the state is classified dynamic. The states were ordered from lowest to highest importance using the sensitivity-based input-response (ir) indices from Knöchel and considered for reduction in that order. If the r th state is the response and the i th state the input, the ir-index (8) \( \mathrm{ir}_k(t^*) = \left( \frac{1}{t_{\mathrm{end}}} \int_{t^*}^{t_{\mathrm{end}}} S_{r,k}(t, t^*)^2 \, \mathrm{d}t \right)^{1/2} \cdot \left| S_{k,i}(t^*, t_0) \right| \) is defined as the product of two terms: the left term represents the impact of the k th state on the output and is averaged over the remaining time interval, and the right term represents the impact of the input on the k th state. The larger the ir-index, the more important the dynamics of the k th state variable for the ir relationship. If the response is not a state of the system but given by a response function h , the first sensitivity needs to be replaced by a term dependent on h .
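The greedy classification described above can be sketched as follows (a schematic in Python rather than the actual MATLAB implementation; the ir-indices and a routine evaluating the population error criterion of Equation (7) are assumed to be supplied):

def reduce_model_order(states, ir_index, population_error, threshold=0.1):
    """Greedy classification of states as negligible, environmental or dynamic.
    ir_index: dict mapping state name -> ir-index (Equation 8 or 9).
    population_error: assumed callable returning, for a candidate classification,
    the 95% population quantile of the maximal relative response error (Equation 7)."""
    classification = {s: "dyn" for s in states}               # start from the full model
    for state in sorted(states, key=lambda s: ir_index[s]):   # least important first
        admissible = {}
        for label in ("neg", "env"):                          # set to zero / to its initial value
            error = population_error({**classification, state: label})
            if error <= threshold:
                admissible[label] = error
        if admissible:                                        # accept the smaller-error option
            classification[state] = min(admissible, key=admissible.get)
        # otherwise the state keeps its ODE (remains dynamic)
    return classification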
For the general definition and the sensitivity \( S_{n,m}(t_2, t_1) \) of the n th state at time \( t_2 \) to the m th state at time \( t_1 \), see equations and in Supplementary Material Section . We performed the model order reduction with two important extensions compared with Knöchel et al. : (i) for the error criterion, we considered a virtual population (see Equation (7)) instead of a single reference parametrization; and (ii) we extended the definition of the ir-indices to event-type response functions. For the application to the in vitro scenario, the ir-indices of the k th state with event-type response are defined as (9) \( \mathrm{ir}_k(t^*) = \left[ f\!\left( x_{\mathrm{ref}}(\mathrm{PT}_{\mathrm{ref}}) \right) \right]_r^{-1} \left| S_{r,k}(\mathrm{PT}_{\mathrm{ref}}, t^*) \right| \cdot \left| S_{k,i}(t^*, t_0) \right| \), where the subscript r refers to the AUC of fibrin and the subscript i to TF. For details of the derivation, see Supplementary Material Section .
Model simplification
In the model simplification step, we further simplified the ODEs of the dynamic states. This included parameter reduction and simplification of the functional form of the reaction rates. For parameter reduction, we measured the importance of a parameter \( [p]_j \) for the response by the parameter sensitivity (10) \( P_j = \left( \frac{1}{t_{\mathrm{end}}} \int_0^{t_{\mathrm{end}}} \left( \frac{\partial h(x(\cdot))(t)}{\partial [p]_j} \right)^2 \mathrm{d}t \right)^{1/2} \). We first ordered the parameters from the lowest to the highest parameter sensitivity. Then, a parameter was neglected if, after the neglection, the threshold in Equation (7) was still attained. This procedure is similar to the model order reduction procedure, in which the states were iteratively considered for neglection or elimination, ordered by their ir-indices. Neglection is tested by setting the parameter to zero, or, if zero-neglection does not meet the error bound, to infinity. The parameter reduction can simplify the reactions of the remaining state variables significantly, as reaction rates drop out if they contain a multiplicative factor that is neglected by setting it to zero or a divisor that is neglected by setting it to infinity. Note that the same holds for reaction rates in which a neglected state is a multiplicative factor. This applies to all ODEs that contain the respective reaction rates. A typical source of nonlinearity in QSP models are Michaelis-Menten reaction rates (11) \( r = \frac{V_{\max} \cdot [\mathrm{S}]}{K_M + [\mathrm{S}]} \), which interpolate between a linear dependence on [S] and constant behavior. If \( [\mathrm{S}] \gg K_M \) over the time span of interest, then \( K_M \) can be set to zero in the parameter reduction as this introduces only a slight error, and the reaction rate thus becomes constant, simplifying the ODEs in which the rate appears. To simplify ODEs with Michaelis-Menten kinetics also in the case that \( [\mathrm{S}] \ll K_M \), we considered the remaining reaction rates for Taylor approximation in \( x_0 \), which linearizes the Michaelis-Menten kinetics if \( [\mathrm{S}]_0 = 0 \). Note that if one reaction rate is part of multiple ODEs, they are considered separately, ODE-wise; however, this did not occur in our application. As with the other reduction steps, the simplification by Taylor approximation was only realized if this did not violate the error criterion (Equation (7)). After the model-reduction procedure, we evaluated whether a reduced model could be solved analytically, and in the in vitro scenario we conducted a postprocessing step to obtain an analytic solution for the INR.
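In the same greedy spirit, the parameter-reduction step can be sketched as follows (again a schematic in Python, with the parameter sensitivities of Equation (10) and the population error check of Equation (7) assumed to be supplied; setting a parameter to infinity is emulated here with math.inf):

import math

def reduce_parameters(params, sensitivity, population_error, threshold=0.1):
    """Greedy parameter neglection, ordered from least to most sensitive parameter.
    params: dict of parameter name -> reference value.
    population_error: assumed callable returning the 95% population quantile of the
    maximal relative response error (Equation 7) for the given parameter values."""
    values = dict(params)
    for name in sorted(params, key=lambda p: sensitivity[p]):   # least sensitive first
        for neglected in (0.0, math.inf):                       # try zero first, then infinity
            if population_error({**values, name: neglected}) <= threshold:
                values[name] = neglected                        # accept the neglection
                break
    return values

For the Michaelis-Menten rate of Equation (11), for example, an accepted neglection \( K_M \to 0 \) turns the rate into the constant \( V_{\max} \), whereas the first-order Taylor expansion around \( [\mathrm{S}]_0 = 0 \) turns it into the linear rate \( (V_{\max}/K_M) \cdot [\mathrm{S}] \).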
We applied our model-reduction approach to a QSP model of the effect of warfarin on blood coagulation (see also Figures and for illustrations).
Model-reduction results for in vitro and in vivo scenarios
The first step to obtaining a reduced model for the original QSP model with 62 ODEs and 174 parameters is to apply the model order reduction. The resulting reduced order models for the in vitro and in vivo scenarios, each involving six ODEs, are shown in Figure . The states of the reduced in vitro model are colored blue, and the states of the reduced in vivo model are colored orange, with the dynamic states (those modeled by an ODE) in darker shades and the environmental states (set constant to their initial value) in lighter shades. As a result of the parameter reduction, 8 and 13 parameters remained in the reduced in vitro model and reduced in vivo model, respectively, compared with 174 parameters each in the original QSP model. Of the 13 parameters in the in vivo model, three are synthesis rates and are determined via parameter interdependencies by the prestimulus concentrations as stated in Supplementary Material Section , leaving 10 actual parameters. After checking for linearization, only linear reactions remained in the reduced in vitro model, whereas the reduced in vivo model was not further simplified. In the virtual ensemble of blood samples, the reduced in vitro model approximates the original QSP model with the INR as response in accordance with the error criterion (Equation (7)) in ≥99.8% of cases. In the virtual population, the reduced in vivo model approximates the QSP model with the INR equation as response in accordance with the error criterion (Equation (7)) in 100% of the cases. As the reduced in vitro model consisted only of linear ODEs, it allowed for an analytic solution; see Supplementary Material Section for solutions for all states. Specifically, the analytic solution for the concentration of fibrin, on which the PT definition (Equation (2)) directly depends, is given by (12) \( F(t) = \mathrm{II}_0 \cdot \mathrm{VII}_0 \cdot \mathrm{X}_0 \cdot \mathrm{Fg}_0 \cdot c_1 \cdot q(t) + p(t) \cdot \exp(-c_2 t) \), where \( c_1, c_2 \) are positive constants, \( \mathrm{Fg}_0 \) is the initial fibrinogen concentration, and p ( t ) and q ( t ) are cubic and linear polynomials as a function of the in vitro time t . Recall that the initial concentrations \( \mathrm{II}_0 \), \( \mathrm{VII}_0 \), and \( \mathrm{X}_0 \) in a blood sample were obtained from an individual's concentrations \( \mathrm{II}^i(t^*) \), \( \mathrm{VII}^i(t^*) \), \( \mathrm{X}^i(t^*) \) at some in vivo time t *. Notably, the fibrin concentration depends on the three warfarin-dependent coagulation factor concentrations only via their product \( \mathrm{II}_0 \cdot \mathrm{VII}_0 \cdot \mathrm{X}_0 \). The results presented in this section were obtained using the automatic model-reduction procedure. The insights gave rise to the manual approximation presented in the next section.
An algebraic INR equation
As the INR depends on fibrin via Equation (2), it can only depend on the coagulation factors via their product \( \mathrm{II}_0 \cdot \mathrm{VII}_0 \cdot \mathrm{X}_0 \).
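This product dependence can be made concrete with a toy computation of the PT readout of Equation (2) (a Python sketch; the linear fibrin shape and the constant c1 are invented placeholders, and only the proportionality to the initial factor product is taken over from Equation (12)):

import numpy as np

DELTA = 1500.0   # fibrin AUC threshold, s*nmol/L, as in the QSP model

def prothrombin_time(fibrin, t_end=120.0, dt=1e-3):
    """PT = first time at which the running AUC of fibrin reaches DELTA (Equation 2)."""
    t = np.arange(0.0, t_end, dt)
    auc = np.cumsum(fibrin(t)) * dt          # simple left-endpoint quadrature
    hit = np.nonzero(auc >= DELTA)[0]
    return t[hit[0]] if hit.size else np.nan

def fibrin_curve(ii0, vii0, x0, c1=25.0):
    """Hypothetical in vitro fibrin curve that, like Equation (12), depends on the
    (normalized) initial factor concentrations only through their product."""
    product = ii0 * vii0 * x0
    return lambda t: c1 * product * t        # placeholder shape; only the scaling matters

pt_ref = prothrombin_time(fibrin_curve(1.0, 1.0, 1.0))    # untreated reference sample
pt_trt = prothrombin_time(fibrin_curve(0.8, 0.5, 0.75))   # factors reduced under warfarin
print("INR =", pt_trt / pt_ref)                           # depends only on the product 0.3

Redistributing the same reduction among the three factors leaves the PT, and hence the INR, unchanged, which is exactly the observation exploited in the next step.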
We can thus represent the individually normalized INR as a function of the individually normalized factor concentration product to characterize the effect of warfarin. Plotting the normalized INR against the normalized product in a log-log plot in Figure , we find that the individual INR is well approximated in the most relevant INR range of 2 to 3 by (13) \( \frac{\mathrm{INR}^i(t)}{\mathrm{INR}^i(0)} = \left( \frac{\mathrm{II}^i(t)}{\mathrm{II}^i(0)} \cdot \frac{\mathrm{VII}^i(t)}{\mathrm{VII}^i(0)} \cdot \frac{\mathrm{X}^i(t)}{\mathrm{X}^i(0)} \right)^{\gamma} \), with the exponent chosen to be γ = −0.1975. The reduced in vivo model and the INR equation can be combined to yield a small-scale warfarin/INR model as the reduced model for the combined scenario.
Small-scale warfarin/INR model has good approximation quality under IIV
The small-scale warfarin/INR model (see Figure ) simulates the combined scenario by accounting for the effect of warfarin on the coagulation Factors II, VII, and X via inhibition of VKH 2 and translating the relative reduction in the product of the coagulation factor concentrations into an increase in the INR. The small-scale warfarin/INR model consists of 11 parameters (10 from the in vivo model and the exponent in the INR equation). We evaluated the INR approximation quality of the small-scale warfarin/INR model for a virtual population including genotype-induced and unexplained random IIV. While during the model reduction only either the in vitro or the in vivo scenario was considered at a time, we now evaluate the approximation quality of the small-scale warfarin/INR model to the QSP model for the combined scenario. Figure (top) shows the INR simulated with the QSP model versus the small-scale warfarin/INR model for the virtual population. The very good approximation quality shows that the small-scale warfarin/INR model robustly predicts the INR in a heterogeneous population. In addition, we assessed the approximation in INR-time profiles for different genotypes of CYP2C9 and VKORC1 . The comparisons in Figure (bottom) show simulations of fixed warfarin dosing (4 mg daily) for different genotypes. The small-scale warfarin/INR model approximates the QSP model well for all genotypes. The poorest approximation quality is observed for larger INRs ( CYP2C9*3/*3 simulation in Figure [bottom left]); however, the relative error is still below 10%. Moreover, in clinical practice, the dose would be reduced when such a high INR is observed, and simulating the same individual with a smaller dose resulted in a much improved approximation. The small-scale warfarin/INR model allows the warfarin effect on the INR to be determined computationally much more efficiently, as all additional simulations of the QSP model in the in vitro scenario are avoided.
Biomarker proposal to predict steady-state INR early
In the small-scale warfarin/INR model, the INR is calculated from the relative concentrations of the coagulation Factors II, VII, and X; therefore, it is natural to consider them in the search for useful biomarkers. Assume that the relative reductions from their pretreatment values \( \mathrm{II}_0 \), \( \mathrm{VII}_0 \), \( \mathrm{X}_0 \) to their steady-state values \( \mathrm{II}_{ss} \), \( \mathrm{VII}_{ss} \), \( \mathrm{X}_{ss} \) are related via (14). This allows the steady-state \( \mathrm{INR}_{ss} \) to be predicted from the INR equation (Equation (13)) as (15) \( \mathrm{INR}_{ss} = \mathrm{INR}_0 \cdot \left( a \cdot b \cdot r_{ss} \cdot r_{ss} \cdot r_{ss} \right)^{\gamma} \). Of note, in the QSP model, it is assumed that a = b = 1. Any of the three factors can be used to determine \( r_{ss} \) once it is in steady state.
Because Factor VII has the shortest half-life (~ 6 h), it will adapt fastest, much faster than Factors II (~ 69 h) and X (~ 39 h), which suggests using (16) \( r_{ss} = \frac{\mathrm{VII}_{ss}}{\mathrm{VII}_0} \) to predict the steady-state INR value from measurements of VII . Importantly, Factor VII measurements already account for the interindividual differences in the vitamin K cycle and in the PK and thus allow their impact on the steady-state INR to be assessed. At early timepoints, measuring the Factor VII concentrations and calculating the expected steady-state INR using Equations (15) and (16) should be more informative for adapting the warfarin dose than only INR measurements.
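A minimal sketch of this biomarker-based prediction, assuming, as in the QSP model, equal relative reductions of the three factors (a = b = 1) and the exponent γ = −0.1975 of Equation (13); the measured values in the example are invented for illustration:

GAMMA = -0.1975   # exponent of the INR equation (Equation 13)

def predicted_steady_state_inr(inr_0, vii_0, vii_ss, a=1.0, b=1.0):
    """Predict the steady-state INR from a Factor VII measurement (Equations 15 and 16).
    a and b account for deviations of the other factors' relative reductions from r_ss;
    the QSP model assumes a = b = 1."""
    r_ss = vii_ss / vii_0                        # relative reduction of Factor VII (Equation 16)
    return inr_0 * (a * b * r_ss ** 3) ** GAMMA  # Equation 15

# Hypothetical example: baseline INR of 1.0 and Factor VII reduced to 30% of its
# pretreatment concentration, giving a predicted steady-state INR of about 2.0
print(predicted_steady_state_inr(inr_0=1.0, vii_0=100.0, vii_ss=30.0))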
We introduce a model-reduction method that allows a small-scale warfarin/INR model to be derived from a blood coagulation QSP model. The small-scale warfarin/INR model calculates the INR via a linear relationship in log-log space from the concentrations of Factors II, VII, and X. The INR prediction of the small-scale warfarin/INR model approximates the INR prediction of the QSP model to within 10% for more than 99% of a diverse virtual population. Without clinical data, but solely based on the small-scale warfarin/INR model, we identified Factor VII as a possible biomarker. The model-reduction method, including model order reduction and model simplification, is fully automatic and can be applied to other QSP models as well. The INR equation was derived manually using the automatically obtained analytic equation for fibrin.

Application of the method requires defining an input (e.g., the dosing history) and an output function (e.g., the effect) for the model as well as generating a virtual population; together they define the scenario in which the model can be applied. Different scenarios typically require different reduced models; in this article, we concentrated on a scenario modeling warfarin treatment and subsequent INR measurements. Importantly, it is based on the common PT test and a range of INR values below 4; the virtual population includes polymorphisms in CYP2C9 (*1, *2, *3) as well as polymorphisms in VKORC1 (A, G).

Many structurally different warfarin PK/PD models are available in the literature, which makes it difficult to judge which of them to use. A QSP model as a starting point, however, makes the underlying processes and assumptions explicit, so they can be discussed and tested. Similar to our model, existing empirical warfarin PK/PD models account for factor concentration-time courses (although not always explicitly), mostly via an Emax model for inhibition. They subsequently translate decreased factor concentrations into increased INRs via an INR equation. Our mechanism-based INR representation reinforces the interpretation that the model components in Hamberg et al. (the two parallel transit chains) represent relative concentrations of Factors II and VII. Moreover, our INR equation derivation shows that the INR equations represent the in vitro processes in the PT test. The small-scale warfarin/INR model is also comparable with empirical PK/PD models in terms of the number of structural parameters (11 parameters compared with 12 parameters in Hamberg et al., 11 parameters in Xue et al., and nine parameters in Ohara et al.). For a better comparison, we excluded parameters related to covariate effects, as the models include a different number of covariates. A linear relationship in log-space between the INR and coagulation factor concentrations, as we have identified in our model, has previously been observed under stable therapy with acenocoumarol, another vitamin K antagonist. Factor VII has been proposed as a biomarker in warfarin treatment before. Also for the vitamin K antagonist acenocoumarol, the variability in steady-state INR was well explained by Factor VII concentrations alone.
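To make the empirical model class discussed above concrete, the following sketch combines a one-compartment warfarin PK model, Emax inhibition of factor synthesis, first-order factor turnover with the half-lives quoted in the text, and the log-linear INR equation. It is a generic illustration with hypothetical PK/PD parameter values, not the published Hamberg, Xue, or Ohara parameterization and not the reduced model of this article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical PK/PD values for illustration only
ka, ke, V = 1.0, 0.05, 10.0            # absorption rate (1/h), elimination rate (1/h), volume (L)
IC50, Imax = 0.5, 1.0                  # inhibition of factor synthesis (mg/L, -)
kdeg = {"II": np.log(2) / 69, "VII": np.log(2) / 6, "X": np.log(2) / 39}  # from the half-lives
gamma = -0.1975                        # exponent of the INR equation

def rhs(t, y):
    gut, central, f2, f7, f10 = y
    conc = central / V
    inhibition = 1.0 - Imax * conc / (IC50 + conc)     # Emax inhibition of synthesis
    return [
        -ka * gut,
        ka * gut - ke * central,
        kdeg["II"] * (inhibition - f2),                # relative Factor II turnover
        kdeg["VII"] * (inhibition - f7),               # relative Factor VII turnover
        kdeg["X"] * (inhibition - f10),                # relative Factor X turnover
    ]

dose_mg, times = 4.0, np.linspace(0, 96, 97)
sol = solve_ivp(rhs, (0, 96), [dose_mg, 0.0, 1.0, 1.0, 1.0], t_eval=times, max_step=0.5)
inr = 1.0 * (sol.y[2] * sol.y[3] * sol.y[4]) ** gamma  # log-linear INR equation, baseline INR = 1
print(inr[-1])                                         # INR 96 h after a single 4 mg dose
```

The point of the sketch is the structure (PK, inhibition of synthesis, factor turnover, INR equation), not the numbers; repeated dosing and genotype effects on clearance or IC50 would be layered on top of it.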
Based on our small-scale warfarin/INR model, we can offer a simple equation to improve the early prediction of the steady-state INR from steady-state Factor VII concentrations. Notably, the simple equation helps to make explicit the assumptions (e.g., about the factors' relative reduction from pretreatment to steady-state values) under which we expect this approach to give good results. The impact of violating the assumptions can easily be examined, and the assumptions might be weakened; for example, we do not require the factors' relative reductions to be the same, but only that their ratio is known. The impact of dose individualization based on the biomarker relationship identified in this analysis, as well as the feasibility and cost-effectiveness of performing these measurements in clinical practice, remains to be evaluated.

To reduce the QSP model to a small-scale PD model enabling parameter estimation and accounting for parameter variability, we combined multiple reduction approaches: state and parameter reduction, reaction reduction, and robust model reduction. The combination of different reduction approaches is essential to reduce QSP models with different properties. Lumping and time scale separation can easily be included in our approach but were not used in this analysis, as they did not substantially improve the model reduction. Another method to reduce complex rational rate expressions is term-based identifiability analysis. Because the warfarin example included only linear or Michaelis-Menten-type reaction rates, the simpler approach of reaction reduction by first-order Taylor approximation was used and resulted in a very good approximation. In contrast to reducing the blood coagulation QSP model using a neural network approach, the model-reduction approach presented in this article is fully parametric, enabling biomarker identification.

Variability was introduced into the QSP model by considering different CYP2C9 and VKORC1 genotypes and randomly distributed parameters according to a log-normal distribution with a 40% coefficient of variation. We assumed parameter variability to be uncorrelated due to the lack of more detailed knowledge on the correlation structure. This, however, is not a required feature of the virtual population, and correlated parameter variability can be included depending on the state of knowledge. The population and input should be chosen such that the parameter sets represent therapeutically relevant scenarios, because the reduced model guarantees an error threshold only for the considered population. If the population includes unrealistic parameter sets or inputs (e.g., dosing history), this might unnecessarily impair the reducibility. In the model reduction, the approximation quality was assessed in a virtual population of 1000 individuals. The resulting virtual population-based reduction approach to account for variability is similar to that of Dokoumetzidis and Aarons. However, instead of restricting the approximation error only for the population mean, we focused on individual prediction and chose to ensure an acceptable approximation for at least 95% of the virtual population. Using the 95% threshold, we addressed the existence of possible unphysiological parameter combinations, and thus unphysiological responses, in the virtual population, which is a known problem in the automatic generation of virtual populations (see, e.g., Duffull and Gulati).
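The population-based acceptance criterion can be sketched in a few lines: draw parameter vectors log-normally around their reference values with a 40% coefficient of variation, compute the relative INR error between the full and the reduced model per virtual individual, and accept the reduction if at least 95% of individuals stay within the 10% error bound. The two model functions below are toy stand-ins for the actual QSP and reduced-model simulations, and the reference parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
cv = 0.40                                  # 40% coefficient of variation
sigma = np.sqrt(np.log(1.0 + cv**2))       # corresponding log-normal scale parameter

def sample_parameters(reference, n):
    """n uncorrelated log-normal parameter vectors with medians at 'reference'."""
    ref = np.asarray(reference, dtype=float)
    return ref * rng.lognormal(mean=0.0, sigma=sigma, size=(n, ref.size))

# Toy stand-ins for the INR predicted by the full QSP model and by the reduced model
def inr_qsp(p):
    return 1.0 + 0.5 * p.sum() + 0.02 * p.prod()

def inr_reduced(p):
    return 1.0 + 0.5 * p.sum()

def reduction_acceptable(reference, n=1000, tol=0.10, coverage=0.95):
    """Accept the reduced model if at least 'coverage' of the virtual population
    has a relative INR error of at most 'tol' (the criterion used in the text)."""
    params = sample_parameters(reference, n)
    full = np.array([inr_qsp(p) for p in params])
    reduced = np.array([inr_reduced(p) for p in params])
    relative_error = np.abs(reduced - full) / full
    return np.mean(relative_error <= tol) >= coverage, relative_error

accepted, errors = reduction_acceptable(reference=[0.2, 0.1, 0.05])
print(accepted, np.percentile(errors, 95))
```

Returning the per-individual errors, not just the acceptance flag, makes it possible to inspect the excluded individuals afterwards.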
Consequently, we suggest examining, a posteriori, the characteristics of the excluded virtual individuals to avoid excluding critical but uncommon individuals. In our case, the obtained small-scale warfarin/INR model actually attains the error threshold for more than 99% of the population. We judged the excluded individuals (5 of 1000) as uncritical because they are still relatively well approximated, with errors between 10% and 13%. QSP and physiologically based PK models have previously been used to predict individual outcomes, for example, the INR or drug exposure. A small-scale model that predicts the response well for a diverse population also enables dose adaptation using full Bayesian updating. The reduced model, together with the included random unexplained IIV, could either directly act as a prior to estimate individual posterior parameters or first have its parameters reestimated for given data. By systematically deriving mechanism-based PD models from QSP models, we bridge the gap between mechanistic and empirical modeling and take a step toward exploiting QSP models to guide dose adaptation within model-informed precision dosing.

U.F., J.K., C.K., and W.H. designed the research. U.F. performed the research. U.F., J.K., and W.H. analyzed the data. U.F., J.K., C.K., and W.H. wrote the manuscript.

Funding was provided by the graduate research training program PharMetrX: Pharmacometrics & Computational Disease Modeling, Berlin/Potsdam, Germany. Funded by the Deutsche Forschungsgemeinschaft (German Research Foundation), Projektnummer 491466077.

C.K. and W.H. report research grants from an industry consortium (AbbVie Deutschland GmbH & Co. KG, AstraZeneca, Boehringer Ingelheim Pharma GmbH & Co. KG, Grünenthal GmbH, F. Hoffmann-La Roche Ltd., Merck KGaA, and SANOFI) for the PharMetrX program. In addition, C.K. reports research grants from the Innovative Medicines Initiative-Joint Undertaking ("DDMoRe") and Diurnal Ltd. C.K. reports grants from the Federal Ministry of Education and Research within the Joint Programming Initiative on Antimicrobial Resistance. J.K. is an employee of AstraZeneca and owns stock in AstraZeneca. U.F. declared no competing interests for this work.

Appendix S1.
Utility of AMACR immunohistochemical staining in differentiating Arias-Stella reaction from clear cell carcinoma of ovary and endometrium
02c33a69-142f-4333-a43e-81e1022c0108
10088205
Anatomy[mh]
Arias-Stella reaction (ASR) is a reactive phenomenon in the endometrium that occurs due to exposure to high-dose estrogen or progesterone during pregnancy or gestational trophoblastic disease, or secondary to hormone administration. The phenomenon was first described by Javier Arias-Stella in 1954 as a pseudoneoplastic glandular response of the female genital tract to excess sex hormones. ASR has five well-known variant patterns: minimal atypia; early secretory; secretory or hypersecretory; regenerative or proliferative (also known as nonsecretory); and the monstrous cell pattern. ASR can present with varying degrees of cytomegaly along with cytoplasmic clearing, vacuolization, nuclear enlargement, hyperchromasia, and intraglandular papillary change, as well as hobnailing.

Endometrial clear cell carcinoma (CCC) is an uncommon variant of endometrial carcinoma. CCC resembles clear cell carcinoma of the ovary and cervix. Based on the World Health Organization (WHO) classification of gynecological neoplasms, CCC should be diagnosed mainly on histomorphological criteria. The typical presentation of CCC includes cuboidal, polygonal or hobnail cells with a clear-to-eosinophilic cytoplasm, arranged in a tubulo-cystic, papillary, or solid architecture. Morphological features of CCC may considerably overlap with those of ASR, which may make their distinction challenging. ASR diagnosis is usually uncomplicated in young pregnant patients. In contrast, ASR diagnosis can be difficult in postmenopausal patients and in patients who receive exogenous progestins for known endometrial hyperplasia. Immunohistochemistry (IHC) can be helpful in such difficult cases. Currently, HNF1β, Napsin A and alpha-methylacyl-CoA racemase (AMACR) are the main suggested immunohistochemical markers for differentiating endometrial and ovarian CCCs. The expression of Napsin A and HNF-1β is high in ASR, so these markers were not helpful in separating ASR from CCC.

AMACR, or p504s, is an evolutionarily conserved enzyme that is important in branched-chain fatty acid metabolism. AMACR was first detected by cDNA library subtraction combined with high-throughput microarray screening performed on normal and cancerous prostate tissues. The anti-AMACR antibody was soon found to be a sensitive and specific tumor marker in prostate cancers, and the most common application of anti-AMACR in routine practice is detecting prostatic adenocarcinoma. However, AMACR expression has been reported in extraprostatic neoplasms and benign prostatic processes. AMACR is also overexpressed in ovarian CCC, at higher levels than in other types of epithelial tumors. AMACR expression has not been fully evaluated in ASR, but a few studies have reported that ASR was associated with negative or low AMACR expression. Therefore, the objective of this study was to investigate AMACR expression in ASR and CCC and to evaluate its potential, as an IHC marker, in distinguishing ASR from CCC.

Study design
This cross-sectional study was approved by the Ethics Committee of the Tehran University of Medical Sciences (IR.TUMS.IKHC.REC.1400.385). The study was conducted in Imam Khomeini Hospital, Tehran, Iran from March 2015 to March 2021.

Study population
The electronic records of the Imam Khomeini Hospital Pathology Departments were screened to identify eligible patients. Patients with a pathological diagnosis of CCC, preferably of the endometrium, and ASR were included in this study.
The diagnosis was confirmed by two pathologists by evaluating all hematoxylin and eosin slides together with the corresponding previous IHC studies. Representative slides were then selected, and the IHC study was performed on the related paraffin blocks. The clinicopathological variables were obtained from the corresponding histopathology reports, either via the Laboratory Information System or from surgical department records. Each patient was given a unique code to ensure the anonymity of the patient data. Blocks with inadequate tissue for IHC and those with incomplete medical records were excluded from the study.

IHC study
IHC staining was performed using a rabbit monoclonal anti-human AMACR/p504s antibody plus a mouse monoclonal anti-human p63 antibody, prepared in 10 mM PBS, pH 7.4, with 0.2% BSA and 0.09% sodium azide. Acinar adenocarcinoma of the prostate was used as the positive control. After deparaffinization and rehydration, the sections were subjected to heat-induced antigen retrieval. Immunostaining was performed according to the manufacturer's standard protocol (Master Diagnostica, Spain).

IHC staining interpretation
Semiquantitative scoring of the IHC staining was performed on the anonymized samples with a light microscope at high magnification (400x) using a 4-tiered system. Two pathologists (F.A and M.S), blinded to the clinicopathologic parameters and outcomes of the patients, independently evaluated the samples. Scoring was based on the overall stain intensity and the percentage of stained lesional cells. The intensity score reflected the estimated staining intensity: 0 (no staining), 1 (weak), 2 (moderate), and 3 (strong). The percentage score reflected the estimated fraction of positively stained lesional cells: 0 (none), 1 (1-5%), 2 (6-49%), and 3 (50-100%). The sum of the intensity and percentage scores, defined as the immunoreactive score (IRS), ranged from 0 to 6. Positive expression was defined as a total IRS exceeding 2. In case of discordance in the staining scores, the issue was resolved by consensus between the two pathologists.

Statistical analysis
Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) software, version 16. Normality of continuous data was evaluated using the Kolmogorov-Smirnov test. Descriptive statistics were reported as mean and standard deviation (SD) for normally distributed variables or median and interquartile range (IQR) for non-normally distributed variables. Frequency and percentage were used to report categorical variables. Fisher's exact or Monte Carlo tests were used to compare the distribution of categorical variables between diagnosis categories. The independent t-test or Mann-Whitney test was used to compare the mean or median values of continuous variables between diagnosis categories, depending on the normality of the data. Binary logistic regression was used to evaluate the relationship between the study variables and diagnosis, with diagnosis category as the dependent variable and the other study variables as independent variables. Receiver operating characteristic (ROC) curve analysis was used to evaluate the ability of AMACR to differentiate CCC from ASR. The level of statistical significance was set at p < 0.05.
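The scoring rule above is simple enough to express in a few lines of code, which can be handy when tabulating results. The sketch reproduces the scheme as described (intensity score 0-3 plus percentage score 0-3, IRS range 0-6, positive if IRS exceeds 2); the example values at the end are invented for illustration and are not taken from the study data.

```python
def percentage_score(percent_positive_cells: float) -> int:
    """Map the percentage of stained lesional cells to the 0-3 percentage score."""
    if percent_positive_cells == 0:
        return 0
    if percent_positive_cells <= 5:
        return 1
    if percent_positive_cells <= 49:
        return 2
    return 3

def immunoreactive_score(intensity: int, percent_positive_cells: float) -> int:
    """IRS = intensity score (0-3) + percentage score (0-3), giving a range of 0-6."""
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity score must be 0, 1, 2 or 3")
    return intensity + percentage_score(percent_positive_cells)

def amacr_positive(irs: int) -> bool:
    """Positive AMACR expression was defined as an IRS exceeding 2."""
    return irs > 2

# Invented example: moderate staining (2) in 30% of lesional cells (score 2) -> IRS 4, positive
irs = immunoreactive_score(intensity=2, percent_positive_cells=30)
print(irs, amacr_positive(irs))
```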
A total of 107 samples (57 CCC and 50 ASR samples) were evaluated. All ASR samples were endometrial curettage specimens, whereas the CCC group included 28 endometrial CCC (ECCC: 9 biopsy and 19 hysterectomy specimens) and 29 ovarian CCC (OCCC: all oophorectomy specimens). The mean age of the patients was 46.37 ± 15.52 years. The mean ages of the patients in the ASR and CCC groups were 33.34 ± 6.36 and 57.81 ± 11.64 years, respectively, and the difference in age between the CCC and ASR groups was significant (p < 0.001). The mean age of patients with OCCC (49.9 years) was significantly lower than that of patients with ECCC (61.7 years) (p = 0.00). The prevalence of endometrial and ovarian CCC was 49.12% and 50.88%, respectively. The tumor characteristics of the samples in the total population and their comparison between the CCC and ASR groups are presented in Tables and .

The distribution patterns of the percentage, intensity and total scores were significantly different between the CCC and ASR groups (Table ; Figs. , , and ). The mean total scores in the ASR and CCC groups were 0.30 and 1.59, respectively. There was a significant difference in IRS between the two groups (p = 0.003), indicating that the IRS was significantly higher in the CCC group than in the ASR group. The results of the binary logistic regression to identify predictors of CCC are presented in Table . Among the study variables, only age was significantly related to diagnosis (p = 0.013).

ROC curve analysis was performed to evaluate the area under the curve (AUC) for AMACR expression in distinguishing CCC from ASR. The AUC was 0.652 (95% CI: 0.565-0.738), indicating moderate ability of AMACR expression to discriminate CCC from ASR. At the cut-off value of 2.0, AMACR expression detected CCC with 47.4% sensitivity and 78.0% specificity. The positive and negative predictive values of AMACR expression for detecting CCC were 81.1% and 57%, respectively. The ROC curve is presented in Fig. .
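Readers who want to reproduce this kind of cut-off analysis on their own IRS data can do so with a few lines of code. The scores below are invented for illustration and deliberately do not reproduce the study's counts; the function simply dichotomizes IRS values at a chosen cut-off and derives sensitivity, specificity, and predictive values from the resulting 2x2 table.

```python
def cutoff_metrics(irs_ccc, irs_asr, cutoff=2):
    """Diagnostic metrics when 'IRS > cutoff' is called positive for CCC."""
    tp = sum(score > cutoff for score in irs_ccc)
    fn = len(irs_ccc) - tp
    tn = sum(score <= cutoff for score in irs_asr)
    fp = len(irs_asr) - tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
    }

# Invented IRS values, for illustration only (not the study data)
irs_ccc = [4, 3, 5, 1, 2, 6, 0, 3]
irs_asr = [0, 0, 1, 2, 0, 3, 1, 0]
print(cutoff_metrics(irs_ccc, irs_asr, cutoff=2))
```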
In our study, we examined AMACR immunohistochemistry in a series of 50 endometrial ASR and 57 endometrial and ovarian CCC samples. Our results showed a significant difference in AMACR expression between CCC and ASR (p = 0.003), indicating that the IRS for AMACR expression was significantly higher in the CCC group than in the ASR group. In 2020, Ji et al. evaluated IHC markers to differentiate endometrial CCC from diagnoses that mimic its morphology, including ASR. The findings of their study added to the previous literature regarding the usefulness of Napsin A, HNF-1β, and estrogen receptor (ER) in CCC diagnosis. They demonstrated that argininosuccinate synthase (ASS1) and ER were the only markers that could potentially discriminate CCC from ASR. They reported that Napsin A and HNF-1β were highly expressed in ASR, similar to CCC, but ER had 100% sensitivity and 88.2% specificity and ASS1 had 63.6% sensitivity and 95.1% specificity for diagnosing ASR. Pors et al. (2019) surveyed AMACR expression in a series of 55 endometrial/cervical CCC and reported that 75% of CCC cases were AMACR stained and that the staining was more likely to be strong and diffuse. Fadare et al. also showed that the sensitivity and specificity of AMACR expression in classifying CCC were 75% (95% CI: 0.61-0.86) and 79% (95% CI: 0.66-0.88), respectively, with an odds ratio of 11.62 (95% CI: 5-28, p < 0.001) and an AUC of 0.79 (95% CI: 0.68-0.88).

These findings indicated a strong association between AMACR expression and CCC, making AMACR a relatively robust diagnostic test. However, the practical utility of evaluating AMACR expression may be limited by the focal nature of its expression, as focal expression is seen in 32% of AMACR-positive CCC cases, as well as by its expression in 15-22% of the non-CCC histotypes. AMACR expression was reported to be negative in ASR cases. Russell Vang et al. (2004) suggested that IHC staining for Ki-67 and p53 may help distinguish endometrial ASR from CCC and other high-grade carcinoma types. To the best of the authors' knowledge, no study has been conducted to investigate AMACR expression in ASR cells and compare it with CCC. Our study showed that AMACR can be a potentially useful marker for distinguishing ASR from endometrial CCC. However, in view of the lower expression of AMACR in CCC cases in our study compared with previous studies, different clones of this antibody should be investigated to identify the best clone.

Among the study variables, only age was significantly related to a CCC diagnosis (p = 0.013). This finding indicated that the risk of a CCC diagnosis increased by 78.3% with increasing age, which could be explained by the fact that most ASR cases occur at a young age and are associated with pregnancy or hormone therapy. The utility of AMACR as a single marker, or as a putative complementary marker within a panel, in distinguishing CCC from ASR needs to be compared with ER and ASS1 in further studies. The power of the regression analysis was 75%, which was smaller than the estimated power of the study (80%) and could be considered a limitation of the study. Therefore, it is possible that the observed relationship might not remain significant in a larger sample size. As this study included all the eligible samples from a single pathology laboratory, further multicenter studies are required to confirm the findings of this study.

In summary, this study described the potential utility of AMACR as a diagnostic adjunct in distinguishing CCC from ASR. However, the utility of AMACR as a single marker or as part of a panel with other immunohistochemical markers, including ER and ASS1, should be further evaluated.
Nuclear imaging and therapy in oncology in Poland in 2021–2022
2437e0fc-86b4-4aad-a062-1ab7f18fc1cc
10088604
Internal Medicine[mh]
“If your mother does not teach you, the world will…”: a qualitative study of parent-adolescent communication on sexual and reproductive health issues in Border districts of eastern Uganda
c1e0688c-6ae5-492c-92c8-68524fe76a19
10088803
Health Communication[mh]
Globally, adolescents aged 10–19 years in sub-Saharan Africa (SSA) bear a disproportionate burden of sexual and reproductive health (SRH) challenges . The region accounts for the highest rates of early marriage, adolescent pregnancy, unsafe abortions, complications during pregnancy and childbirth , and HIV transmission , contributing to high morbidity and mortality rates. The World Health Organization (WHO) estimates that one in 20 adolescents contract a sexually transmitted infection (STI) each year . According to the 2016 Uganda Demographic and Health Survey (UDHS), teenage pregnancy stands at 25%, with rural areas (27%) having higher rates than urban areas (19%). Teenage pregnancy statistics worsened during the 2020–2021 COVID-19 era compared to the pre-COVID-19 period. In some districts such as Namisindwa and Amudat, teenage pregnancy increased by over 50% during the COVID-19 pandemic that disrupted provision of SRH services . Access to timely sexual and reproductive health information and services is fundamental in improving SRH outcomes for adolescents. Studies elsewhere have shown that media, peers, teachers, and health workers are the main source of SRH information among adolescents . However, information from peers and media may be incorrect leading to misrepresentation making adolescents vulnerable to poor sexual and reproductive health outcomes. Some studies have shown that adolescents also prefer obtaining SRH information from parents . Thus, parent-adolescent communication on SRH issues has the potential to prevent children’s involvement in risky sexual behaviors and empower them with decision-making skills . Okigbo, Kabiru in their study among adolescents living in Kenyan slums noted that male adolescents who reported communication with their mothers were less likely to transition to first sexual intercourse compared to those who did not. Despite these benefits, such important conversations seldom occur in many settings in SSA. In Uganda, just like most SSA countries, several factors prevent discussions between parents and children. Parents are generally uncomfortable discussing sex-related issues with their children and have limited knowledge and skills to communicate effectively on SRH issues . A qualitative review on barriers to parent-child communication on SRH issues in East Africa found gender differences, level of education, parents’ occupations, religion, and social-cultural norms as key barriers to communication about SRH . Previous studies on parent-child communication about SRH in Uganda have targeted adolescents in a school setting , only considered parents perspectives and among very young adolescents (10–14 years) . Furthermore, these studies have only been conducted in urban and peri-urban settings in non-border settings . To the best of our knowledge, there are no published studies on parent-adolescent communication on sexual and reproductive health in border districts of Uganda where the population is very mobile and highly engaged in busy commercial activities which provide increased opportunities for engaging in risky sexual behaviors. Border areas have a higher HIV prevalence compared to non-border areas . Mobile populations at the borders may lack access to SRH services and those working away from home usually engage in casual sexual relationships while traveling. This study aimed to fill this gap by assessing the practices, barriers, and facilitators of parent-adolescent communication about SRH in two Eastern Uganda border districts. 
In this study, sexual and reproductive health refers to a wide range of topics, including abstinence, methods of contraception, HIV/AIDS and other STIs, unwanted pregnancy, condoms, sexual intercourse, and menstruation. Study design and setting Data collection was conducted between 2nd and 18th May, 2021. We used a qualitative research design to gain a deeper understanding of the facilitators and barriers of parent-adolescent communication on SRH issues in two border districts of Busia and Tororo, located in Eastern Uganda. According to the 2014 National Population and Housing Census , Busia and Tororo have a population of 323,662 and 517,082 respectively. Busia and Tororo share borders with Kenya and host the busiest ports of entry in Uganda. The predominant ethnic groups in the study districts are the Samia and Itesot in Busia, and the Japhadola and Itesot in Tororo. They have a diverse population comprising truck drivers and other transporters, cross-border traders, sex workers, border officials, border town residents, and tourists/visitors. As such, the situation in the border districts of Uganda is dire owing to the cross-border trade and transient populations, which elevate the risk of poor SRH outcomes. Residing in border areas present unique economic and social challenges such as extreme poverty, family separation (parents working in neighboring countries) which may result into adolescents seeking high risk jobs such as vending and cross-border trading. This exposes them to multiple vulnerabilities, such as transactional sex, having multiple sexual partners and high HIV vulnerability . Owing to the cross border work opportunities, many mothers cross the border for domestic employment for extended periods of time, thus, there is an increasing number of single parent households in Busia and Tororo. The main religion in the area is Christianity, while the main economic activities are cross-border trade, small-scale business, subsistence farming, sand mining, stone quarrying, fishing and gold mining. Communities in Busia and Tororo practice a popular funeral fundraising gathering called “Disco Matanga” loosely translated as ‘disco at a funeral’ whereby during a funeral, adults fundraise for the burial expenses for the departed. This fundraising is done in the night with loud music playing all night which attracts both children and adults. Reports indicate that many children (as young as four years) attend this gathering unaccompanied by parents. Study population The study population were parents of adolescents aged 10–17 years, adolescent boys and girls aged 10–17 years, and key informants. While the World Health Organization classifies adolescents as those aged 10–19-year-old, our study focused on 10-17-year-olds due to the unique legal and policy implications faced by this age group as compared to older 18–19 year-olds who are of legal age of consent according to the Uganda legal consent age of 18. Parents in this study referred to a biological mother/father or female/male caregiver of the adolescent (aged 10–17) who must have lived continuously with the adolescent for at least one year prior to data collection. Sampling A multi-stage stratified sampling design was used. From each district, two subcounties were randomly selected. A total of four sub-counties were selected – two from each district. From Tororo, Malaba TC (urban) and Mella subcounty (rural) were selected. 
In Busia district, Dabani (peri-urban) and Buhehe (rural) were selected using computer random numbers using Microsoft Office Excel programme. From each sub-county, two parishes were randomly selected. From Malaba TC, Obore and Amagoro parishes were selected. From Mella, Apokor and Mella parishes were selected. From Dabani, Buyengo and Dabani parishes were selected. From Buhehe, Bulwenge and Buhasaba parishes were selected. Finally, a total of 10 villages were selected using simple random sampling from these parishes. From each village, purposive sampling was used to identify households with parents who have children aged 10–17 years. Only one child and one parent was randomly selected from each household for interview. Parents with adolescent children were identified with the help of local leaders in the community. Data collection methods Focus group discussions (FGDs) and key informant interviews (KIIs) were the methods used for qualitative data collection. FGDs were used to capture a wide range of views and enable interaction between participants with differing experiences regarding parent adolescent communication which provided greater insight into attitudes, perceptions, beliefs and practices. The FGD guides were translated into Lusamia, Japhadola and Ateso, the predominant local languages. We conducted 8 FGDs with fathers (n = 2), mothers (n = 2), boys (n = 2) and girls (n = 2) adolescents in separate specific gender- groups. The FGDs were disaggregated by sex to allow for free expression of views during the discussion of potentially sensitive issues. We conducted four FGDs in each district (Table ). Owing to the observation of the COVID 19 standard operating procedures (SOPs), each FGD had six participants and lasted approximately 1 h and 30 min. All FGDs were conducted by a moderator and a note-taker. Consent from participants was sought to audio record the discussions. The interviews were conducted by the research team members (PN, BK, SOW & PK) along with a team of 6 youthful male and female research assistants (RAs). The research assistants used were below 25 years of age. This was to avoid much age disparity between the research assistants and the participants in order for the respondents to communicate freely. Given the sensitivity of this topic and cultural traditions in the study contexts, participants were interviewed by an interviewer of the same sex. Male research assistants moderated FGDs which comprised of male participants and females did the same for FGDs of females. The RAs had bachelor’s degree qualification in social sciences with significant experience in working with children, collecting data on sensitive topics, including sexual behavior, and were proficient in the local languages (Lusamia, Japhadola and Ateso). All research assistants were trained on research ethics, principles of qualitative data collection, and the study procedures and instruments. To ensure privacy, the FGDs were held in an open space (within the household compound) which offered privacy during the interviews with children so that their responses are not heard by their parents. FGD participants were provided with refreshments as compensation for their time. Additionally, 25 KIIs (13 in Tororo and 12 in Busia) were conducted with four categories of key informants that work with children: Non-Governmental Organizations (NGOs), Community Based Organizations (CBOs), officers from the District Local Government (DLG), and community leaders consisting of religious, cultural and local leaders. 
Key informant interviews elicited information on parents' knowledge, attitudes, and practices regarding parent-child communication (PCC) on SRH; the determinants of PCC; the facilitators and barriers of PCC; and parents' and children's preferred approaches to PCC on SRH in eastern Uganda. The KIIs were conducted in English using a semi-structured interview guide. The duration of the interviews ranged between 30 and 45 min.

Data collection tools
Three semi-structured interview guides (key informant guide, FGD guide for parents, FGD guide for adolescents) were developed. The questions were based on a review of the literature, field experience, and the research objectives. Areas explored included: adolescents' and parents' knowledge, attitudes, practices and preferred approaches to parent-child communication; barriers and facilitators of parent-adolescent communication; frequency and timing of SRH communication; experiences and perceptions of SRH communication; and preferred sources of SRH information.

Quality control and assurance
We recruited competent interviewers who had experience in working with children and could speak the local languages. Training of interviewers was conducted to ensure a detailed understanding of the objectives, process, and output requirements. Close supervision of the research assistants by the research team ensured that data were collected in a manner that maintained data integrity. The tools were translated from English into the local languages by a language professional and back-translated to English to ensure conceptual equivalence and cultural sensitivity. The tools were pretested for accuracy in a neighboring district (Namayingo) among population groups similar to the study population before data collection. Furthermore, the use of a tape recorder, careful probing, and interviewing up to data saturation helped to ensure dependability.

Data management and analysis
Following fieldwork, audio files from interviews were transcribed and then translated into English. Handwritten notes were used to supplement information gaps in the audio-recorded transcripts. Transcripts were thematically analyzed using an inductive approach. Two members of the study team (PN & BK) read and re-read transcripts to become familiar with the data. Transcripts were annotated with initial codes relevant to the study objectives, which formed the initial coding frame. The codebook was discussed and agreed upon by all members of the research team. The developed themes and sub-themes were then entered as codes into NVIVO 12 software. The two researchers (PN & BK) independently coded transcripts and met regularly to review for consistency. Discrepancies were resolved through discussion and input from the other researchers. New codes were added as they emerged, and analysis continued until no new codes were identified.
Study participants' background characteristics
Tables and show the background characteristics of the study participants. There were equal numbers of males and females engaged in all FGDs. All parents were aged between 28 and 55 years; 23 were married while 1 was divorced. The adolescents were aged between 10 and 17 years, and none were married. We selected 25 key informants, 11 females and 14 males. The participants comprised 8 community leaders (6 religious leaders, 2 cultural leaders), 11 officers from the District Local Government, 2 from community-based organizations, and 4 from non-governmental organizations. Their ages ranged between 25 and 75 years. Most of them were married and belonged to the Christian faith.

Parent-adolescent communication practices in Busia and Tororo border districts
Participants were asked whether discussions on SRH between parents and children occur within the community. They mentioned that in the past, parents were not expected to discuss SRH with their biological children; this role was delegated to grandparents, paternal aunties and uncles. However, many participants recognized that times have changed due to the consequences of HIV/AIDS, high teenage pregnancy, the influence of media and busy work schedules.
Majority of the study participants acknowledged the key role parents play in communicating SRH matters with adolescents, however, they reported that only a few parents discuss SRH with their children. Parents would be the best people to communicate to their children, but few parents do that. So they play a very small role (Mother, FGD, Busia). I would say the parents have not done their part. …if the parents would step in and take lead, we wouldn’t be having this outburst of teenage pregnancy… (KI, NGO, Tororo). Adolescents reported that most SRH discussions were spontaneous and often triggered by: parents perceiving the child’s behavior as risky, signs and symptom of disease among adolescents or an unpleasant occurrence in the community (such as a neighbor’s teenage daughter falling pregnant). When you find your daughter at the truck parking yard [parking yard for cargo trucks at the border] then you should talk right there. That means she is spending time with truck drivers who will make her drop out of school (KI, DLG, Busia) …if I have a swelling on my penis, I tell my mother what I am feeling (Boy, FGD, Tororo) Examples of behaviors that parents perceived to be risky were: indecent dressing, being part of “bad” peer groups, movements late in the night, and associating with the opposite sex. Parents and adolescents were asked to describe the topics that are discussed when they have conversations on SRH matters. The responses from parents and adolescents were similar. The discussions mainly focused on abstinence from premarital sex, pubertal changes, and relationships with the opposite sex, STIs including HIV/AIDs, teenage pregnancy and risks associated with late night movements. Some talk about HIV/AIDS especially on ways how they can prevent contracting it. For the older children, parents tell them about issues of sleeping with boys, about STIs or getting pregnant (KI, Community leader, Tororo) . Parents were asked if there are sex-related topics that should never be discussed with adolescents. A few parents were of the opinion that there were no forbidden topics. Topics on “sexual intercourse” and contraceptive use especially condom use were rarely discussed. ... sex is difficult to talk about…telling them that ‘if you are to have sex, do this and that’ most parents cannot say that to the children. (Mother, FGD, Busia) They felt that adolescents need to understand sexual and reproductive health issues and implications of making wrong decisions. This information, they believe, should be provided to adolescents at home so that they are not misinformed by strangers. When asked about the age they thought was right to talk to their children about sex, most said the discussion should start once the child reaches adolescence. Others mentioned that it was when girls start menstruation (at about 10 years), whereas among the boys, it was at a later age (< 13 years). However, a few parents felt that these discussions should start as early as 5 years among both boys and girls. Gender differences existed in parent adolescent SRH communication with more caution targeting girls as compared to boys.…Mothers are expected to speak to girls and fathers to the boys. “It’s very hard for the daughter to tell the father her secrets (Father, Busia, FGD). Some few participants mentioned that SRH education is almost nonexistent among boys and their parents. They believe boys can thrive without guidance or direction. …boys grow like weeds. 
They don’t get pregnant; if he is unlucky he is imprisoned for impregnating a girl (Father, FGD, Tororo) Facilitators of parent-adolescent communication on SRH issues The following themes emerged as facilitators: good parent-child relationship, the role of the mother, and education level of the parent. I. Good parent-child relationship Participants mentioned that when there is a good parent-child relationship, defined as the ability of parents and children to approach each other and discuss any SRH issues openly. Adolescents who are close to their parents are motivated and able to initiate discussions on SRH matters. On the other hand, when the relationship is poor there is no communication. If the child and parent are free with each other, it is easy for them to talk. You may be willing to talk to her, but she isn’t free with you, she will end up seeking advice from the neighbors or friends just because she is not free with you. (Mother, FGD, Busia) Children will always open up to a person who is friendly, welcoming and does not discriminate. Someone who has time for them they will always open up to that person. (KI, NGO, Tororo) Parents should be able to have this free conversation between themselves and the children to make the children gain their trust so that they can be able to tell them in case anything happens. (KI, DLG, Tororo) Other participants reported that having a good parent child relationship resulted from being a suitable model for the adolescents which was an enabler for parent-child communication on SRH matters. Children felt that they should receive guidance from parents with a good conduct. You [parent] are telling them to abstain, yet you have very many children with different fathers. So sometimes they feel they should listen to someone who has a good record (KI, DLG, Busia) II. Role of the mother Participants’ narratives suggest that mothers were key in influencing parent-child communication. Given their gender role as care givers and home educators, mothers dominated parent-child communication on SRH matters. Mothers were considered close to their children and spent longer periods of time with them than fathers. Mothers perceived themselves to be better prepared and more approachable by their children. Mothers are always available to talk. They are gentler when dealing with us (Girl, FGD, Busia) Most of the children associate with their mums, so, it’s easy for them to tell their mum what is happening to their bodies. (KI, CBO, Tororo) Others referred to a mother as one who can be trusted with secrets and usually finds ways of helping children to address SRH issues. Mothers were also considered sympathetic and less harsh which made them more approachable and had more experience in discussing SRH issues with their children. When you tell her [mother] about something, she will not tell the neighbors about your issue, she will keep it a secret. (Girl, FGD, Tororo) Study participants noted that many fathers abdicate their roles of communicating about SRH to the mothers. Fathers were too strict, unapproachable, unavailable and too be busy to listen to children. They send the children, including boys to the mothers for counsel which is also a driving factor as to why mothers are most often spoken to. Most children are afraid of their father, even a boy who wants a new book, he will still go to the mother... They fear fathers because they are hostile to them…(Boy, FGD, Busia) III. 
Education level and exposure of the parent
Parents with higher levels of education were better positioned to communicate with their children about SRH compared to those with less education. Such parents had better communication skills and were more knowledgeable and able to respond to technical SRH questions raised by children. Less-educated parents may feel uncomfortable talking to their children about SRH. Educated parents have information so they can explain some of these things to their children. When I look at a parent who is a school dropout; what information will he or she give to a child? (KI, community leader, Tororo)
Barriers to parent-child communication on SRH
I. Cultural norms
Cultural norms were the most commonly reported barrier to parent-child SRH discussions; these norms made it unacceptable for parents and children to openly discuss SRH matters. Many parents reported that parents in their setting did not discuss sex matters with their children because it is a taboo; it is considered an abomination to speak about sex with your child. They perceive SRH matters to be private and believe that children will come to know about these issues automatically without any discussion with parents. …They feel it is not right to speak to your child about sex. They still think sex is a bad thing, it is private. It should be discovered by you who is having it and it's not a matter of discussion. Yeah. Yeah. It is a taboo. (KI, CBO, Tororo) Closely associated with cultural norms was embarrassment, which inhibited communication on the part of both children and parents. The majority of parents revealed that embarrassment, shame and awkwardness kept them from initiating the discussion with their children, and this was corroborated by the adolescents. Parents felt "embarrassed" talking about SRH with their children because culture labels SRH topics as taboo. They also fear to approach us. Unless you find one who is very brave to confront you, most of them are shy just like us the parents. She will not come and tell you about the man who is trying to convince her into a relationship. (Mother, FGD, Busia) …Some time back, my mother wanted to give me condoms, but she was fearing to tell me. …she told me that I want to give you something, She placed it at a table and just directed me …there is something there …you go and pick it, eeeeh (Boy, FGD, Busia) …things like boys seducing you is your secret because you feel shy sharing such information with your parent and so you keep it as your secret (Girl, FGD, Busia) Given these cultural limitations, parents and children find it difficult to discuss SRH issues directly, resulting in parents speaking to the children in parables, which limits comprehension. For example, a girl from Tororo used the expression "If your mother does not teach you, the world will…", meaning that if morals, values, and good character are not imparted at home, then you will learn from the hard knocks and problems that result from a lack or neglect of instruction. Another young girl reported that her mother keeps saying: "…this world is very bitter and very dangerous you have to be very careful with your life" (Girl, FGD, Busia). Another parent reported "…the world has gone wild, sicknesses are coming like water" (Father, FGD, Tororo). Most of the parents are reserved and tend not to discuss details; they just caution the children, for example: "I don't want to see that you move with the boys.
I don't want to see you getting pregnant, I don't want to see you in the company that may mislead you, so that is the much they can open up with their children" (KI, CBO, Busia). They believe that words associated with SRH are obscene and would expose children to inappropriate information that could result in experimenting with sex or encourage early sexual activity. "It's like you can't talk about sexual relationship it is like umm you are encouraging them to try out … Umm the word sex is culturally forbidden" (KI, DLG, Tororo) …Do you want to make me old [grandmother] yet I am still young? (Mother, FGD, Tororo) …they fear telling their children to use condoms because they feel the child will engage in sex knowing that it was the mother who advised her. (Mother, FGD, Busia). Due to the sensitivity of SRH issues, some parents revealed that they would involve a "third party", usually a paternal aunt, uncle or teacher, to discuss SRH matters with the adolescents. Some parents expect teachers to inform or teach children about SRH, yet teachers rarely do so. A key informant from Busia reported that the moment he learns that SRH has been integrated into the school curriculum, many parents, himself included, will stop discussing SRH with their children. Participants explained that children are free with the teachers. ... You can feel ashamed discussing these things with your parents but if the senior woman teacher talks, you can't feel ashamed (Girl, FGD, Tororo)
II. Busy parents
Parents' occupations determined the time available to spend with their children. Parents increasingly prioritized the demands of employment over child care. Many participants reported that parents were too busy to dedicate time to talk to their children about SRH. Owing to work demands, some parents do not live with their children. Concerns were expressed about absent mothers who travel for domestic employment elsewhere (mainly to neighboring Kenya) for extended periods of over a year, leaving fathers who give little time to children. Other parents leave very early in the morning and return late at night when they are tired, and the children are asleep. Because we the parents are busy, we leave very early in the morning and come back at night. And even when we come back we say we are tired, we don't have time to talk to children mmmh (Mother, FGD, Tororo) …if the parent comes home late, there is minimal social interaction with the children (KI, Community leader, Tororo)
III. Parents' lack of knowledge
Parents felt they lacked the knowledge, appropriate skills, and approaches to talk to their children about SRH, making it challenging to initiate a conversation. Some parents expressed the need for well-packaged and age-appropriate SRH information to enable them to address these topics. Parents reported that this lack of knowledge created a lack of confidence, making it difficult to find the courage to start a discussion with their child. A few parents had the knowledge but lacked the confidence to discuss SRH matters with children. Children also mentioned that their parents were less knowledgeable about SRH. This perception kept them from initiating a conversation with their parents. …children nowadays know a lot more than what we the parents know. You might tell her something thinking it is new yet she knows much more than you do (Mothers, FGD, Busia) Some parents do not have information.
We are told to talk to our children about these things, but, how should we begin, we don't know what to say (FGD, Fathers, Busia) … I think some parents do not have proper information and others don't know how to talk to children (Girl, FGD, Tororo) Some participants raised concerns about excessive alcohol consumption and drug abuse among both parents and children, a serious challenge that hinders SRH talk. Parents, especially fathers, get drunk and are unable to guide adolescents to make wise SRH decisions. Fathers report to a Malwa [local brew] joint at six am … They go back home after 9 or 10 p.m. just to sleep. They do not have time for children (KI, NGO, Tororo) Some don't talk to their children because they are drunk all the time (Boy, FGD, Tororo) …you find 10 year olds with sachets of Waragi [local brew] and such a child once he or she is drunk you cannot bring up a conversation on SRH (KI, DLG, Tororo).
When most parents talk to their children, they use authoritative, reprimanding language, especially when they observe cases of teenage pregnancy in the community. Respondents reported that fathers are tough, harsh, and instill fear in the children. Some threaten and use corporal punishment to discipline or warn adolescents concerning inappropriate behavior, especially sexual activity. Owing to the harsh approach used by fathers, most girls and some boys prefer sharing their concerns with mothers. One girl from Tororo reported that when she asked her father for a mathematical set, he angrily told her, "…but you are a girl, can't you think of a way of making that money to buy the set?" One boy said: There are some fathers you tell your concerns, but they just start quarreling, accusing you of being spoilt. This creates fear and makes children keep quiet. (Boy, FGD, Busia) As a result, many children do not communicate with their parents. They tend to be shy and fear punishment. They detest the harsh language used by some parents and only communicate in case of a crisis. The children report that some parents curse the children and tell them not to come back to them in case of problems. This approach can be counter-productive and contributes to rebellion and early marriages. In addition to attributing early marriages to the inability to cater for children's needs, a mother confirmed the children's observations: Some girls get married at an early age because of us parents. Sometimes, we are very hostile and yet if you don't take good care of the children…The child will run away and get her own home (Mother, FGD, Busia) Unfortunately with our parents the communication I should say is poor because it is about shouting, it's more of discipline, …it's about seeing a girl with a boy then they begin beating the child, shouting that if you get pregnant I will chase you away, I will cut your head off yeah so it's more of like disciplinary action (Girl, FGD, Tororo). Many parents indicated that harsh language is used in an endeavor to make the children understand the severity of the issues at stake. Such language is also used when the children are disobedient, for instance, when they break household curfew regulations. This approach is expected to ensure that children do not start or continue with inappropriate behavior. I think you need to be harsh and threaten them with police involvement because that is the only way they understand. Whatever the child does comes back to you the mother….So in a way we need to be harsh to them so that they take whatever we tell them seriously.
(Mother, Busia, FGD) You need to be tough with them because if you bring up such issues in a joking way, then she will take it lightly as a joke. That said, there are some sensitive issues that you ought to bring out in a polite way to earn their respect and confidence like things to do with their menstrual periods for the very first time. (Mother, Busia, FGD)
The tables show the background characteristics of the study participants. There were equal numbers of males and females in all FGDs. All parents were aged between 28 and 55 years; 23 were married while one was divorced. The adolescents were aged between 10 and 17 years and none were married. We selected 25 key informants, 11 females and 14 males. They comprised 8 community leaders (6 religious leaders, 2 cultural leaders), 11 from the District Local Government, 2 from community-based organizations, and 4 from non-government organizations. Their ages ranged between 25 and 75 years. Most of them were married and belonged to the Christian faith.
Participants were asked whether discussions on SRH between parents and children occur within the community. They mentioned that in the past, parents were not expected to discuss SRH with their biological children; this role was delegated to grandparents, paternal aunties and uncles. However, many participants recognized that times have changed due to the consequences of HIV/AIDS, high teenage pregnancy, the influence of media and busy work schedules.
To the best of our knowledge, this is the first qualitative study to assess the facilitators and barriers of parent-adolescent communication on sexual and reproductive health in a Ugandan border setting. We captured the views of 10–17-year-old adolescents, parents, and key informants in two border districts located in Eastern Uganda. The findings highlight several important points that are useful for designing interventions to improve SRH communication between parents and children. This study found that only a few parents had SRH discussions with their children. This finding is in line with those of previous studies from sub-Saharan Africa. In their study among adolescents aged 13–17 years in Nigeria, Mbachu, Agu found that the majority of adolescents had never discussed sex-related matters with their parents. Among those who engaged in SRH discussions, these mainly focused on abstinence and HIV/AIDS. This finding is also consistent with the findings of Mbachu, Agu, Wamoyi, Fenwick, and Seif and Moshiro. A possible explanation might be that border settings characterized by cross-border movements and trade tend to have disproportionately high rates of prostitution, violence and HIV/STI prevalence compared to non-border areas; hence parents' emphasis on perceived effective preventive measures. Another possible explanation is that it is common for parents in conservative cultures to focus on abstinence-only messages, given that contraception and sex are seen as taboo. However, some adolescents and key informants reported that some of the discussions with parents focused on "how to contribute to the household income"; in other words, children were encouraged to engage in income-generating activities.
This is not surprising in a context of high poverty, which requires the involvement of both children and parents in household sustenance. The lack of proper SRH education is reflected in the problems facing adolescents, such as unprotected sex, unplanned pregnancies with unsafe abortions, and HIV/STIs. The implication of this finding is that parents should not be ignored in programs that aim to reduce adolescents' risky sexual behaviors.
The major barrier to parent-adolescent communication in this study was parents' busy schedules due to work pressures, which hindered interaction between parents and children. A previous study by Mmbaga, Leonard also confirmed that busy schedules hindered SRH discussions between parents and secondary school adolescents aged 16–19 years in Tanzania. Another study conducted in rural and urban Uganda by Muhwezi, Katahoire found that communication on SRH issues between parents and their children was also hindered by parents' busy work schedules. A plausible explanation is that Busia and Tororo are predominantly commercial towns with cross-border trading in a context of high mobility. Most men and women leave very early in the morning for work and many return late in the night. It is mainly women who cross the border (to Kenya) for work, leaving behind husbands who rarely engage in SRH communication with children. Absent parents are less likely to have a close and trusting relationship with their children, which affects the communication process, as documented by other studies in SSA. These work-related stressors in a border setting leave little room for SRH discussions between parents and children. Abdication of parental roles leads adolescents to rely on peer influence and social media as the most common sources of SRH information. Unfortunately, studies have shown that information obtained from these sources is either incorrect or false. This is a major cause of early sexual activity and, consequently, of the high rate of unwanted pregnancies and unsafe abortions among adolescents.
Adolescents expressed a preference for discussions with mothers compared to fathers. Several studies in different contexts have demonstrated the key role mothers play in shaping children's sexual and reproductive health decision-making. Mothers were described as approachable, and the discussions as warm and open. A study among Jordanian and Syrian parents (mothers and fathers) of youth aged 15–19 years indicated that mothers perceived themselves as more approachable to their children. Mothers were reported to have a closer bond with the children, which is attributed to socially and culturally formed gender roles and expectations. In their qualitative study examining the impact of gender norms and expectations on parent-child SRH communication in rural south-western Uganda, Achen, Nyakato argued that activities ascribed to girls, such as household chores including cooking, cleaning and caregiving roles, ultimately prepare them to bond with children. Both girls and boys described SRH communication with fathers as non-existent, rare, difficult, and uncomfortable. The presence of the mother has been highlighted elsewhere as having a greater impact on adolescents' sexual behavior.
Many parents lacked confidence in their ability to discuss SRH matters with their children, attributing this to their lack of relevant knowledge and their low level of educational achievement.
Additionally, some parents lacked communication skills and were uncomfortable discussing sexual and reproductive health issues. For example, parents could not explicitly discuss sex with their children. Some had incorrect knowledge, while others did not know what to tell the child, which limited what they could communicate. A knowledgeable parent easily comprehends the importance of SRH communication and forms a favorable attitude towards interacting. Our findings are consistent with those of other studies among sub-Saharan populations. Others said they had not received any SRH education while growing up and therefore found it difficult to talk confidently about issues they did not know much about. This highlights a generational gap, with parents drawing on their own upbringing, in which such issues were not discussed within families.
This study also found that parents avoided topics on condoms and contraception. This finding was also reported in a Tanzanian and South African study. Parents' failure to discuss contraception could arise from fear that such communication would be interpreted as encouraging sexual activity. Premarital sexual activity, especially among adolescents, is strongly discouraged in many SSA settings; thus, discussions on SRH emphasize abstinence rather than contraception. Such selective discussion of SRH topics by parents violates adolescents' right to comprehensive and accurate health information.
The findings show that parents adopted a harsh and authoritarian approach to SRH communication, which made it difficult for children to openly discuss their SRH concerns. An open, loving and supportive relationship between parents and their children was a foundation for good parent-child communication. It was clear that adolescents who enjoyed a good relationship with their parents, especially their mother, were able to discuss any issues openly. However, these discussions often started late (at the onset of pubertal changes), when adolescents had already engaged in sexual activity. Late communication, particularly after adolescents have begun sexual activity, is unlikely to influence decisions to abstain from sex or practice safe sex. A study by Downing, Jones argued that parent-child SRH communication would be more effective if it took place before sexual debut to reinforce protective factors. These findings underscore the urgency of enabling parents to initiate SRH communication with adolescents at younger ages in a context of high-risk sexual behavior (transactional sex and multiple sexual partners) to avoid unwanted pregnancies and associated negative SRH outcomes.
Culturally, it is taboo for parents to speak to their children about sexual and reproductive health matters. This finding is similar to previous studies, which also established that cultural norms do not allow parents to talk directly to their children about sexual and reproductive health. This made both parents and adolescents too embarrassed to engage in SRH discussions. Ugandan culture ascribes to paternal aunts (sengas) and uncles (kojjas) the role of main source of SRH knowledge for adolescents. This arrangement was possible in the old extended-family environment, but as families have become less extended, and with increased exposure to other sources of information such as schools, peers and social media, the cash economy and the highly mobile population in border settings, the role of paternal aunts and uncles has diminished.
These findings suggest that cultural norms and conservative attitudes do not offer a friendly environment where issues of SRH can be honestly and openly discussed. As a result, adolescents miss vital and beneficial SRH information and guidance.
This study set out to explore the facilitators and barriers of parent-adolescent communication on SRH in two Eastern Uganda border districts. It has shown that a good parent-child relationship, the role of the mother and parents' level of education were the main facilitators of parent-adolescent communication. Conversely, parent-adolescent communication about sexual issues is reduced by parents' busy work schedules, restrictive cultural norms, and parents' limited knowledge and skills to initiate SRH discussions with children. The results of this study highlight the unique sexual and reproductive health challenges faced by adolescents in border settings, placing them at greater risk of poor SRH outcomes. This vulnerability creates an opportunity to engage all stakeholders, including parents, in potential strategies to improve SRH communication between parents and adolescents in high-risk settings such as borders: deconstructing sociocultural norms around adolescent sexual and reproductive health, sensitizing parents and developing their capacity, encouraging initiation of SRH discussions at earlier ages, and integrating parent-adolescent communication into parenting interventions.
Limitations
A limitation of this study is that it was conducted among adolescents and parents living in border settings of Uganda, and the findings may not be applicable to adolescents living in other urban or rural settings.
Patient experiences of, and preferences for, surgical wound care education
c68bac54-1d37-4b2f-afd1-5e6308ac3ea8
10088828
Patient Education as Topic[mh]
INTRODUCTION Internationally, one-in-four surgical patients develop post-operative complications within 14 days of hospital discharge, with surgical site infection (SSI) being the most common. In a recent systematic review and meta-analysis, the cumulative incidence of SSI for general surgery was 11%. Patients with post-operative complications experience negative psychosocial and functional outcomes and prolonged hospitalisation, and 6% experience unplanned readmissions. In 2007, direct costs associated with hospital re-admission, prolonged hospitalisation and reoperation were estimated at US$8338–$29 595 per patient in United States (US) Veterans Health Administration hospitals, depending on severity. Post-operative complications like SSIs are preventable; advocates assert that enabling patient participation in self-management of wound care can help to prevent SSI; however, rigorous studies are required to substantiate this claim.
Surgical wound care management varies in complexity. For some patients, surgical wounds are "clean" and heal by primary intention. This type of wound care can be straightforward, such as patients removing dressings themselves and/or a general practitioner removing staples at 7–10 days. Other patients are discharged from hospital with complex dressings such as negative pressure wound therapy, or with wounds that dehisce, some of which are intentionally left open to heal (e.g., breast abscess); all of these require more intensive treatment, multiple operations and frequent community care. Regardless of wound complexity, patients face challenges managing their wounds in the community.
Timely hospital discharge education may equip surgical patients with the knowledge and skills to participate in wound care management once home. Patient education includes information provided to patients and, when delivered effectively, can enable patient understanding. Researchers have found that patients who perceive they received more information than expected at hospital discharge had significantly fewer wound complications than those who received less information than expected. It has been estimated that half of hospital readmissions might be prevented with better patient discharge education. There is growing interest in the importance of patient-centred discharge education, which entails delivering education in a way that (1) encourages partnerships and shared decision-making between the patient and healthcare professional; and (2) is individualised and based on patient needs and preferences. Delivering discharge education in this manner increases patient confidence, empowerment and ability to self-care once home. However, practice variability means some patients do not receive discharge education, while others receive piecemeal information during other hospital activities, and often patients do not understand the importance of the information being shared ad hoc. Further, when discharge education does occur, it is not truly patient-centred, due to a lack of shared decision-making. Also, patients' preferences for, and experiences of, discharge education are not always in concordance; for example, some patients desire written information but do not receive it. Notably, patients with SSI have reported receiving inadequate discharge instructions about surgical wound care, highlighting possible links between discharge education and SSI.
The aims of this study were to: describe patients' experiences of surgical wound care discharge education and participation in wound care decisions, and their preferences for discharge education delivery; determine the estimated cost and what demographic factors (age, sex, level of education) predict patients' ability to manage their surgical wounds after hospital discharge; and determine whether patient experiences with surgical wound care discharge education predict their ability to manage their surgical wound after hospital discharge. Understanding predictors of patients' ability to manage their surgical wound care provides a better understanding of individual patient needs and can be used to tailor education. MATERIALS AND METHODS 2.1 Design A cross-sectional survey. 2.2 Setting and sampling This quantitative telephone survey took place at two metropolitan tertiary hospitals in Queensland, Australia. The two sites were chosen as they cater to a broad patient population and perform most types of surgeries. From 2020 to 2021, there were about 44 700 patient admissions across the two sites. From April to September 2021, patients were recruited from a total of 21 wards (day surgery, short-stay and long-stay surgical wards) across the two sites. Ethics approval was granted by the participating health services (HREC/2020/QGC/64063) and university (2020/880). We aimed to obtain a sample size of 330 surgical patients; 165 patients at each site. Based on the literature in this area, we identified 15 potential predictors of the patient's ability to manage their surgical wounds. Hair et al. suggest that 10–20 participants are needed for each predictor (i.e., ≈ n = 300 for this study). Based on our previous research using phone interviews, loss to follow-up during post-discharge phone calls can be high, thus we aimed to over-recruit by 10%, resulting in the target sample of 330 surgical patients. Inclusion criteria were patients: post-elective or emergency surgery with a surgical wound; aged ≥18 years; scheduled for hospital discharge within approximately 48 h of study recruitment; competent to give consent for research participation; and able to complete a telephone interview. Exclusion criteria were patients who were unable to understand English, were receiving palliative care, or were being discharged to a nursing home or other care facility. 2.3 Data collection A nurse researcher at each site was trained in study procedures and was provided with a Standard Operating Protocol. On the day of data collection, the nurse researcher approached eligible patients consecutively at their site and provided an ethics-approved written and oral description of the project, giving patients time to consider their participation. All eligible patients approached were recorded in a screening log in the secure, web-based Research Electronic Data Capture software (REDCap). Patients willing to participate provided written consent, their contact details, and demographic and clinical data (age, sex, type of surgery, highest level of education, and wound location). To obtain a baseline understanding of participants' perceived ability to manage their wounds, they answered one question, "I will be able to take care of my surgical wound at home", which had a 5-point response scale (strongly disagree to strongly agree). A similar question was repeated when the survey was delivered to participants once home, to allow comparison over time.
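As a quick check on the sample-size reasoning above, the arithmetic can be reproduced in a few lines. This is an illustration only; the figures (15 predictors, the upper end of the 10–20 participants-per-predictor rule of thumb, and the 10% over-recruitment buffer) are taken directly from the text, and the even split across the two sites is implied by the 165-per-site target.

```python
# Worked version of the sample-size calculation described above (illustrative only).
predictors = 15        # candidate predictors identified from the literature
per_predictor = 20     # upper end of the 10-20 participants-per-predictor guide
base_n = predictors * per_predictor      # 300
target_n = round(base_n * 1.10)          # 330 after the 10% over-recruitment buffer
per_site = target_n // 2                 # 165 surgical patients at each of two sites
print(base_n, target_n, per_site)        # -> 300 330 165
```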
The survey was administered to participants by telephone, approximately 14 days after surgery (if they were discharged home at that point) or 14 days after hospital discharge from the surgical admission, whichever came first. The rationale for this timeline was that research shows that 75% of post-operative surgical complications occur within 14 days of hospital discharge to the home. Patient demographic data, clinical data and survey responses were recorded in a de-identified manner using a study ID, so that participants' personal information (i.e., name and phone number) could not be linked. 2.4 Survey development A study-specific survey was co-developed with patients, wound care experts and researchers. Through a process of face validity, content validity and pilot testing, 106 items were reduced to 18 items. The survey is named the "Surgical Wounds and Patient Participation Questionnaire" and contains four themes, three about patient experiences of surgical wound care discharge education and one regarding their preferences (see Appendix for further details about items). The experience themes were: (1) wound care discharge education (10 items); (2) participation in wound care decisions (3 items); and (3) patients' ability to manage their surgical wound to prevent wound complications (1 item). The preference theme was: (4) preferences for discharge education delivery (4 items). Response options varied across themes and included items allowing multiple responses, Likert scale responses (strongly disagree to strongly agree) and yes/no/not applicable. Recruited participants were sent a preliminary text message to arrange the telephone call and/or provide a reminder. At the pre-arranged date and time, the nurse researcher telephoned recruited participants. Two follow-up phone calls were made if participants did not answer (three phone calls in total). The nurse researcher recorded all data in a REDCap database. 2.5 Data analysis Data were exported from REDCap into SPSS (Version 27) and cleaned and checked for accuracy. Descriptive statistics were used to compute absolute (n) and relative (%) frequencies, and means and standard deviations or medians/interquartile ranges, depending on the level and distribution of data. Survey items with multi-response options, which largely related to patient preferences, were analysed descriptively. Multiple logistic regression was used to determine the predictors of patients' ability to manage their surgical wounds at home. Missing data amounted to 0.06%; thus, missing values were not imputed. The predictors selected were based on the literature and included: patient age, sex, level of education, experiences of 'wound care discharge education' (Theme 1) and 'participation in wound care decisions' (Theme 2). We also included the hospital site as a covariate to statistically control for this potential confounder. Independent predictor variables were binarised based on the researchers' judgements that these response options were conceptually consistent and could be combined: the 5-point Likert scale became 0 = strongly disagree, disagree and neutral, and 1 = agree and strongly agree; yes/no/not applicable response options became 0 = no and not applicable, and 1 = yes. The dependent outcome measure was the patients' ability to manage their surgical wounds. This outcome had five Likert scale response options ranging from 1 = strongly disagree to 5 = strongly agree. The outcome was non-normally distributed; therefore, prior to the analysis, this variable was dichotomised.
Using the 50th percentile as a cut-off point, the data were recoded as 0 = strongly disagree, disagree and neutral responses, and 1 = agree and strongly agree responses. A model-building approach was used to identify model predictors. Prior to the analysis, the following multiple logistic regression assumptions were checked. First, all predictors were checked for multicollinearity; there were no correlations of .70 or above. Next, relationships between the predictors and the outcome measure were checked using univariate analysis, and predictors that had a P-value of ≤.20 were simultaneously entered into a logistic regression model. For the multiple logistic regression analysis, statistical significance was set at P < .05. 2.6 Consumer and clinician engagement The Guidance for Reporting Involvement of Patients and the Public checklist was used to plan and report consumer and clinician engagement (see Appendix). One consumer and one clinician were team members from early survey development (reported elsewhere) through to manuscript preparation. A second consumer joined the team during data collection. The two consumers had experienced a surgical wound and the clinician was a wound care expert. They reviewed study findings and provided their interpretations, which were used to shape the discussion and recommendations for future research and practice.
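The analysis described above was carried out in SPSS; purely as an illustration of the same workflow (dichotomising the Likert outcome at the 50th percentile, checking pairwise correlations for multicollinearity, screening predictors univariately at P ≤ .20, then fitting the multiple logistic regression), a minimal Python sketch follows. All column names are hypothetical stand-ins, since the study's actual variable names are not reported, and predictors are assumed to already be coded as described in the Data analysis subsection.

```python
# Illustrative sketch only; not the study's SPSS analysis. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical de-identified export

# Dichotomise the 5-point Likert outcome: 1-3 (strongly disagree to neutral) -> 0,
# 4-5 (agree, strongly agree) -> 1, i.e. the 50th-percentile cut described above.
df["able_bin"] = (df["able_to_manage"] >= 4).astype(int)

# Hypothetical predictor set: demographics, site, and binarised survey items.
candidates = ["age", "sex", "education", "site",
              "theme1_item1", "theme1_item2", "theme2_item1"]

# Multicollinearity check: off-diagonal True values flag absolute correlations >= .70.
print(df[candidates].corr().abs().ge(0.70))

# Univariate screening: retain predictors with P <= .20.
retained = []
for var in candidates:
    fit = sm.Logit(df["able_bin"], sm.add_constant(df[[var]])).fit(disp=0)
    if fit.pvalues[var] <= 0.20:
        retained.append(var)

# Multiple logistic regression on the retained predictors; significance at P < .05.
model = sm.Logit(df["able_bin"], sm.add_constant(df[retained])).fit(disp=0)
print(model.summary())
```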
RESULTS Of the 729 patients screened, 637 were eligible and 419 were approached for consent. Of the 330 who provided consent, 270 patients completed the survey (see Figure ). As shown in Table , the mean sample age was 55.1 years (SD 17.9) and more females than males participated. There were differences between sites, with Site 2 participants reporting higher levels of education on average. Site 1 had more orthopaedic and neurology patients, whereas Site 2 had predominantly general surgical patients. 3.1 Patient experiences and preferences Descriptive data for each of the four themes appear in Table and . Theme 1 data show that frequently delivered content included arrangements for follow-up appointments and who to contact if there were surgical wound concerns (see Table ). These instructions were delivered verbally, often with opportunities for patients to ask questions. For Theme 2, most patients (n = 227; 84.4%) reported that medical and nursing staff discussed surgical wound-related pain management options, 165 (61.3%) stated that medical and nursing staff discussed wound care treatment options, and 107 (40.1%) were invited to participate in wound care decision-making. Regarding Theme 3, while in hospital, slightly more than three-quarters of patients (n = 208; 77.0%) stated they would be able to take care of their surgical wounds at home. Two weeks after discharge, most patients (n = 244; 90.4%) reported they undertook their surgical wound care at home.
Table highlights Theme 4 responses, showing that patients preferred wound information delivered both verbally and with printed materials, by medical and nursing staff, at discharge. 3.2 Factors influencing the patient's ability to self-manage their wound at home Fifteen predictors were tested at the univariate stage; 8 had P-values of ≤.2 and were entered into the multiple logistic regression model (Table ). The full model containing all 8 predictors was statistically significant (χ²(8, N = 270) = 24.03, P = .002), indicating that the model was able to distinguish between patients who reported they were and were not able to manage their surgical wound at home. The model explained between 8.7% (Cox and Snell R squared) and 19.1% (Nagelkerke R squared) of the variance in patients' perceived ability to manage their surgical wound, and correctly classified 90.9% of cases. As shown in Table , only two predictors, both of which related to the theme 'participation in wound care decisions', made a statistically significant contribution to the model. The strongest predictor of patients' perceived ability to manage their surgical wound was being invited to share in wound care-related decision-making, with an odds ratio of 6.57 (95% CI 1.45–29.79, P = .02). This indicates that patients who were invited to take part in decisions about wound care were 6.5 times more likely to perceive they were able to manage their post-discharge surgical wound care compared with those who were not involved in decision-making. Patients who agreed that medical staff and/or nursing staff discussed wound-related pain management options were 3.1 times more likely to report being able to manage their surgical wound at home (OR = 3.12, 95% CI 1.03–9.42, P = .04) compared with those who did not receive this information.
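For readers less familiar with how the figures above are derived, the odds ratios come from exponentiating the logistic regression coefficients, and the two pseudo-R-squared values are defined from the model likelihoods; the formulas below are the standard ones, shown only as an arithmetic aid and not as a re-analysis of the study data.

$$\mathrm{OR} = e^{\beta}, \qquad 95\%\ \mathrm{CI} = \left[\, e^{\beta - 1.96\,\mathrm{SE}},\ e^{\beta + 1.96\,\mathrm{SE}} \,\right], \qquad R^2_{\mathrm{CS}} = 1 - \left(\frac{L_0}{L_1}\right)^{2/n}, \qquad R^2_{\mathrm{N}} = \frac{R^2_{\mathrm{CS}}}{1 - L_0^{\,2/n}},$$

where $\beta$ is the model coefficient, SE its standard error, $L_0$ and $L_1$ the likelihoods of the null and fitted models, and $n$ the sample size. As a check, the reported OR of 6.57 corresponds to $\beta = \ln 6.57 \approx 1.88$; working back from the interval 1.45–29.79 gives $\mathrm{SE} \approx (\ln 29.79 - \ln 1.45)/(2 \times 1.96) \approx 0.77$, and $e^{1.88 \pm 1.96 \times 0.77}$ reproduces approximately 1.45–29.8.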
DISCUSSION We found that patients preferred a combination of verbal and written surgical wound care instructions delivered by medical and nursing staff. Study participants reported that this information largely focussed on follow-up arrangements and who to contact in the community. We also found that discharge education that encouraged patient participation in shared decision-making and pain management conversations enhanced patients' ability to manage their surgical wounds at home. However, only 40% of patients surveyed experienced shared wound care decision-making. Using logistic regression, we found that patient participation was associated with participants' perceived ability to self-manage their surgical wound once home. This is consistent with previous qualitative research in which patients identified both shared decision-making during discharge planning and understanding wound pain management as enhancing their recovery. Further, research interventions that increase patient participation in their surgical care have been shown to improve patient knowledge and self-confidence in care management. However, patients in our study reported that they participated in shared decision-making infrequently, which could be due to physicians' preferences for dyadic approaches when preparing patients to self-manage. While shared decision-making has been well-defined for people with chronic wounds in the community, enacting shared decision-making in acute surgical wound care is largely unexplored, representing an area for future research. We uncovered that specific wound care instructions were infrequently provided to patients. This may be problematic, as wound care communication between hospital and community healthcare professionals is limited, causing delays in patient care and increased community staff dissatisfaction. In the UK, surgical wounds account for 730 000–1 840 000 visits to GPs, practice nurses and community nurses annually, costing up to £52 million. Despite this burden, GPs and community nurses lack guidelines and report challenges related to managing surgical wounds in the community, such as feeling pressured to follow hospital orders (even if not best practice) and poor access to hospital wound care products. It has been suggested that patients and families could be a conduit between the hospital and the community by providing specific wound care instructions.
In fact, in another study, surgical patients reported wanting more specific information, such as the rationale for dressing product choices. Overall, increasing the specific wound care information provided to patients could be a priority area for future healthcare improvement, empowering patients to engage in post-operative recovery. We found that most participants desired and experienced verbal information, and about two-thirds wanted and received written information. A blended approach of both verbal and written information is patients' preference and may be optimal given that only 47% of patients recall receiving "verbal only" discharge instructions. Researchers have shown that adding written discharge instructions improved recall by 58%, and video discharge instructions improved recall by 67%. Additionally, we found that some patients desired electronic information-sharing options even though they had not experienced this. Electronic discharge education interventions are increasingly being designed and show promising outcomes. For example, videos can demonstrate to patients how to clean and remove their surgical dressing, and patients find these videos useful. Additionally, mobile health (mHealth) applications (apps) can provide patients with daily wound care education and opportunities to upload wound photos for healthcare professionals to monitor. In that study, the app significantly decreased patients' functional limitations and increased quality of life in the intervention group. Overall, a blended approach of verbal and written information is recommended, as it improves recall and aligns with patient preferences; however, as patients become more exposed to electronic interventions, it will be interesting to monitor their preferences. Our findings indicate that patients preferred surgical wound care information from medical and nursing staff. Other studies provide reasoning for these preferences: patients value regular contact with the medical staff who performed the surgery, inspected the wound and reported on healing, and they recognise the empathic nature of nurses and feel comfortable approaching them for discharge information. Yet, there are barriers to these healthcare professionals providing effective discharge education, such as the medical staff's rushed and impersonal manner and nurses' reliance on medical staff to clarify patient queries. Additionally, 35% of medical residents report they are unsure which member of the multi-disciplinary team should be responsible for discharge education, and nurses often report undertaking this responsibility in the absence of other healthcare professionals. In one study, the introduction of a Nurse Practitioner to the surgical team enhanced responsibility for discharge education and reduced unnecessary emergency department visits. Considering our findings, a clearly defined multidisciplinary approach may ensure optimal and patient-centred discharge education. We acknowledge several limitations. First, our study was conducted across two public hospitals, which may limit the generalisability of findings. While we recruited from 21 different wards, we did not access all surgical wards, thus selection bias may have occurred. However, involving many wards, from tertiary service hospitals that serve large catchment areas, heightens generalisability. A consumer on our team suggested administering the survey to private hospital patients to identify what can be learnt and exchanged across the two hospital systems.
Second, while we found relationships between patient participation and patients' ability to self-manage their wound, the correlational design only permits the measurement of associations, and there may be other variables that are potential confounders. Third, our logistic regression findings had large confidence intervals, meaning there is considerable uncertainty in these estimates. More research with larger samples is required to confirm the findings. Fourth, the findings are at risk of recall bias, as participants reported experiences occurring two weeks prior to their phone call. Finally, consumers and clinicians involved in interpreting study findings were disappointed that family participation was not measured in our survey, highlighting that some wound locations cannot be managed without assistance. We recommend future research that investigates the family's role. CONCLUSIONS In conclusion, our study provides insights into an approach to discharge education for surgical wound care that promotes patient partnership and is based on patient experience and preferences. In terms of preferences, patients prefer discharge information that is delivered both verbally and in writing, by medical and nursing staff, at the time of discharge. In terms of partnership, shared decision-making, patient participation and pain management discussions can increase patients' ability to manage their wound. These results provide a new avenue for enhancing discharge education; embedding shared decision-making processes into discharge education is a critical area for improvement to enhance patient self-management abilities. Additionally, patient preferences for discharge education have been identified, providing the basis for discharge education pathways that meet patient needs. This study was funded by a Griffith University New Researcher Grant. Georgia Tobiano and Sharon Latimer were employed with funding from a NHMRC Centre for Research Excellence (Grant No. APP1196436).
Management of child maltreatment suspicions in general practice: a mixed methods study
50b9b21d-b5ab-4bb2-b2cd-fa36bf458403
10088924
Family Medicine[mh]
Child maltreatment is a major public-health and social-welfare problem, with dramatic consequences for the victim’s physical, mental, and emotional health throughout childhood and adult life . WHO defines child maltreatment as the abuse and neglect that occurs to children under 18 years of age, including all types of physical and/or emotional ill-treatment, sexual abuse, neglect, negligence and commercial or other exploitation, which results in actual or potential harm to the child’s health, survival, development or dignity in the context of a relationship of responsibility, trust or power. A recent meta-analysis shows significantly increased health-related and economic costs resulting from adverse childhood experiences across all European countries . Reports to the social authorities in cases of suspicion of abuse and neglect are mandatory for all citizens in Denmark. However, those who work professionally with children, including health care professionals in all settings, workers in schools, kindergartens, daycare etc., and workers in the sectors of care and support of people with social or other special needs and challenges, have an extended obligation to react when there is a presumption that a child needs help. It has been suggested, however, that up to 90% of child maltreatment goes unnoticed . Studies with adult victims of childhood abuse and neglect describe how victims felt overlooked or ignored by health professionals, even though they considered their precarious situation to be obvious to outsiders . Likewise, it has been shown that children of substance abusers or patients with mental illness often lacked recognition of their precarious situation by their GP . General practice is the front line of the health care system in Denmark and provides expense-free health care visits on demand. Approximately 20% of regular consultations in Danish general practice are with children, and cover everything from three scheduled prophylactic child-well visits during the first year and annual visits until the child turns five combined with immunisations and ad hoc contacts, often with infections or injuries. More than 90% of children attend the first three child-well visits, after which attendance seems to decline slightly . Thus, the general practitioner (GP) and sometimes the practice nurse (PN) may be the most consistent health professional in children’s lives, as they follow them from pregnancy throughout their childhood. Continuity of care is a core principle of the way that general practice is organised, as is timely diagnosis and prioritising those whose needs are greatest . This positions general practice as central in early recognition and reporting of child maltreatment. The longitudinal contact between the GP, PN, the child, and the rest of the family may offer opportunities to identify children at risk. It has been argued that GPs seem reluctant to report on their suspicions of child maltreatment , possibly due to a lack of knowledge about symptoms and how to deal with suspicions, uncertainty about the diagnosis, and fear of impeding the relationship with the family . In a pilot study we found that in cases of obvious signs of maltreatment, GPs are not in any doubt about how to proceed . However, in the complex reality of clinical general practice, GPs are faced with a wide range of different child health concerns, which rarely offer room for suspicion when signs are unclear . 
Moreover, a Norwegian study of children as next of kin to parents with mental illness or substance abuse has shown that although GPs may have an important supportive role to play for ‘invisible’ children, they often miss the opportunity to do so, due to working conditions in general practice . Little is known about how suspicions of child maltreatment are managed in a general practice context. In this article, we seek to direct attention towards what happens in that space before reports to social services are made, or not made. In order to address this knowledge gap, we explore how Danish GPs and PNs deal with suspicions of child maltreatment, what actions they take, and which challenges they face. Our study was designed as a convergent, parallel, mixed methods approach , combining observations of consultations and interviews with GPs and PNs, and questionnaires with GPs. In order to understand different aspects of how suspicions of child maltreatment are managed in clinical practice, we wanted to combine quantitative and qualitative data to generate a more complete and detailed understanding of the topic under investigation. We combined a nationwide questionnaire completed by GPs and ethnographic fieldwork, consisting of interviews with GPs and PNs and observations in different general practice clinics, in the period October 2019 through June 2020. Data collection in the two studies was carried out simultaneously, and meetings were held continuously throughout the study period to discuss progress and provisional findings as they emerged. Questionnaire Data collection In October 2019 we sent a questionnaire to all registered doctors working in general practice in Denmark, exploring doctors’ knowledge, experience, attitudes, and personal involvement with child abuse and neglect. The respondents are presented in . We used a validated Danish translation of a questionnaire originally developed for dentists and dental hygienists . Data collection was completed in June 2020. The questionnaire In the present study we present the part of the questionnaire concerning management of suspicions of child maltreatment among GP doctors. Two questions explored the preferences of GPs in cases where they suspect child maltreatment. The first addressed concrete suspicions: whom the GP would prefer to report to, or discuss with, if he/she suspects child abuse or neglect. More than one reply was possible among the four suggested options (social services, police, colleague(s), caregiver, y/n), as well as free text. The second question had two arms, and explored whether the GP would prefer to discuss the case with a colleague before reporting to the social authorities (y/n), and whether he/she would prefer to discuss with other professionals (y/n, free text) before reacting to a suspicion in a hypothetical case of child abuse or neglect. Finally, the questionnaire included several factors (listed in ) and explored their possible influence (y/n) on the GPs’ decision to report to social services; more than one reply was possible. Three factors (fear of breaking the legislation, fear of doing something wrong, cooperation with the family) were added to the original questionnaire by the authors based on personal communications and experiences.
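To make the structure of the questionnaire items described above easier to follow, a schematic representation is sketched below. The field names and codings are invented for illustration and the item wording is paraphrased; they are not the fields of the actual instrument.

```python
# Hypothetical schematic of the questionnaire section described above (illustration only).
QUESTIONNAIRE = {
    # Concrete suspicion: whom the GP would report to or discuss with (multiple replies allowed).
    "report_or_discuss_with": {
        "options": ["social_services", "police", "colleague", "caregiver"],
        "free_text": True,
    },
    # Hypothetical case: two yes/no arms on discussing before reacting to a suspicion.
    "before_reacting": {
        "discuss_with_colleague_first": ["yes", "no"],
        "discuss_with_other_professionals": ["yes", "no"],  # plus free text
    },
    # Factors that may influence the decision to report (y/n per factor; multiple replies allowed).
    "factors_influencing_report": [
        "fear_child_further_exposed", "uncertainty_about_diagnosis",
        "cooperation_with_family",        # added by the authors
        "fear_of_breaking_legislation",   # added by the authors
        "fear_of_doing_something_wrong",  # added by the authors
        "impact_on_practice", "fear_of_litigation", "fear_for_own_family",
    ],
}
```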
Statistical analysis To explore possible demographic and geographic differences among GPs, we stratified the questionnaire responses according to sex, age, type of practice (group, single, collaboration between several practices; town/city, country, mixed), and geographical area (five Danish regions: Capital Region, Zealand, Central Region of Denmark, North Denmark Region, Southern Denmark Region). Fieldwork and interviews Data collection The qualitative data were based on five weeks of observation in five different general practice clinics, and 20 interviews with GPs and PNs, carried out by the first author between November 2019 and March 2020. Participating practices were strategically sampled , to ensure variation in practice type, geographical location, setting (rural, urban, provincial), and patient population (sociodemographic composition). The first author spent one week at each clinic with different doctors, nurses and patients, and observed hundreds of consultations covering a wide range of health-related problems, not only child consultations. This proved invaluable in developing a contextual understanding of how GPs and PNs think about and develop concerns, diagnoses, and care for patients. The first author also interviewed 20 GPs and PNs, all of whom had experience with child consultations. Some worked at the clinics where observations were carried out, and others were recruited from different clinics, locations, and patient populations through purposive sampling. provides an overview of the GPs and PNs who were interviewed. The choice to include both GPs and PNs was based on recent developments in Danish general practice, where more consultations are handled by practice nurses, such as child vaccinations and child well visits. The interviews focussed on experiences with reporting on child maltreatment, perceptions of what child maltreatment is and how it may manifest, and child welfare in the context of general practice. Data analysis All interviews were transcribed verbatim and, together with field notes, read several times to develop an overview of patterns and overarching themes. Subsequently, the first author carried out an open coding using Nvivo13 and developed 25 codes, which were grouped into five themes: what is wrong with the child; cooperation with other sectors; suspicion; the doctor-patient relationship, and general practice as a context. illustrates the coding process. The themes were then discussed within the research group, which was made up of two forensic specialists in child abuse, one GP, two paediatricians with experience in the field of maltreatment, and one anthropologist with research experience from general practice. Theoretical perspective To make sense of how, when, and why suspicions of maltreatment arise in general practice, studies have applied theoretical concepts such as intuition and gut feeling . We explore this through the concept of uncertainty which is increasingly recognised as a condition for practicing medicine and is intrinsic to making choices (on treatment, procedures, medication etc.). As noted by Professor of general practice Guri Rortveit; uncertainty ‘is a core concept of medical activity, especially in general practice, where illness is evaluated at an early stage and available diagnostic tools are limited’ [ ,p.135]. 
Within the social sciences, research focuses on understanding how uncertainty is dealt with and made sense of in social situations , what it means to people living in particular situations and contexts, and how it is experienced and managed in daily life . We try to bridge the medical and social approaches as we explore how the need for support amid feelings of uncertainty may be an important aspect of diagnosing child maltreatment in situations where there are no concrete biological signs or indications, but still ‘something’ which alerts the attention of the health professional.
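Before turning to the results, the stratified comparisons outlined in the Statistical analysis subsection amount to simple cross-tabulations of each response by respondent characteristics. A minimal sketch of one such tabulation is shown below; the variable names are hypothetical and the study's actual analysis software is not specified here.

```python
# Sketch of the stratified descriptive comparisons described above (illustration only).
import pandas as pd

gp = pd.read_csv("gp_questionnaire.csv")  # hypothetical de-identified export
# Assume the yes/no item "discuss_with_colleague" is coded 1/0.

for stratum in ["sex", "age_group", "practice_type", "region"]:
    pct = gp.groupby(stratum)["discuss_with_colleague"].mean().mul(100).round(1)
    print(f"% who would discuss with a colleague before reporting, by {stratum}:")
    print(pct, end="\n\n")
```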
Below we present the results from the quantitative and qualitative studies separately, and subsequently we discuss them in combination. Questionnaire Attitudes, preferences and factors affecting the decision to make a mandatory report We sent 3,429 questionnaires to all GPs in Denmark and 1,252 completed questionnaires were returned (response rate 37.6%). 512 (41%) of the respondents were male, and 1,233 (98%) had finished specialty training in general practice. Data on the background total Danish GP population ([Doctors and practice population 1997–2020: key figures from the members registry], 2020) are shown for comparison . Among the options suggested by the questionnaire, the GPs preferred to report or discuss with social services (94%); a colleague (63%); the caregiver (60%); and the police (10%) in case of suspected child abuse and/or neglect. Generally, no large differences were observed across strata, except for the number of GPs who would notify or discuss a case with colleagues (mean 63%; range 31–72%). This option was reported by less than one third of GPs working alone, and by more than two thirds of female and younger GPs. Most GPs (83%), especially female (86%), younger (90%), and those working in a group practice (88.8%), would rather discuss cases with (a) colleague(s) before making a report. A few GPs would discuss the issue with the child’s school, kindergarten, or daycare, the child’s caregiver or family, or with someone from social services. In general, no large differences were observed across sex, age, type of practice, or region . depicts the percentage of responders reporting each of the factors affecting the GPs’ decision to make a report about child maltreatment to social services. Around half of responders reported fear that the child will be further exposed to abuse and/or neglect, uncertainty about the correct diagnosis, and collaboration with the family. Less than a tenth were concerned about the impact on their practice, fears of litigation, or fears for their own family. Replies did not differ across types of practice and geographical regions (data not shown). depicts the percentage of responders reporting each factor according to age and sex. Some factors showed small sex differences and a pattern of decline with age for both sexes (potential impact on GP practice, fear for own family, fear of litigation, notification procedures unknown). However, the overall percentages of GP doctors reporting each factor were roughly of the same magnitude across age and sex strata, and no clear patterns were observed. Interviews and fieldwork – managing suspicions in general practice Two overarching topics were identified in the qualitative data: rising suspicion (‘something’ not right) and managing suspicion. One key point that stands out is that it was often impossible for GPs to figure out what was going on with a child from a single consultation, and one strategy they used was to make follow-up appointments to keep track of the child. This safety-net approach was widely used to maintain the relationship with the child and the family causing some level of concern; for both GPs and PNs, one of the greatest challenges was the fear that losing this trust and connection with the family could potentially harm the child. I was aware of it even before I had the consultation and thought that it was all really rather strange.
So I made a follow-up appointment and said well we just have to follow up on this, I gave some other reason, and I saw him a few times after that, and I still thought something was off, but I didn’t think that there was an obvious reason…. I also asked another GP to take a look, just to take a look and see if he noticed anything. But we didn’t think that there was anything that we could base a report on. (GP 9) In addition to safety netting, as noted in the above quote, most GPs preferred discussing their suspicions with a colleague before making a report, when they were unsure about what they had observed/sensed, or when they experienced patient cases where they did not know how to act. In these cases, practice nurses would often call their GP into the consultation to observe. I won’t say that we always do it but in those more difficult cases, I think we need to discuss it, in order to be able to deal with it ourselves. Because when a suspicion is raised it is nice to just get a feeling that it is not just me being paranoid. So, we typically discuss it over lunch, or we knock on each other’s door if we need another set of eyes. (PN 5) When the concern was particularly vague or if it gave rise to increased uncertainty, the colleague called upon was often a hospital specialist in the paediatric department. Well, sometimes it can be the way that parents explain the symptoms.… that they are overly concerned or not concerned or when something appears unusual in the interaction… I don’t know if you necessarily think abuse, but one thought could be whether this child is cared for properly. Are the parents able to provide support when they are suffering from whatever… are in pain and so on. And that can also be a reason to refer to the pediatric department where they are able to get that support, right, if they need it. (GP 11) Thus, referring to the specialists was used as a means of support, and rather than reporting GPs would often refer, when they had the feeling that something was wrong with the child or a family, but they were unsure about how to pinpoint their uncertainty. Sometimes I chose to refer because I am really uncomfortable with the situation. And at the paediatric department they will be like, well there is nothing here…. No but we do have to observe the situation for more than the 10 minutes we have here in general practice. (GP 2) One finding that featured throughout the interviews was how GPs and PNs were centrally placed in terms of being the patients’ health and care coordinator [ tovholder ]. If the families had nowhere else to go, they would seek help from their GP. Just the other day one of my patients came in, a man, and he had just been contacted by the social services, because someone had reported that they thought that he was not caring properly for his children. And he came to me to ask what to do in that situation right. In fact, we are kind of society’s dust bin, right, if no one else will help then your GP will. (GP 7) GPs and PNs valued this cooperation with patients and families and referred to themselves as one of the most consistent figures in the lives of vulnerable families. Not losing touch with those families was important to them. I think that we are very central because we are so stable…. In fact, we are more stable than the people from the municipality right. They know us and we know the families – for different things. Not only because of the child but we know the father for his issues and the mother for hers. 
And the child well visits are also a way of forming a bond. So we are considered more as on their side. (PN 1) In most of the interviews, the GPs in particular pointed out that there are challenges inherent in the interaction between general practice and social services in the municipality, which is the unit responsible for managing reports of suspicions of child maltreatment. One barrier was communication across the sectors. GPs were often unaware of the actions taken by social services after a report was made. There are several challenges with cooperation on child care ……. For instance that we get no response on our reports…. well now at least they have started sending an acknowledgement of receiving the report. (GP 4) Most GPs found cooperation with social services difficult, and the lack of response was frustrating. The GPs did not consult with social services when in doubt, and primarily reported on cases where there were concrete observations; in most cases, reports were made in cooperation with the family, as a way of getting help to a family in need.
We combined questionnaire and ethnographic data and explored the ways suspicions of child maltreatment are managed in Danish general practice. Our results show that most GPs (94.2%) prefer to report to social services in cases of suspicion of child abuse and/or neglect. However, before making the report many GPs prefer to discuss the case with a colleague, especially GPs who were younger, female, and working in group practices. The PNs never made the referral on their own; this was always done by the GP. However, the management of suspicions of child maltreatment was similar across the professions. Generally, the findings from the questionnaires in our study were similar across type of practice and region.
The qualitative data supported and expanded these findings, highlighting the challenges of communication with social services and the very limited opportunities for collaboration around the child and family. The questionnaires showed that only a few GPs (8.8%) preferred discussing cases with social services before making a report, possibly because of a lack of response or feedback. The ethnographic data provided an in-depth understanding of the feelings of uncertainty expressed by GPs and PNs, especially when their concerns were based on a feeling that something was off, and not on clear-cut signs. In these cases they did not seek advice or support from social services, but when they were in doubt about their findings, they preferred to refer to the paediatric department or discuss their concern with colleagues, while trying to maintain their relationship with the family. The GPs and PNs stressed that they were often one of the few professionals who had a longstanding relationship with the families who struggled the most, and making new appointments with the child or family was used actively as a strategy to keep track of the child. The two different methodologies uniquely supported each other. The ethnographic data provided in-depth perspectives on the findings of the questionnaire responses and elucidated the difficult processes around reporting to social authorities from a GP setting. These perspectives seemed supported by the responses of most participants to the questionnaire. The questionnaire response rate was, as seen in similar studies, quite low, thus raising questions of representativeness and generalisability of the results. Although the responders were similar to the background population, some underrepresentation of GPs over 60 years of age and GPs working in the Capital Region occurred. We have no data to evaluate whether the responders differed according to their attitudes and experiences in dealing with child maltreatment compared to the non-responders and can thus not preclude selection bias with respect to this. The participants in the ethnographic study were selected based on practice type, patient population and geographic location, which may have reduced the potential selection bias. Combining the two data sources should of course be considered with caution, as the interviews and observations should not be read as a validation of the questionnaire responses, nor vice versa. The ethnographic data do, however, provide context and depth to the overall patterns that can be observed in the questionnaires, and our results should be interpreted from this perspective. According to both questionnaires and interviews, on most occasions the GPs discussed their concerns and their intention to make a report with the child’s caregivers and made the report in collaboration with the family, a finding reflected in another recent study . Although this transparency seems positive, and points to the negotiations around patients’ life circumstances, specific situations and contexts, GPs may still be unable to follow up on the family after making a report. Without feedback or support, other than referring the family to social services or the hospital paediatric departments, and without established channels to consult with other professionals on their concerns, a core concept expressed by the GPs in our study was the feeling of uncertainty.
This feeling may be further enhanced by relatively limited experience with cases of abuse and neglect in general practice, suggesting that other doctors may be the most important network GPs somewhat haphazardly use to deal with difficult cases. The questionnaire did not differentiate which colleagues, from hospital or practice, the GPs preferred to discuss cases with, but in the interviews, it was obvious that many did rely largely on colleagues from their own practice, although specialist departments at the hospital were used as a safety net. The GPs used both telephone advice and referral to the specialist departments as second opinions, rather than referring to the social services, which may further delay the assistance to a child in need of help. Interestingly, GPs working in single practices did not consult colleagues to the same extent as GPs from group practices, which may of course reflect the significance of availability when GPs involve colleagues. Nevertheless, it may also indicate that they consult less frequently with the specialist departments, which may be considered an example of how reasonable suspicion means different things to different people, as suggested by the authors of a US study . They show that young females with fewer reports during the past 2 years had a more substantive and conceptual understanding of reasonable suspicion. In the pilot study we carried out prior to this study, we also found the significance of experience reflected in tendencies to report and refer on suspicions . Contrary to other studies , considerations and fear of personal or professional impact seemed to have little influence on GPs’ decisions to report. This may reflect both willingness to run risks while caring for patients and feeling safe to make difficult choices when necessary. The key factors affecting GPs in their decision to make a report were either centred around the child and the family or related to uncertainty. They included fear of triggering an unstable family and thereby causing further harm to the child, fear for future collaboration with the family, fear that the child could be worse off if social services intervened, or an inherent uncertainty about the diagnosis. Many of these factors are hard to cover in guidelines, which are the most often suggested tools to assist clinicians in situations of uncertainty. As pointed out by Stolper et al. [ ,p.122], while GPs are often blamed for low reporting rates for child maltreatment, this does not mean that the detection rate is low. Insights from social sciences have pointed out how ‘ control and uncertainty are always negotiated within social relations’ [ ,p.11], which may be related to how GPs try to improve the child’s situation by making use of the doctor-patient relationship and by involving other professionals, such as paediatric departments. Our results indicate that it is not necessarily that GPs and PNs do not discover or suspect that things are not right. They manage their uncertainty by referring to specialists, by working on relations with the family, or by watchful waiting. However, if the GPs are to act early, this uncertainty must be acknowledged and perhaps incorporated into guidelines and teaching curricula for medical students and GP trainees, in order to enable proactive attention to child maltreatment.
If uncertainty is taken seriously as a central and intrinsic aspect of acting on suspicions of child maltreatment, we may be able to better assist GPs in acting early and proactively in those situations where there are no cuts or bruises, but still ‘something’ which alerts their attention. Moreover, it seems important to establish a better relation with, and understanding of, the responsibilities of social services, what reporting to them means, and how this might help the child and the family. There is little doubt that the complex reality of general practice provides an important but also difficult point of departure for detecting child maltreatment. It seems vital to improve the communication, transparency, collaboration, and feedback between general practice and social services in order to improve child welfare. GPs and PNs often feel left to themselves in managing their suspicion and do not consult with social services when in doubt, although social services are the responsible authorities for children at risk of maltreatment. Reacting to the suspicion of child maltreatment in general practice holds the potential of caring for children who are subjected to neglect and/or abuse much earlier than when these children are seen by doctors at the more specialised departments, who rarely meet the child until the impairment is severe.
Danish general practitioners as gatekeepers for gynaecological patients in regions with different density of resident specialists in gynaecology: in which situations and to whom do they refer? A cross-sectional study
da26bb59-75bc-4b21-920c-fe44fccb9aa7
10088933
Gynaecology[mh]
In many European countries, the General Practitioner (GP) acts as a professional medical front line between the wishes and needs of the population on the one hand and access to the specialised healthcare system on the other . This gatekeeper system and GPs having a list of patients enrolled at their practice to ensure continuity of care have been seen as part of a comprehensive healthcare system and as a tool to ensure equal access for those in need of care . In the course of a year, 86% of the Danish population comes into direct contact with their GP . The composition of the population enrolled at the GP’s list and those who actually contact the GP have an impact on the likelihood of referral to the various specialties . Nevertheless, in Danish as well as in international studies, referral percentages are very similar, with 4–6% of GP contacts being referred to a resident specialist or to a Hospital/Outpatient Clinic (HOC) . The GP referral patterns to resident specialists vary. A wide range of external conditions such as local access to resident specialists, social conditions and the general morbidity of those enrolled at the GP practice have been shown to have an impact on the proportion of patients that are referred . Therefore, referrals occur for very different reasons and at different points in time during a patient contact. In addition, in Denmark there is an unequal distribution of specialists between health care regions, which might shift the referral pattern towards hospital care. Within the gynaecological specialty, the GP can refer patients either to a HOC or to a Resident Specialist in Gynaecology (RSG). It is unknown in which situations the GP refers gynaecological patients, and also whether these patients are referred to an RSG or to the HOC. There is also a lack of knowledge as to whether the density of RSG influences the referral pattern; moreover, it is not known whether differences in the density of RSGs result in an inequality in the specialist treatment of gynaecological diseases. The present study investigated the referral patterns of GPs referring gynaecological patients to the RSG or to the HOC in specific situations according to density of RSG. Further, we examined whether patients were referred to the HOC or to the RSG, or whether they were treated by the GP her/himself, depending on the density of RSGs, for six benign gynaecological diagnoses. Setting The Danish health care system is divided into five administrative regions which are defined geographically as the Capital Region (population ∼1.9 million), the Region of Zealand (∼0.8 million), the Southern Region (∼1.2 million), the Central Region (∼1.3 million), and the Northern Region (∼0.6 million). These regions govern primary and secondary health care services provided by GPs, hospitals, and resident specialists. GPs serve as gatekeepers to secondary care, including referrals to resident specialists and inpatient and outpatient hospital care. The Danish healthcare system is based on free and equal access to treatment and is mainly tax financed . Each region politically decides how many resident specialists they require within each discipline, such as in gynaecology and obstetrics, but the number of female individuals per RSG varies considerably between regions, going from approximately 20,000 in the Capital Region of Denmark to approximately 145,000 in the North Denmark Region . Design This was a cross-sectional study based on questionnaire data from GPs.
Study population A total of 100 GPs were randomly selected from each of the five Danish regions with the help of a distribution key based on the total number of doctors in the respective region. Five hundred GPs were invited to take part in the questionnaire study. Questionnaires The anonymised questionnaire comprised questions about demographic data of the GP, including age and sex. Furthermore, it asked in which situations the GP referred gynaecological patients and to whom (HOC or RSG). Six benign gynaecological diagnoses were provided as examples: (i) excessive and frequent menstruation with regular cycle, (ii) Lichen simplex chronicus, (iii) postmenopausal bleeding, (iv) menopausal and perimenopausal disorder, (v) dyspareunia, and (vi) insertion of (intrauterine) contraceptive device (IUD). The GP was asked which diagnoses (s)he treated her/himself or referred to a RSG or to the HOC. The questionnaires were field tested before use. Three GPs were interviewed regarding their understanding of the questions, and the questionnaire was thereafter completed by five additional GPs. As the GPs deemed the questions understandable, no changes were made. For a list of questions, see Appendix Table A1 . Data collection The GPs received the questionnaire by postal mail in September 2020. A cover letter containing information on the study and a postage paid return envelope were enclosed with each questionnaire. The returned questionnaires were entered into Research Electronic Data Capture (REDCap) by two independent persons and merged by a third person. Study data were collected and managed using REDCap hosted at the University of Southern Denmark. Data analysis Characteristics of responding GPs were reported as numbers and proportions for each of the five regions. Differences between the responding GPs in each region were tested using Pearson’s Chi-Squared test. Referral patterns of gynaecological patients from GPs overall and for six specific reasons were reported as numbers and proportions. Associations between GPs’ reasons for referring to RSG, HOC or both and density of RSG were calculated as odds ratios (OR) with 95% confidence intervals (CI) using generalized linear models for the binomial family. Likewise, the associations between GP referral to RSG, HOC or keeping patients in the GP’s practice, and density of RSG were calculated for the six specific diagnoses. Data analyses were conducted using STATA statistical software 16 (StataCorp, College Station, TX, USA). Ethical approval According to EU's General Data Protection Regulation (article 30), the project was listed at The Record of Processing Activities for Research Projects in Southern Denmark Region (j. no: 19/19630). According to the Consolidation Act on Research Ethics Review of Health Research Projects, Consolidation Act number 1083 of 15 September, 2017 section 14 (2), notification of questionnaire surveys or medical database research projects to the research ethics committee system is only required if the project involves human biological material. Therefore, this study was conducted without an approval from the committees (J.no.: S-20192000-78).
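To make the Data analysis description above more concrete, the lines below give a minimal, illustrative sketch in Python of how odds ratios with 95% CIs for referral to an RSG might be estimated from a binomial (logit) generalized linear model with region as the exposure. This is not the authors' analysis (which was run in Stata); the data are simulated and all variable names are hypothetical.

# Illustrative sketch only (not the authors' Stata analysis): odds ratios with 95% CIs
# for referring to an RSG, comparing each region with the Capital Region.
# The data are simulated and the column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions = ["Capital", "Zealand", "Southern", "Central", "Northern"]
df = pd.DataFrame({"region": rng.choice(regions, size=347)})  # one row per responding GP

# Simulated binary outcome: 1 = GP reports referring this diagnosis to an RSG
p_refer = {"Capital": 0.80, "Zealand": 0.70, "Southern": 0.65, "Central": 0.60, "Northern": 0.40}
df["referred_rsg"] = rng.binomial(1, df["region"].map(p_refer))

# Binomial GLM with a logit link; exponentiated coefficients are odds ratios
model = smf.glm(
    "referred_rsg ~ C(region, Treatment(reference='Capital'))",
    data=df,
    family=sm.families.Binomial(),
).fit()

or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table.round(2))

Exponentiating the coefficients and their confidence limits, as in the last step, is what yields odds ratios with 95% CIs of the kind reported in the Results below.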
Of the 500 GPs who received a questionnaire, 347 GPs (69.4%) replied. Of these, 61.4% were female. Regarding age, 51.2% were younger than 50 years, and 76.3% were younger than 60 years. The majority (58.8%) had more than 10 years of professional experience as GPs and most commonly worked in practices with two to three doctors (45.2%). Most practices had both female and male GPs (52.3%). There were no statistically significant differences in any GP characteristics between regions . Referral patterns in specific situations As shown in , 62.9% of GPs referred gynaecological patients to RSG, 9.6% to hospitals/outpatient clinics, and 27.5% replied that they referred equally to both. In case of suspected malignancy or suspected severe illness, GPs referred mainly to the HOC. The majority of GPs preferred to refer their patients to RSG with regard to waiting time, patients’ wish, service and distance. In addition, 85.1% of GPs responded that they would prefer to refer patients to RSG, if waiting time and distance were the same as for HOCs. shows that in regions with a lower density of RSGs than the highest, GPs less frequently referred patients to the RSG. In relation to waiting time and distance, as the density of RSG decreased, the probability of being referred to hospital increased. Referral patterns according to diagnosis As can be seen from , with regard to the six benign gynaecological diagnoses, GPs were more likely to refer to the RSG than to the HOC, and more likely to carry out the treatment themselves than to refer patients to the HOC in all diagnoses other than Postmenopausal bleeding. Apart from the diagnoses of Menopausal and perimenopausal disorders and the Insertion of IUD, the general practitioners were more likely to refer patients to RSG than to perform the treatment themselves. demonstrates, for the six benign gynaecological diagnoses, that GPs in the region with the lowest density of RSGs (Northern Region) referred to a RSG to a lesser extent than in the region with the highest density (Capital Region). On closer inspection of the table, this difference was significant for Excessive and frequent menstruation with regular cycle, Lichen simplex chronicus, Postmenopausal bleeding, Dyspareunia and Insertion of IUD. Insertion of IUD was more often treated by the GPs themselves in regions where the density of RSG was not the highest. The same applied to patients with Lichen simplex chronicus, although these patients were also referred to the HOC more frequently in regions with a lower density of RSG.
Statement of principal findings This cross-sectional study showed that the referral patterns of GPs were highly dependent on the density of RSGs. The higher the density of RSGs, the more likely it was that gynaecological patients were referred to the RSG, and conversely, the lower the density of RSGs, the more likely it was that gynaecological patients were referred to the HOC. GPs most often referred their gynaecological patients to the HOC in cases of suspicion of cancer or other severe disease. Strengths and weaknesses of the study Because none of the previously existing questionnaires we could find on this topic addressed all the items we wanted to include in this study, we developed a study-specific questionnaire. This ensured that the relevant questions were included and that the context was given. We used paper questionnaires as it was not possible to obtain a list of email addresses of the GPs due to the General Data Protection Regulation (GDPR). Paper questionnaires have shown declining response rates over the past decade. A low response rate may induce selection bias because respondents may differ systematically from non-respondents, and the study population will thus not represent the target population . However, we achieved a fair response rate of 69.4%, and 61.4% of respondents were female compared with the Danish national average of 58.1% . Thus, the risk of selection bias must be considered low. However, because we did not have access to any information on the targeted study sample, we could not perform a responder – non-responder analysis. For logistic reasons, we selected and invited 100 GPs from each Danish region. This corresponds to 15% of all GPs in Denmark. However, as the number of GPs in the different regions is not the same in absolute numbers, this resulted in a different percentage of invitations between regions, ranging from 9.7% (Capital Region) to 35.1% (Northern Region).
Since GPs in Denmark, regardless of the region in which they practice, have the same education at the respective time in their career and the distribution of GPs in the regions is almost the same with regard to sex and age , we believe that this study sample is generalisable to the GP population in its entirety. The fact that we found no differences in GP characteristics over the regions strengthens the credibility of our results. The present study has been carried out in Denmark under Danish conditions in the health system. However, the results should be comparable with health systems that are similarly structured (e.g. with the GP as gatekeeper), especially the other Scandinavian countries; thus, we assume that the conditions would be similar due to the great cultural proximity. Findings in relation to other studies Women with gynaecological problems who are referred to an RSG are always examined by a specialist, but when referred to an HOC, they would often be examined by a doctor who is not yet a specialist but still in training. To compensate for this, HOCs are organized such that doctors in training can always call in a specialist , although this depends on whether the examining doctor decides to call a specialist or not. Due to lack of experience, it may happen that the doctor in training misjudges the situation and does not call a specialist although it would be indicated. Thus, this may delay the correct diagnosis of a serious disease . This difference means that unless all patients have equal access to relevant care, there would be an inequality in the quality of care depending on which part of the country they live in, which, in turn, can have an impact on the health of this group of the population. Our study demonstrated that GPs prefer to refer their gynaecological patients to RSG; only 9.6% of GPs refer their patients exclusively to the hospital, although most would refer their gynaecological patients directly to the hospital if they suspect cancer or another severe diagnosis. We examined five geographic regions with different densities of RSG and found that the referral pattern depends on the density of RSG. These results are in agreement with previous studies that have shown that if the number of resident specialists increases, more patients are referred to a resident specialist and at the same time fewer patients are referred to hospitals . With regard to the diagnoses examined, the present study shows that the referral pattern is strongly dependent on the density of RSG in the local region, and for five of the six gynaecological diagnoses examined, there was a significantly lower chance for the patient to be referred to an RSG in the region with the lowest density compared to the region with the highest density of RSG. The national average distance from the patient’s place of residence to the hospital is greater than the average distance from the patient’s place of residence to the RSG in the region with the highest density of RSG. This results in a longer transport time and more costs for the patients who live in the region with the lowest density of RSG. This can have detrimental effects, as it has been shown in previous studies that there is an association between travel distance and cancer prognosis . We also know that the distance to the hospital is linked to an increasing diagnostic interval for cancer . As far as we know, this has not been investigated in relation to the density of RSGs.
However, since the RSG is a specialist, it is not unlikely that such studies would obtain similar results. When delays are discussed in the diagnosis of cancer, for example, patient delays, GP delays, and system delays are mentioned , but the density of resident specialists has not been taken into account, although it is known that increased availability of specialist care translates into higher referral rates . Possible mechanisms and implications for clinicians or policy makers In regions with a lower density of resident specialists in gynaecology, women are less frequently referred to a resident specialist in gynaecology. If there are regions in the same country with different densities of resident specialists in gynaecology, one must assume that the population will have an unequal opportunity to have a specialist examination. This results in an injustice in the healthcare system within the same country. Whether or not this inequality should be accepted is a political decision, but our results indicate that there are significant differences between regions that may have an impact on the gynaecologic treatment of women. Clearly, further studies are needed to determine the exact consequences of the difference in referral patterns in terms of treatment outcomes. However, the results from our study should already facilitate the future planning of health care in gynaecology with the aim of reducing inequality in the access to RSG.
Prescription of potentially addictive medications after a multilevel community intervention in general practice
2e73638f-5f6a-41b2-8b16-cb53286caf73
10088976
Family Medicine[mh]
Opioids, benzodiazepines and z-hypnotics (potentially addictive medications, PAMs) are widely prescribed to patients with somatic conditions, mental disorders, and addiction-related problems . Used on indication, these medications are well suited to alleviate symptoms. However, prolonged use is associated with tolerance development, reduced effect, withdrawal reactions, and rebound effects upon discontinuation . PAMs can also lead to increased risk of adverse health outcomes, including falls, fractures, memory impairment and vehicle accidents . For the patients, the harms of non-therapeutic usage of PAMs are likely to outweigh any benefits obtained, and alternative non-pharmacological therapies have shown better or equivalent efficacy . Reduced use of PAMs can therefore provide major benefits both for the individual patient and for public health. According to the Norwegian Institute of Public Health, concomitant use of PAMs is an especially concerning area where research efforts should be increased . Studies have shown unfavorable tendencies in prescription of PAMs . For opioids, prescription rates have increased across the three Nordic countries during the last decades . A review of prescriptions of PAMs in the Norwegian adult population from 2005 to 2013 found that over 20% of the patients who were prescribed z-hypnotics continued to take the medications throughout a four-year period, and 10% of these patients received the medications for daily use . Internationally, the increase in long-term usage of PAMs has prompted a worldwide discussion about how to ensure more targeted, therapeutic use. In Norway, general practitioners (GPs) have the main responsibility for prescription, and eventual continuation and withdrawal, of PAMs . A meta-synthesis of GPs' experiences and perceptions of prescribing found that deliberations and decisions related to PAMs prescribing are complex and demanding . GPs can differ in perceptions of their role, responsibility, and attitudes toward PAMs, and moreover, perceive a lack of alternative treatment options for these patients . A crucial starting point for good treatment and therapeutic use is to ensure that GPs have good knowledge of PAMs: effect profile, indications, contraindications, side effects, and dangers of tolerance development, harmful use, and iatrogenic addiction syndrome . In addition, they need good clinical communication skills and knowledge of alternative treatment methods . Through careful prescriptions and, where possible, by avoiding initial prescription and instead using non-pharmacological treatment strategies, PAM prescriptions can be reduced and dependence avoided . Both nationally and internationally, GP-targeted educational interventions, patient information letters, psychological support, and pharmacological substitutions have each been found to lead to deprescribing of PAMs . However, follow-up in these studies has been limited to the first year or less, while the long-term effects of the interventions are unknown. Based on an initiative from the GPs in Molde Municipality in Norway, a multilevel community intervention was initiated by the municipal chief physician in 2018, to improve the quality of prescription practice of PAMs. The aim of the intervention was to jointly increase professional and public awareness and knowledge on PAMs and their therapeutic use, and to reduce non-therapeutic prescribing.
The objective of this study is to evaluate the long-term results of the multilevel community intervention, using indicators as total amount prescribed and long-term prescription. Study setting Molde is a municipality in the west-coast of Norway, comprising 32,000 inhabitants. In 2017, all regular GPs in this municipality were asked through surveys and interviews with the municipal chief physician to address their main concerns and goals for improvement of their practice . Nearly all GPs in Molde aimed to improve their knowledge of PAMs to ensure that prescription practice was in accordance with clinical recommendations. The municipal chief physician therefore joined forces with the local GPs and designed a multilevel community intervention aiming to improve prescription practices and reduce non-therapeutic prescriptions of PAMs. The intervention The multilevel community intervention was implemented as a public health intervention, not designed as a research project. The intervention was conducted in 2018 and consisted of several parts, targeting both the GPs, patients, and the public. Identical routines for prescriptions, accompanied by adapted medical note templates, were implemented among all the GP offices. Upon prescription renewal, GPs were encouraged to convert the patient usage of PAM into the average daily dosage, and to have a face-to-face consultation with the patient. Patients received information about therapeutic use of PAMs, tapering recommendations, and non-pharmacological treatment options during consultations, upon prescriptions renewal, and through patient letters. The public awareness was raised through information provided in the local newspaper and at the municipality’s website. Further details are reported in the Supplementary File (1 ). Recruitment of participating GPs to the research project Two years after the multilevel community intervention was implemented in 2018, all 36 GPs in Molde Municipality were invited to participate in this follow-up evaluation project by contributing their anonymized prescription data on PAMs for the period 2017–2027. Of the 36 GPs, seven had changed their workplace, one retired, two did not answer, and 26 agreed to participate. Ethics All participants received information about the study and on voluntary participation, in line with standards given by the Norwegian Center for Research Data (NSD). GPs who consented to participate in the study provided a written authorization to obtain their prescription data from the Norwegian Prescription Database. According to the Regional Committee for Medical and Research Ethics, this project did not require further ethical approval (Reference 230089, provided on February 05, 2021). Study variables Prescription data for each of the consenting physicians were obtained for the time-period 01 January 2017–31 December 2020, covering opioids (ATC code N02A), benzodiazepines and benzodiazepine derivates (ATC codes N05BA, N05CD and N03AE), and z-hypnotics (ATC code N05CF). As several of the physicians had periods of absence from practice, prescription data for each physician were used only for months where they were working as a GP. Average amounts prescribed for each of the above-mentioned medication groups were calculated using defined daily doses (DDD) for each physician for each year . These numbers were subsequently divided by the number of patient-years provided for (i.e. 
average size of the physicians’ patient population in the relevant year multiplied by the number of months in practice and divided by 12). We similarly calculated the average number of DDDs per patient per year prescribed excluding any palliative prescriptions to terminal patients (reimbursement code § 2–90). For each prescription, the patient’s sex and birth year was available. However, we did not have information on the distribution of age and sex among each physician’s regular patients. To be able to compare prescriptions between sexes and age groups, we therefore assumed the distribution in each patient population to be similar to the distribution in the general population in Molde. We initially categorized age in 20-year bands but chose to merge groups 0–19 and 20–39 years of age, as the number of DDDs in each of these groups were low. We calculated the number of DDDs prescribed for each patient, and for each physician, calculated the number of patients per 1000 patients per year who received 10 or less, 11–30, 31–90, or more than 90 DDDs, respectively, of each group of PAMs. We also calculated how many patients received prescriptions from only one, two or all three groups of PAMs, per 1000 patients in the physician’s patient population per year. To be able to compare the time trend in our study sample to national trends, we also retrieved publicly available national prescriptions data as well as population size in five-year age groups for the same ATC-codes as included in our study . Statistics We performed several analyses to evaluate changes in the prescriptions over time, details are outlined in the Supplementary File . We first graphed the average unadjusted number of patients receiving categories of DDDs of PAMs per year, weighted by person-time. Second, we estimated concomitant use of different PAMs (i.e. opioids, benzodiazepines, or z-hypnotics) each year. Third, for the main results, we estimated unadjusted size of and changes in prescriptions from 2017 to 2020 using a linear mixed model with random intercept to account for dependence in observations within physicians. To communicate these results, we also calculated the magnitude of change relative to prescriptions in 2017 by simple arithmetic. Fourth, we graphed the estimated number of DDDs per patient per year within groups of age and sex for each PAM. We chose not to present CIs for these numbers, as there is substantial uncertainty about age and sex distributions among all patients. Fifth, to assess whether the difference in prescriptions over time depended on patients’ age or sex, we performed additional analyses adjusting for age group and sex and used likelihood ratio (LR) tests to compare models with or without interaction terms between time and age group or sex, respectively. Finally, we compared the prescriptions in our study sample to national trends, using Poisson regression analyses adjusted for sex and age in five-year categories. In additional analyses, we first excluded palliative prescriptions and second estimated changes using oral morphine equivalent doses of opioids. All CI are set to 95%. Data were imported to and analyzed using STATA 16 and 17.
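To make the outcome construction and the main model concrete, the following is a minimal sketch in Python rather than the authors' STATA code; the column names and the simulated data are assumptions for illustration only. It computes DDDs prescribed per patient-year for each GP, weighting each GP's patient population by the months in practice, and fits a random-intercept linear mixed model with 2017 as the reference year.

```python
# Minimal sketch (assumed data layout, not the authors' analysis code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# One row per GP per year: total DDDs prescribed, list size, months in practice.
rows = []
for gp in range(1, 27):
    for year in range(2017, 2021):
        list_size = rng.integers(800, 1500)      # patients on the GP's list (assumed)
        months = rng.choice([6, 9, 12])          # months in practice that year (assumed)
        ddd_total = rng.normal(16 - 1.5 * (year - 2017), 3) * list_size * months / 12
        rows.append({"gp_id": gp, "year": year, "list_size": list_size,
                     "months": months, "ddd_total": max(ddd_total, 0)})
df = pd.DataFrame(rows)

# (1) DDDs per patient-year: divide by the list size scaled to the fraction of the year worked.
df["patient_years"] = df["list_size"] * df["months"] / 12
df["ddd_per_patient_year"] = df["ddd_total"] / df["patient_years"]

# (2) Linear mixed model with a random intercept per GP; 2017 is the reference year,
# so the coefficient for 2020 estimates the change in DDD per patient from 2017 to 2020.
model = smf.mixedlm("ddd_per_patient_year ~ C(year)", df, groups=df["gp_id"])
result = model.fit()
print(result.summary())

# Relative change versus 2017, computed "by simple arithmetic" as described above.
baseline_2017 = result.params["Intercept"]
change_2020 = result.params["C(year)[T.2020]"]
print(f"Estimated relative change 2017->2020: {100 * change_2020 / baseline_2017:.0f}%")
```

In such a model the 2020 coefficient is the absolute change in DDD per patient relative to 2017, and dividing it by the 2017 intercept gives the relative change reported in the results.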
Our study sample includes data from 20 GPs in 2017, increasing to 25 GPs in 2020, as one or more GPs were absent each year . Each GP had on average 1147 patients (standard deviation (SD) 342) in their patient population in 2017, decreasing slightly to 1086 patients (SD 293) in 2020. Overall, assuming the patient population not to include individuals from other municipalities, our data covers between 67 and 85% of the population. Around 5% of opioid prescriptions were for palliative patients, while 0.6–2.4% of benzodiazepine prescriptions and only up to 0.5% of z-hypnotic prescriptions were for palliative patients. For opioids and benzodiazepines, GPs most often prescribed 10 DDD or less per patient per year . The number of patients receiving different amounts of opioids was fairly constant from 2017 to 2020.
The number of patients receiving 31–90 DDDs of benzodiazepines decreased from 16 (95% CI 11–21) per 1000 patients in 2017 to 11 (95% CI 8–14) in 2020 and the number of patients receiving more than 90 DDDs decreased from 9 (95% CI 7–11) per 1000 patients in 2017 to 7 (95% CI 5–8) per 1000 patients in 2019. The number of patients receiving large amounts of z-hypnotics also decreased over time, with 34 patients per 1000 (95% CI 28–42) receiving more than 90 DDDs in 2017 compared to 24 (95% CI 19–28) per 1000 in 2020. The number of patients receiving all three groups of PAMs was low and stable (5 per 1000 in 2017, 4 per 1000 in 2018 to 2020, Supplementary Table S1 ). The number of patients receiving two different groups of PAMs declined slightly from 28 per 1000 in 2017 to 26 per 1000 in 2018 and further to 24 per 1000 in 2019 and 2020. Estimated average total prescription of PAMs was reduced with 4.5 DDD (95% CI 3.3–5.6) per patient in 2020 compared to 2017 . This corresponds to an estimated 27% reduction. Similarly, prescriptions of opioids were reduced by 0.7 DDD (95% CI 0.1–1.2), benzodiazepines by 0.8 DDD (95% CI 0.5–1.2) and z-hypnotics by 2.9 DDD (95% CI 2.2–3.7) per patient comparing 2020 to 2017. This represents 17%, 27% and 30% estimated reduction in opioids, benzodiazepines, and z-hypnotics, respectively. There were small differences between the years 2018 to 2020 for each group of PAMs, with no clear trend over these years. Prescriptions were still slightly lower in 2019 than 2020. Results were similar when excluding palliative prescriptions ( Supplementary Table S2 ). Opioid prescriptions were lower throughout the observation period when using oral morphine equivalent doses, while the pattern of changes over time were similar to main results ( Supplementary Tables S3 and S4 ). GPs prescribed more of each PAM to female patients compared to male patients . Prescriptions of benzodiazepines were notably higher among women aged 80 years and older, and prescriptions of z-hypnotics increased with age among both men and women. We found weak statistical evidence that the decline in prescriptions of z-hypnotics depended on age (LR-test p = .085) with a greater decline in older age ( Supplementary Table S5 ). The statistical evidence for differences in prescription changes between age groups was weak for opioids and benzodiazepines (LR-test p = .9), as was the evidence for differences in changes between men and women (LR-test p .4–.8). We still note an observed 28% decline in prescriptions of benzodiazepines from 14.4 DDD in 2017 to 10.3 DDD in 2020 per 1000 women over 80 years of age . Compared to all of Norway, the GPs in our study sample prescribed 26% less opioids (95% CI 26–27%), 38% less benzodiazepines (95% CI 37–38%) and 16% less z-hypnotics (95% CI 15–16%) in 2017 ( Supplementary Table S6 ). For each PAM, there was a national trend of lower prescriptions for each year, most prominently so for benzodiazepines, where prescriptions were 13% lower in 2020 compared to 2017. Still, the reduction in prescriptions from 2017 to 2020 was around 20 to 30% greater in our study sample compared to all of Norway. This study showed a 27% reduction in prescription of PAMs after a multilevel community intervention targeting both GPs, patients, and the general public. The reduction in prescriptions was substantial for each class of PAMs, though somewhat greater for z-hypnotics and benzodiazepines, compared to opioids. These changes clearly exceeded a national trend of lower prescription rates. 
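As a back-of-envelope illustration of the "simple arithmetic" used to express these changes in relative terms, the sketch below back-calculates approximate 2017 baselines from the reported absolute reductions; the baseline values are assumptions chosen to be consistent with the reported percentages, not figures taken from the paper's tables.

```python
# Illustrative check only: assumed 2017 baselines paired with the reported reductions.
reported = {
    # group: (assumed 2017 DDD per patient, reported reduction by 2020 in DDD)
    "all PAMs":        (16.6, 4.5),
    "opioids":         (4.1, 0.7),
    "benzodiazepines": (3.0, 0.8),
    "z-hypnotics":     (9.7, 2.9),
}
for group, (baseline_2017, reduction) in reported.items():
    print(f"{group:15s} ~{100 * reduction / baseline_2017:.0f}% reduction vs 2017")
# Prints roughly 27%, 17%, 27% and 30%, matching the reductions reported above.
```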
While the largest decline appeared the first year after the intervention, the change sustained for the three observed years. A minor (and statistically non-significant) increase in prescriptions observed in the third year compared to the second year may represent a random fluctuation or an effect of changed prescription routines due to the Covid-19 pandemic (see below). Generally, reduced prescriptions were found across age groups and sexes. However, the prescription of z-hypnotics was substantially greater in the older age groups, where the decline also seemed stronger. For benzodiazepines, we also noted a substantially higher use among the oldest women with a pronounced decline after the intervention. The use of registry data assured completeness of prescriptions from the participating GPs, and multilevel analyses allowed us to account for differences between the physicians and their patient populations. We still note some limitations. The study was small and only include prescriptions from the regular GPs. We cannot exclude the possibility that patients sought prescriptions from out-of-hours care or specialist health services, which would cause an overestimation of the effect. However, this is unlikely in the Norwegian healthcare setting (see Supplementary Information ). Another limitation is that we do not have the age and sex distribution among all patients served by the GPs in the study, so the sex- and age-specific analyses should be interpreted with caution. Although we were able to consider which of the GPs were present at any time, extrapolation of prescriptions made when present may not be correct. However, this potential measurement error would be independent of the intervention and thus unlikely to explain the results. Of the physicians originally participating in the intervention, eight are missing for reasons presumably unrelated to the effects of the intervention. Still, there is a theoretical chance that non-participation might have led to overestimation of the effect. However, as only two out of 28 eligible GPs did not participate, such effect is expected to be small. Non-therapeutic use of PAMs among older patients is of particular concern due to age-related pharmacokinetic and pharmacodynamic changes, multimorbidity, and polypharmacy . The high prescriptions of benzodiazepines among women and z-hypnotics among both sexes that we found from age 80, is thus of concern. Our intervention decreased the prescription of both these medications, and for z-hypnotics, the decline seemed strongest among the oldest patients. Our findings correspond with previous research suggesting that mixed interventions could yield discontinuation rates of benzodiazepines and other hypnotics between 27% and 80% among older people . However, since the sustainability of these interventions was previously indetermined , and since GPs have experienced elderly patients rejecting proposed medication changes as a particular challenge , our finding that the intervention had long-term effects on prescription rates for older patients is important. GPs find the process of prescribing PAMs complex and demanding and have called for more knowledge, tools for a practical approach, and a clear overview of effective reduction strategies . This was also the starting point for our intervention. At the same time, GPs can perceive deprescribing interventions initiated by the regional authorities as a type of control or cost reduction tool interfering with their clinical autonomy . 
The fact that the initiative came from the GPs themselves, leading to a collective, multilevel approach concomitantly targeting GPs, patients, and the public is likely to have increased their motivation and dedication. In our intervention, the municipal chief physician offered the GPs support, strategies, and tools that were based on a thorough understanding of how prescribing decisions are made. The patients received information through several channels and a face-to-face consultation with their GP, which allowed for genuine user involvement . In line with recommendations , a detailed description of our intervention is included as a supplement to this paper to allow replication. During the Covid-19 pandemic, communication between the GPs and the patients, including renewal of PAM prescriptions, was quite abruptly digitalized to a high degree. This meant that the intervention component of providing face-to-face consultations was not carried out in 2020, potentially affecting the GPs’ prescribing practice. Prescribing without a patient consultation has been found to be one of the main predictors of high-volume prescribing of PAMs . At the individual GP level, competing priorities and time pressure have been reported as barriers to putting medication changes on the agenda . Although this multilevel community intervention reduced the GPs’ prescriptions of PAMs, we believe that the intervention can be even more successful in the long run with periodic public reminders and training for GPs, as this could maintain awareness. In a scale-up of the intervention, we recommend that provider training in non-technical therapeutic skills be implemented, as it may enhance the efficacy of prescriber education programs .
Liver Cirrhosis among Young Adults Admitted to the Department of Gastroenterology in a Tertiary Care Centre: A Descriptive Cross-sectional Study
9f8da46d-cbd9-44fb-9e87-2ffbfa33ef6f
10088990
Internal Medicine[mh]
Liver cirrhosis refers to a disorder that alters the overall typical architecture of the liver. Globally, the majority of instances are ascribed to non-alcoholic fatty liver disease, viral hepatitis, or excessive alcohol usage. Depending on the aetiology and whether portal hypertension or hepatocellular damage predominates, the clinical appearance of cirrhosis differs. However, even in the absence of any clear clinical symptoms, substantial liver damage may be present. Various causes of cirrhosis in adults have been studied, but the aetiology of cirrhosis in young adults aged 40 years or younger has not been well studied, and the incidence of cryptogenic cirrhosis remains unknown. Early interventions and preventions are required to stabilize disease progression and to avoid or delay clinical decompensation and the need for liver transplantation. The aim of this study was to find out the prevalence of liver cirrhosis among young adults admitted to the Department of Gastroenterology in a tertiary care centre. A descriptive cross-sectional study was done among young adults admitted to the Department of Gastroenterology in Tribhuvan University Teaching Hospital between 25 November 2021 and 30 November 2022 after receiving ethical approval from the Institutional Review Committee of the same institute [Reference number: 227(6-11)E2-078/079]. All patients admitted to the Gastroenterology ward of the hospital aged >18 and ≤40 years were included in the study. Patients who did not give informed consent were excluded from the study. Informed consent was signed and confidentiality of the information was ensured. Convenience sampling was done. The sample size was calculated by using the following formula: n = (Z² × p × q) / e² = (1.96² × 0.50 × 0.50) / 0.04² = 601 Where, n = minimum required sample size Z = 1.96 at a 95% Confidence Interval (CI) p = prevalence taken as 50% for maximum sample size calculation q = 1-p e = margin of error, 4% The calculated minimum required sample size was 601. However, 989 patients were included in the study. Each patient was subjected to a detailed clinical history regarding the duration of illness and symptoms. A predetermined proforma was used as the tool for data collection. Detailed clinical and laboratory data were collected for all patients, including demographics and history of alcohol consumption, medications, substance abuse, and other systemic diseases. Biochemical studies (alkaline phosphatase (ALP), aspartate aminotransferase (AST), alanine transaminase (ALT), serum total globulin/gamma globulins), serological tests (antinuclear antibody (ANA), anti-smooth muscle antibody (ASMA), antimitochondrial antibody (AMA), immunoglobulin A (IgA), tissue transglutaminase (tTG), liver-kidney microsomal antibody (LKM-1), viral hepatitis markers, HLA-DR3 or DR4) and abdominal ultrasound for liver and spleen size, parenchymal echogenicity, portal vein diameter, and ascites were done. Serum ceruloplasmin, urinary copper levels and slit lamp examination for the Kayser-Fleischer ring were done when indicated. Each patient had undergone upper gastrointestinal (UGI) endoscopy and diagnostic findings were documented. Child-Turcotte-Pugh (CTP) score and Model for End Stage Liver Disease (MELD) scores were calculated for all the patients. Information was gathered using a standardized proforma. Data collected were entered and analyzed using IBM SPSS Statistics version 20.0. Point estimate and 95% CI were calculated.
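For readers who want to reproduce the sample-size figure, the following is a minimal sketch of the standard formula for estimating a proportion, n = Z²pq/e²; it is not code used by the authors, but with a 4% margin of error it returns the stated figure of 601.

```python
# Minimal sketch of the single-proportion sample-size formula used in this study.
import math

def min_sample_size(z: float, p: float, e: float) -> int:
    """Minimum sample size for estimating a proportion p with margin of error e."""
    q = 1 - p
    return math.ceil(z**2 * p * q / e**2)

print(min_sample_size(z=1.96, p=0.50, e=0.04))   # -> 601
```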
Among 989 patients, liver cirrhosis among young adults was seen in 200 (20.22%) (18.12-22.32, 95% CI). In a total of 200 cases, liver cirrhosis was seen in 145 (72.50%) men and 55 (27.5%) women. The participants ranged in age from 18 to 40 years, with mean age of 28.92±5.73 years. A total of 68 (34%) Brahmins made up the majority of the study group, followed by 42 (21%) Khas, 25 (12.50%) Newar, 20 (10%) Madheshi, and 6 (3%) Tharus. There were 73 (36.50%) farmers made up the research group, which also included 56 (28%) retired people, 37 (18.50%) people who worked for the government, and 34 (17%) housewives. In this study 113 (56.50%) patients were from rural areas, while 87 (43.50%) patients were from metropolitan areas. A total of 80 (40%) patients were from a moderate socioeconomic class, 70 (35%) from a lower one and just 50 (25%) from a higher one. Chronic alcohol use was the primary cause of cirrhosis in 164 (82%) patients. Other causes were non-alcoholic steatohepatitis (NASH) and chronic viral hepatitis seen in 20 (10%) cases and 12 (6%) cases respectively. The remaining cases, 4 (2%) were labelled as cryptogenic . Abdominal distension was the most frequent manifestation, occurring in 187 (93.50%) cases, followed by anorexia 140 (70%), fatigue 120 (60%), and vomiting 104 (52%). Ascites was clinically evident in 184 (92%) individuals. There were 108 (54%) individuals who had upper gastrointestinal bleeding. The other typical signs were icterus, followed by pallor, pedal edema, and hair loss over the body . There were 108 (54%) individuals who had UGI bleeding. The most frequent endoscopic finding was gastro-oesophagal varices, which were discovered in 180 (90%)patients, followed by portal gastropathy in 150 (75%) patients, peptic ulcers in 15 (7.50%) patients, gastro-duodenitis in 4 (2%) patients, Mallory Weiss tears in 20 (10%) patients and GI malignancies in 2 (1%) patients. The participants were divided into groups based on their CTP classifications. Most cases belong to CTP C 120 (60%) patients . The most common complications was ascites seen in 184 (92%) of the patients. Hepatic encephalopathy (HE) was seen in 36 (18%) cirrhotic patients, followed by spontaneous bacterial peritonitis (SBP) in 26 (13%) patients, and hepatorenal syndrome (HRS) in 22 (11%) patients. The prevalence of liver cirrhosis among young adults was 20.22% which was high when compared to a similar study done in other tertiary care centres of Nepal, but lower to the similar studies done in Nepal. , In the current investigation, alcoholic liver disease was the most prevalent aetiology of cirrhosis and was found in 164 (82%) individuals which was similar to other studies. , , The most prevalent cause of cirrhosis in Nepal is chronic alcohol use. Therefore, cirrhosis cases are increasingly being detected in young people, as was shown in the current research, which may be related to early alcohol consumption and dependency. The prevalence of alcohol use, misuse, and dependency among the younger population is increasing, which may be the cause of this condition. In this study, 184 (92%) individuals had ascites followed by pallor in 144 (72%) and pedal edema in 120 (60%), icterus was seen in 148 (74%) patients. In this research, 108 (54%) patients had upper gastrointestinal bleeding. These results were similar to other studies. 
, In the current study, the most frequent finding on UGI endoscopy was gastro-oesophageal varices, which were observed in 180 (90%) patients, followed by portal gastropathy in 150 (75%) patients which was similar to other studies. The mean age of the patients from our study was similar to these studies. With regards to the gender-wise distribution of the patients, our study showed that ALD was more predominant in males which is similar to other studies. , , The increased prevalence of ethanol use among males compared to women is most likely the cause of the male preponderance over female in all investigations. Additionally, there may be disparities in how the two sexes seek medical attention. In the present research, a total of 113 (56.5%) patients came from rural regions which was less compare to other studies. The CTP score of our study was similar to the studies carried out at other tertiary centre. In our investigation, ascites, which was discovered in 184 (92%) patients, was determined to be the most typical complication of cirrhosis at presentation, followed by UGI bleed in 108 (54%) patients. Rebleeding was seen in 33 (16.5%) patients. Hepatic encephalopathy 36 (18%), SBP 26 (13%), and HRS 22 (11%) followed. According to one of the study, the most frequent complications were ascites in 78.6% of patients, variceal bleeding in 43.4%, hepatic encephalopathy in 21.6%, SBP in 4.2%, HRS in 2.7%, HCC in 1.3%, hypersplenism in 0.4%, and sepsis in 12.8% of patients. These findings were consistent with those from our study. In some investigations, SBP incidences between 10% and 30% higher than ours have been reported. - According to a recent study, hospitalized patients had a prevalence of SBP of 24.7% and 34.9%, respectively. , The results of the study cannot be generalised as the population under study is limited to patients admitted to one tertiary care centre. Also, because of the descriptive nature of this study, an association between exposure and outcome cannot be made in this study design and risk factors cannot be made out. A larger study conducted at different centres should be conducted to better understand the exact burden of liver cirrhosis in young adults. The prevalence of liver cirrhosis in young adults in our study was found to be lower than in studies done in similar settings. People need to be made aware of the negative consequences of frequent alcohol use. Early diagnosis of viral hepatitis and alcoholic liver illnesses provides survival advantages, and their treatment may lessen the burden of cirrhosis. In instances of cirrhosis that have already progressed, necessary management and therapy, the avoidance of complications, and routine monitoring and follow-ups may all lower morbidity and death.
Contraception Use among Women Visiting Outpatient Department of Gynaecology in a Tertiary Care Centre: A Descriptive Cross-sectional Study
a21e851b-93bb-4fe3-aca7-93eb55c43f0f
10088992
Gynaecology[mh]
Family planning (FP) services can bring a wide range of benefits to women, their families and society as a whole. FP can help in reducing maternal mortality by decreasing the number of unwanted pregnancies and risky abortions, and the proportion of births at high risk. It has been estimated that fulfilling women's unmet need for modern contraceptives would save about 140,000 to 150,000 maternal lives annually. , Failure to plan a pregnancy can adversely affect the health of the family as a whole. Even when women know some methods of contraceptives, they do not know the availability or how to use them properly. Knowledge, attitude and practices towards family planning are the basic fundamentals of achieving the goals and targets of family planning. The aim of this study was to find out the prevalence of contraception use among women visiting the outpatient department of gynaecology of a tertiary care centre. A descriptive cross-sectional study was carried out among women attending the outpatient Department of Gynaecology from 10 April 2021 to 10 April 2022 at KIST Medical College and Teaching Hospital. Ethical approval was taken from the Institutional Review Committee of the same institute (Reference number: 2079/80-03). Women aged 15 to 49 years coming to the outpatient department (OPD) for any gynaecological problem were included in the study irrespective of their complaints and diagnosis. Postmenopausal, unmarried or pregnant women were excluded from the study. Convenience sampling was done. The sample size was calculated by using the following formula: n = Z 2 × p × q e 2 = 1.96 2 × 0.50 × 0.50 0.07 2 = 196 Where, n = minimum required sample size Z = 1.96 at 95% Confidence Interval (CI) p = prevalence taken as 50% for maximum sample size calculation q = 1-p e = margin of error, 7% The minimum required sample size was 196. However, 208 participants were included in the study. A semi-structured questionnaire was developed. The questionnaire had two parts; the first part consisted of socio-demographic information and obstetric history. The second part consisted of various questions that assessed the uses of contraception. Information on the use of any form of contraceptive methods that women were using currently for any duration was taken. Pretesting was done for the questionnaire and modifications were done accordingly. The questionnaires were directly administered by investigators and research volunteers in a dedicated place in the OPD maintaining adequate privacy and confidentiality. Data was collected and analysed using Microsoft Excel 2013. Point estimate and 95% CI were calculated. Out of 208 patients, contraceptive use was found in 146 (70.19%) (63.97-76.41, 95% CI) women. The mean year of use of contraception was 4.98±5.42 years. Short-acting reversible contraception (SARC) was used by 97 (66.44%). The most commonly used contraceptive device was Depo Provera 43 (29.45%) followed by male barrier methods 29 (19.86%) . Hospitals, health posts and health professionals were the most common source of information about contraception by 59 (40.41%) . The mean age of the participants was 32.29±7.26 years (Range: 20 to 49). The majority 41 (29.10%) of the women were in the 26-30 years age group. Among them, 20 (13.70%) did not have any formal education. A total of 69 (47.26%) were involved in agriculture and agriculture-related business . A total of 132 (90.41%) were multiparous and 51 (34.93%) participants had at least one abortion. 
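The primary estimate above is reported as 146/208 = 70.19% (63.97-76.41, 95% CI). The interval method is not stated in the text, but a standard normal-approximation (Wald) interval, sketched below, reproduces the reported figures to within rounding; the function name is an assumption for illustration.

```python
# Minimal sketch of a normal-approximation (Wald) confidence interval for a proportion.
import math

def wald_ci(successes, n, z=1.96):
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return 100 * (p - half_width), 100 * (p + half_width)

low, high = wald_ci(146, 208)
print(f"70.19% (95% CI {low:.2f}-{high:.2f})")   # approximately 63.98-76.41
```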
Among all the abortions, 37 (25.34%) underwent a medical abortion, 18 (23.08%) underwent surgical abortion and 23 (29.49%) underwent a spontaneous abortion. The majority of them were done for 46 (83.64%) unwanted pregnancies followed by 5 (9.09%) missed abortions, 3 (5.45%) for molar pregnancy, and 1 (1.82%) for severe maternal depression . A total of 9 (4.37%) had never used any form of contraceptive devices and 53 (25.48%) had discontinued using contraceptive devices. The most common reason for discontinuation of contraceptive devices was to conceive 25 (47.17%) . In our study, 70.19% women were currently using some form of contraceptive devices while in other studies it was 44.39%, 66.3%, 73.75%, 85.5%, and 92%. , - This implies that in our study comparatively a lesser number of participants are using contraceptive devices. In our study, the most commonly used contraceptive device was injectable, similarly, in another study too the most common contraceptive devices were injectable. , while in other studies, the most common contraceptive devices were OCPs and condoms. , , We can conclude from this information that the choice of contraceptive devices can vary among different study populations and study sites. Depo provera was most commonly used in our study, by about 30% while in other studies it was used among 2.6%, 26.7%, 29.3%, 37.4%, and 54.7%. Condoms were the second most used method, used by about 20% while in other studies it ranged from 19.15% to 37.1%. Similarly, OCPs were used by 17% of participants while their use was about 20% in other studies. , Regarding the permanent method of contraception, tubal ligation was done by about 9% of patients, similar to our study, it was 11.11 % and 12.7% in other studies. Vasectomy was practised by about 5.5% of participants in our study while it was found to be practised by 10.4% in another study done in Nepal. Our finding of female sterilisation was found to be comparable in other studies but participants using vasectomy were found to be lesser. The source of information was by radio (79%) and friends (60.7%) while health institutions were 11% and health workers were 60%. But in our study radio/ television was the source of information for 25.34% and friends/family in 11.64% while health institutes and health professionals were sources of information for 40.42%. In a study done in Northern India, the most common source of information on contraception was media in 55.7% and 45%. In another study done in Pakistan, the main source of information were friends and families followed by health workers. In our study apart from radio/television, social media was the source of information for only 9.59%. In our study, about 14% of participants had no formal education while in other studies 23% to 42% had no formal education. , , Similar to our study, the majority of participants had completed primary or secondary-level education. , , In contrast to our study where 27% of participants had a bachelor's or above level of education which was present in only 2.4% and 4%. In our study, most of the participants, about 47%, were engaged in agriculture or agriculture-based businesses. While in other studies 30% to 70% of the participants were engaged in agriculture About 16% of women were housewives in our study which is much less when compared to 78%. In our study about 18% was involved in services while in other studies it was found to be 12.7% and 28%. 
Similarly, 13% of participants were involved in business in our study which was about 16% in other studies. , Globally, the majority of abortions are still the direct consequence of the non-use of any contraception. More than 90% of abortions are performed on unintended pregnancies. A total of 70% of unintended pregnancies are due to the non-use of contraception. In our study among the women with prior medical or surgical abortions, 83.64% were for unwanted pregnancies, which is quite high. This could be because they may have used abortion in substitution for contraception and it also reflects that the level of knowledge, attitude and practice of contraception is still suboptimal. The major limitation of our study is that it is a single-centre study with a relatively small sample size. Therefore, the result of this study may not be generalisable in community settings and other institutional settings. The prevalence of contraception use is lower than in other studies done in similar settings. Contraception promotion programs have to be encouraged in order to raise awareness about different contraceptive devices and various aspects of them. This will promote good attitudes and efficient use of contraception. And as a result, it will decrease unwanted pregnancies hence the number of abortions and abortion-related complications.
Pyonephrosis among Patients with Pyelonephritis Admitted in Department of Nephrology and Urology of a Tertiary Care Centre: A Descriptive Cross-sectional Study
23999021-b69f-4495-9c6d-5661add6a35d
10088997
Internal Medicine[mh]
Pyonephrosis is a serious infective condition of kidneys characterised by the presence of pus in the renal collecting system. It is associated with obstruction in the renal collecting system and suppurative destruction of the renal parenchyma leading to total or near total loss of function of the affected kidney. Therefore, early diagnosis and prompt management among patients of pyelonephritis is the key to good outcomes. If the obstruction is relieved early by urinary diversion techniques such as Double J (DJ) stenting or percutaneous nephrostomy (PCN) insertion, and the patient is aggressively managed with antibiotics, there is a possibility of avoiding permanent loss of renal function and subsequent nephrectomy. The existing data is generally in form of case reports or small series with very few existing studies. This study aimed to determine the prevalence of pyonephrosis among patients with pyelonephritis admitted to the Department of Nephrology and Urology of a tertiary care centre. This descriptive cross-sectional study was conducted from 1 February 2016 to 31 July 2021 in the departments of Nephrology and Urology in Indian Naval Hospital Ship (INHS) Asvini, Mumbai, India. Data was collected after ethical approval from the Institution Ethics Committee of the same institute (Reference number: IEC/56/21). All adult patients of pyelonephritis aged greater than 18 years visiting the hospital during the study period were enrolled in the study. Patients with infections of the transplanted kidney, pregnant females, or those who did not drain pus from the collecting system after decompression were excluded from the study. Convenience sampling was used. The sample size was calculated by using the following formula: n = Z 2 × p × q e 2 = 1.96 2 × 0.50 × 0.50 0.05 2 = 385 Where, n = minimum required sample size Z = 1.96 at a 95% Confidence Interval (CI) p = prevalence taken as 50% for maximum sample size calculation q = 1-p e = margin of error, 5% The minimum sample size calculated was 385. However, we have included 550 patients in the study. Pyelonephritis was defined based on Centre of Disease Control (CDC) criteria, as the presence of clinical features like fever, dysuria, urgency, frequency, costovertebral tenderness with pyuria, or organisms cultured from blood/urine, or evidence of infection on ultrasonography (USG) or Computed Tomography (CT) scan. Pyonephrosis was defined as evidence of pyelonephritis with radiological presence (by USG or CT scan) of obstruction as hydronephrosis or hydroureteronephrosis (HDUN); or purulent exudate, pus, echogenic debris, fluid/fluid levels in the renal pelvis or urinary collecting system. After a diagnosis of pyonephrosis, all patients were taken up for urinary diversion with DJ stenting or PCN. The available clinical, demographic and laboratory parameters were recorded as per the proforma. The standard cut-offs for the laboratory of this centre were considered for any abnormalities. Anaemia was defined as haemoglobin less than 12 g/dL, leukocytosis as leucocyte count ≥11500/cubic mm, azotemia as serum creatinine ≥1.4 mg/dL and pyuria as urine leucocytes ≥10/HPF. Data were analysed using IBM SPSS Statistics 21.0. Point estimate and 95% CI were calculated. Among 550 patients, pyonephrosis was found in 60 (10.90%) (8.30-13.50, 95% CI). The mean age was 54.62±12.14 years, and the majority of them were males 41 (68.33%). The most common clinical symptom was flank pain with or without fever, seen in 46 (76.66%) patients. 
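The case descriptions above rest on the laboratory cut-offs defined in the methods (anaemia: haemoglobin <12 g/dL; leukocytosis: leucocyte count ≥11,500/cubic mm; azotemia: creatinine ≥1.4 mg/dL; pyuria: urine leucocytes ≥10/HPF). A minimal sketch of how such flags could be applied to a patient record is shown below; the dictionary keys and the example values are assumptions for illustration only.

```python
# Minimal sketch applying the stated laboratory cut-offs to one patient's results.
def flag_lab_abnormalities(labs: dict) -> dict:
    return {
        "anaemia":      labs["haemoglobin_g_dl"] < 12,
        "leukocytosis": labs["wbc_per_cumm"] >= 11500,
        "azotemia":     labs["creatinine_mg_dl"] >= 1.4,
        "pyuria":       labs["urine_wbc_per_hpf"] >= 10,
    }

example = {"haemoglobin_g_dl": 10.8, "wbc_per_cumm": 14200,
           "creatinine_mg_dl": 1.9, "urine_wbc_per_hpf": 25}
print(flag_lab_abnormalities(example))
```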
Fever was present in 37 (61.66%) patients The most common haematological abnormalities were leucocytosis seen in 49 (81.66%) and anaemia seen in 46 (76.66%) cases. The urine was cloudy in 48 (80%) patients and frank pus was drained in 12 (20%) patients . E. coli was the most common offending organism seen in 20 (33.33%) followed by Pseudomonas in 9 (15%), while no growth was seen in 10 (16.66%) patients . X-ray KUB showed radiopaque calculi in the region of the ureter in 19 (31.66%), of which 12 (63.15%) were in the proximal and 7 (36.84%) were in the distal ureteric region. A total of 16 (26.66%) patients had radiopaque calculi in the renal fossa, and 8 (50%) were staghorn calculi (occupying the entire pelvis or at least two calyces). Ultrasonography showed classical echogenic debris with floaters and internal echoes in the dilated pelvicalyceal system in 44 (73.33%) patients. A total of 29 (48.33%) patients underwent non-contrast CT scans and 31 (51.66%) underwent contrast-enhanced CT (ce-CT) scans with the urography phase. CT scan revealed features of ureteric obstruction in 19 (31.66%) patients. DJ stenting was successfully done in 44 (73.33%) patients. PCN was done in the remaining 16 (26.66%) patients. Nephrectomy was done in 6 (10%) cases. A total of 1 (1.66%) patient who presented with features of sepsis and septic shock, was treated aggressively with antibiotics, supportive measures, and PCN to drain about 650 ml of pus; but he succumbed to his illness. The present study showed that pyonephrosis was found in 10.90% of pyelonephritis. E. coli was the most common offending organism in this study. The classic ultrasonography features included echogenic debris with floaters and internal echoes in the dilated pelvicalyceal system in 44 (73.33%) patients. A previous study showed 17 (10%) patients had pyonephrosis amongst pyelonephritis, most probably secondary to a stone. These findings are similar to the findings of our study. Pyonephrosis represents a spectrum of infected diseases of the kidney ranging from infected hydronephrosis to the more diffuse xanthogranulomatous pyelonephritis. It is characterised by a collection of purulent material in the pelvicalyceal system, due to any form of distal obstruction. The most common aetiology is a stone in the ureter or kidney. A previous study extensively studied the role of USG in pyonephrosis kidneys showing a spectrum of USG findings ranging from echogenic debris to solid-looking material in the pelvicalyceal system seen in 61% of cases. In our study, USG could diagnose pyonephrosis in 73.3% of patients. Another study found radiological features of pyonephrosis in 12% of pyelonephritis patients. CT scan is the radiological modality of choice and the presence of gas or fluid/ fluid levels in the pelvicalyceal system is strongly suggestive of an infective aetiology in CT, with other findings being the thickening of the renal pelvis with perinephric fat stranding. , Gram-negative organisms especially E. coli are the most commonly isolated organism in patients with pyonephrosis with the incidence of E Coli being 28.5 % (28/70) in a study. Our study showed a similar cultural profile. The initial management in pyonephrosis is urgent decompression of the urinary system, by either PCN or ureteral stenting with DJS, with neither of them showing superiority in terms of the effectiveness of drainage. , As the data were collected retrospectively, there might be missing data. 
There is also a likelihood of measurement bias amongst clinicians and radiologists in diagnosis cases of pyonephrosis. Since the convenience sampling method was used so there might be selection bias and could not be generalized in a larger population. The prevalence of pyonephrosis was similar to other studies done in similar settings. The patients with pyonephrosis have flank pain and fever as the common clinical features and E. coli is the commonest offending organism. Early detection and management of pyonephrosis is a most in patients with pyelonephritis.
B-Lynch Suture Management among Patients with Postpartum Hemorrhage in a Tertiary Care Centre: A Descriptive Cross-sectional Study
0ac16170-f8f7-4a0e-b81a-91ff4d017148
10089001
Suturing[mh]
Post-partum haemorrhage (PPH) is a leading cause of global maternal mortality and morbidity, accounting for 25-30% of all maternal deaths, and 75-90% of these casualties result from uterine atony. It may lead to a cesarean hysterectomy thus impairing future fertility. Uterine atony accounts for more than 80% of cases of primary PPH. PPH may occur after vaginal delivery 4% or cesarean births 6%. , B-Lynch provides compression to both sides of the uterine body without disturbing the anatomy. It can stop postpartum haemorrhage without the need for pelvic surgery and potentially preserve fertility. The success rate of B-Lynch in avoiding hysterectomy is 86.4% and has been widely recommended for controlling PPH. Thus, the practice of this surgical management in the country is following the rising curve but there is a dearth of published literature regarding this topic. The aim of this study was to find out the prevalence of B-Lynch sutures management among patients with post-partum haemorrhage in a tertiary care centre. This descriptive cross-sectional study was conducted in the Department of Obstetrics and Gynaecology of Tribhuvan University Teaching Hospital from 1 April 2017 to 1 April 2021. The study was conducted after taking ethical approval from the Institutional Review Committee of the Institute of Medicine [Reference number: 497(6-11 )C-2077/078]. All the patients who were delivered in the hospital and who had atonic PPH during the study period were included in this study. Patients with traumatic PPH, congenital malformations, complete placenta previa/accreta, bleeding disorders, disseminated intravascular coagulation (DIC), and retained bits of placenta were excluded from the study. Convenience sampling was used. The sample size was calculated using the following formula: n = Z 2 × p × q e 2 = 1.64 2 × 0.5 × 0.5 0.10 2 = 68 Where, n = minimum required sample size Z = 1.645 at 90% Confidence Interval (CI) p = prevalence taken as 50% for maximum sample size calculation q = 1-p e = margin of error, 10% The calculated sample size was 68. However, 72 samples were taken in this study. The active management of the third stage of labour was done in all the cases (controlled cord traction, uterine massage, and oxytocin). Despite using uterotonics in atonic PPH if the patients had intractable haemorrhage then a B-Lynch brace suture was applied. In this study, the patients with the primary PPH who didn't respond to uterotonics and managed with B-Lynch suture application due to uncontrollable haemorrhage were included. Data were collected from the labour room confinement book and cesarean section record book. The proforma for data collection included parity, mode of delivery, an indication of cesarean delivery, blood transfusion, additional surgical procedures, and need for Intensive Care Unit (ICU). After managing atonic PPH with medical drugs B-Lynch suture was applied with Polydioxanone Vicryl 01 suture. The effectivity was simply judged by the stoppage of bleeding after B-Lynch suture application. Any complications during the first five days were recorded from the patient file which included further treatment if bleeding re-occurs, postoperative fever, ICU/Critical Care Unit (CCU) / Ventilator support, DIC, hysterectomy, and maternal death. The collected data were entered and analyzed using IBM SPSS Statistics version 22.0. Point estimate and 90% CI were calculated. Among 72 patients with post-partum haemorrhage, 19 (26.39%) (17.85-34.93, 90% CI) underwent B-Lynch suture management. 
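Unlike the other estimates in this collection, this prevalence is reported with a 90% confidence interval, so the sample-size formula above uses Z = 1.645 rather than 1.96. The small sketch below (assuming SciPy is available) shows where these critical values come from.

```python
# Critical value for a two-sided (1 - alpha) confidence interval: Z = norm.ppf(1 - alpha/2).
from scipy.stats import norm

for conf in (0.90, 0.95):
    alpha = 1 - conf
    print(f"{int(conf * 100)}% CI -> Z = {norm.ppf(1 - alpha / 2):.3f}")
# 90% CI -> Z = 1.645
# 95% CI -> Z = 1.960
```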
Among the patients with B-Lynch application, in 1 (5.26%) women during the cesarean section for short spacing, the uterus became flabby and did not respond to uterotonics and uterine massage. B-Lynch suture was applied following which bleeding was reduced. After 6 hours again patient had an intractable haemorrhage, and ultimately cesarean hysterectomy was performed to save her life ( ). There was 1 (5.26%) maternal mortality in heart disease patient with pulmonary stenosis with severe mitral regurgitation 11 hours later due to cardiac arrest. However, the bleeding stopped after the B-Lynch suture application. A total of 3 (15.79%) patients developed wound infection and 15 (78.95%) were discharged without complications. None of the patients reported to the hospital for any complications. The mean age of patients was 28.5 (Range: 19-39) years. The mean birth weight of the baby was 3.42 kg with 12 (60%) babies having birth weight >3.0 kg. A total of 11 (57.98%) were primipara and most of the patients 8 (42.10%) were of the 26-30 years age group ( ). A total of 14 (73.68%) women had an emergency cesarean delivery. Fetal distress seen in 6 (31.58%) was the commonest indication of the cesarean section followed by previous cesarean section in 4 (21.05%) ( ). The average blood loss was 1784 ml and 13 (68.42%) patients had blood loss between 1000-2000 ml, whereas 5 (26.32%) cases had blood loss of more than 2000 ml. Blood transfusion was not required in 4 (21.05%) patients, whereas 3 (15.78%) required more than or equal to three units of blood transfusion ( ). Atonic PPH has always been a challenging task for obstetricians to manage. Recently emerged B-Lynch suture has been quite valuable in treating atonic PPH refractory to uterotonics, which not only preserves future fertility but also is life-saving In our present study the prevalence of B-Lynch suture was 26.4% and 94.7% successful in controlling atonic PPH refractory to uterotonics. PPH often leads to litigation issues if maternal death happens. Obstetricians always have to work under this fear of deadly complications which if uncontrolled can lead to worse consequences. The emergence of B-Lynch suture recently has been of great help in controlling intractable haemorrhage. It is simple and effective, leads to satisfactory hemostasis after application, and if it fails we still have other radical procedures. This technique is simple, requires less time, and can be used in emergencies to preserve fertility and life. WHO guidelines state that after the failure of conservative management, compression sutures should be attempted before vessel ligations. There are some reports where B-Lynch was used only for control of postpartum haemorrhage, without any vessel ligation and the pregnancy outcomes in these cases were favourable. Uterine compressive sutures are a well-established measure for control of haemorrhage following atonic postpartum haemorrhage when medical and nonmedical interventions fail. The absorbable suture can be left in situ, and would typically not lead to problems with future pregnancies. In the present study, the mean age of patients was quite similar to the study done in India (26.8 years), but contrary to the study conducted in Singapore (35 years). This may be due to early marriages and childbearing in our developing country. This is also similar to a study from India, in which the mean age of patients was 26.6 years. 
This age difference may be due to early marriages and childbearing in our society due to cultural customs as per religion, socioeconomic status of the population, and country. The mean gestational age was 37.8 weeks (32-41). The results are similar to the following studies. - , Atonic PPH was most common in primipara, whereas in a study conducted in Mumbai, India, atonicity was equal in both primipara and multiparous though the difference is not much in the present study too. This is in contrast to findings in a study from India where the majority of patients were multigravida. This can be attributed to different causative factors in different populations and indications of cesarean section in such women. The average birth weight in the present study is similar to a study conducted in Scotland in which the birth weight was 3.5 kg. In the present study emergency, cesarean section was the commonest mode of delivery which is quite similar to a study done in India which showed that 76% of patients had emergency LSCS and only 24% had elective LSCS. This indicates that PPH most commonly occurs in emergency LSCS. Similarly in a study from India, 76% had emergency CS and 23.52% had an elective cesarean section. Fetal distress followed by previous cesarean section was the commonest indication of cesarean section in our study, whereas a study from India showed prolonged labour (33%) followed by antepartum haemorrhage (30%) and prelabour rupture of membrane (20%) were the causative factors of atonic PPH. In another study from India Preeclampsia was the most common cause, whereas in our study fetal distress was the commonest cause of cesarean delivery resulting in atonic PPH later. The average blood loss in the present study was 1784 ml. The maximum number of patients had blood loss in the range of 1000-2000 ml. It indicates timely application and thus reduced blood transfusion, with only a few patients exceeding blood loss ≥2000 ml. The mean blood loss was higher in comparison to studies (1363 ml, 1480 ml), , this difference may be due to difficulty in accurate assessment of blood loss. Assessment of blood loss may not be accurate by measuring the mops soaked as there is blood loss in the patient sheet and also some amount of blood is mixed with amniotic fluid during delivery. Extensive blood loss of >2000 ml was seen in one patient. In our patients, B Lynch suture was timely applied so probably due to that the amount of blood loss and need for blood transfusion is less in our present study. There was a high success rate (94.7%) of B-Lynch suture in controlling atonic PPH, only one case needed a cesarean hysterectomy as bleeding persisted despite the application of B Lynch suture. There was one maternal mortality though atonic PPH was managed with B-Lynch suture patient but patient died in ICU due to heart disease. A study from Pakistan showed a success rate of 83% of B-Lynch suture in controlling PPH whereas another study showed a success rate of 91% in control of PPH. , Although various studies showed B lynch was 100% effective in controlling atonic PPH, - few others in the systematic review showed a 91.7% success rate in controlling PPH, - thus most of them showing success rate between 82-95%. , The difference in success rate may be due to different reasons, time of application, technique, patient selection criteria and disseminated intravascular coagulopathy features in patients. The B lynch brace suture has the advantage of being applied easily and rapidly. 
It should be attempted as early as possible to maximise its success, and prophylactic application should be considered in high-risk patients. Because of its high success rate, the B-Lynch suture can achieve remarkable results in the treatment of PPH and can stop bleeding quickly when applied in a timely manner. Postgraduate students, trainees, and registrars in obstetrics and gynaecology should be taught the procedure so that it can be used effectively in emergencies. This is a single-centre study with a small sample size. However, it highlights the prevalence of B-Lynch suture use and its success in controlling atonic PPH. Although various data regarding complications were collected, they may not reflect the true complication rate of the procedure. Thus, larger population-based studies with long-term follow-up are recommended. The prevalence of B-Lynch suture use was similar to that in other studies done in similar settings. The B-Lynch suture is an easy and effective method of controlling primary atonic PPH when medical management fails to control the haemorrhage and should always be considered before attempting a hysterectomy.
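As a quick arithmetic check of the proportions reported in the Results above, the following minimal Python sketch reproduces the key figures under the assumption, implied by 1 case = 5.26%, that 19 women underwent B-Lynch suturing; the counts are taken from the text and the variable names are illustrative only.

```python
# Reproduce the reported B-Lynch proportions, assuming a denominator of 19
# (implied by 1 case = 5.26%); counts are those quoted in the Results.
n_blynch = 19

counts = {
    "hysterectomy after B-Lynch": 1,         # reported as 5.26%
    "maternal mortality": 1,                 # reported as 5.26%
    "wound infection": 3,                    # reported as 15.79%
    "discharged without complications": 15,  # reported as 78.95%
    "bleeding controlled (success)": 18,     # reported success rate 94.7%
}

for label, k in counts.items():
    print(f"{label}: {k}/{n_blynch} = {100 * k / n_blynch:.2f}%")
```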
Subclinical Hypothyroidism among Chronic Kidney Disease Patients Admitted to Nephrology Department of a Tertiary Care Centre: A Descriptive Cross-sectional Study
670ff25a-3970-400f-a97c-abdb1fc8201a
10089008
Internal Medicine[mh]
Chronic kidney disease (CKD) is a condition that is quickly spreading throughout the world's population. According to estimates, the disease affects over 9% of the world's population, especially in developing nations. Healthy renal function is required for normal thyroid hormone metabolism and elimination, and the kidney is a crucial end-organ for thyroid hormonal activity. CKD has a variety of effects on thyroid function. Reports suggest that CKD is strongly associated with thyroid dysfunction, most commonly primary and subclinical hypothyroidism. The main objective of the study is to find out the prevalence of subclinical hypothyroidism among chronic kidney disease patients admitted to the Nephrology Department of a tertiary care centre. A descriptive cross-sectional study was conducted at Nobel Medical College Teaching Hospital (NMCTH), Biratnagar, Morang, Nepal among patients diagnosed with CKD from 15 May 2022 to 10 October 2022. The study was conducted after receiving ethical approval from the Institutional Review Committee (IRC) of the NMCTH (Reference number: 621/2022). All CKD patients who were admitted to the nephrology ward during the study period were included in this study. Patients with other comorbidities and those who did not give consent were excluded. A convenience sampling method was used. The sample size was calculated using the formula: n = Z² × p × q / e² = (1.96² × 0.272 × 0.728) / 0.07² = 156, where n = minimum required sample size; Z = 1.96 at the 95% confidence interval (CI); p = prevalence of subclinical hypothyroidism taken from a previous study, 27.2%; q = 1 - p; and e = margin of error, 7%. The calculated final sample size was 156. A predesigned proforma was used to gather personal information such as age, sex, height, and weight. Body mass index (BMI) was calculated as BMI = weight in kg / (height in m)². The levels of triiodothyronine (T3), thyroxine (T4) and thyroid stimulating hormone (TSH) were estimated in the blood samples of these patients by chemiluminescence immunoassay (CLIA) in a fully automatic analyzer (Maglumi 800) at the clinical laboratory services, NMCTH. The level of T3/T4 in the blood sample was estimated by a competitive chemiluminescence immunoassay. The sample, ABEI-labelled anti-T3/T4 monoclonal antibody, buffer and a solution of magnetic microbeads coated with T3/T4 antigens were incubated at 37 °C. T3/T4 present in the sample competed with T3/T4 antigen immobilised on the magnetic microbeads for a limited number of binding sites on the ABEI-labelled anti-T3/T4 antibody, forming immune complexes. After washing, the starter was added to initiate a chemiluminescent reaction. The light was measured by a photomultiplier within 3 seconds as relative light units, which was inversely proportional to the concentration of T3/T4 present in the sample. The TSH level in the blood sample was measured by a sandwich chemiluminescence immunoassay. The sample, ABEI-labelled anti-TSH monoclonal antibody, and magnetic microbeads coated with another anti-TSH monoclonal antibody were mixed and incubated at 37 °C, forming sandwich immune complexes. After washing, the starter was added to initiate a chemiluminescent reaction. The light was measured by a photomultiplier within 3 seconds as relative light units, which was proportional to the concentration of TSH present in the sample. The collected data were entered into Microsoft Excel version 2010 and analysed. Point estimate and 95% CI were calculated. 
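The sample-size formula, the BMI calculation and the "point estimate and 95% CI" mentioned above are standard computations. A minimal illustrative sketch in Python is given below; it is not the authors' analysis code, and the normal-approximation interval is only assumed to be the method used because it reproduces the interval reported in the Results.

```python
import math

def cochran_sample_size(p, e, z=1.96):
    """Minimum sample size for estimating a proportion: n = Z^2 * p * (1 - p) / e^2."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def wald_ci(events, n, z=1.96):
    """Normal-approximation 95% CI for a proportion: p_hat +/- z * sqrt(p_hat*(1-p_hat)/n)."""
    p_hat = events / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, p_hat - z * se, p_hat + z * se

n_required = cochran_sample_size(p=0.272, e=0.07)
print(f"required n = {n_required:.1f} -> {math.ceil(n_required)}")  # about 155.3 -> 156

print(f"example BMI = {bmi(70, 1.65):.1f} kg/m^2")  # illustrative values only

p_hat, lo, hi = wald_ci(34, 156)  # counts reported in the Results below
print(f"prevalence = {100*p_hat:.2f}% (95% CI {100*lo:.2f}-{100*hi:.2f})")
# prints roughly 21.79% (95% CI ~15.3-28.3), consistent with the reported interval
```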
Out of 156 patients with chronic kidney disease, subclinical hypothyroidism was present in 34 (21.79%) (95% CI: 15.31-28.27) patients. Among them, 14 (41.17%) were male and 20 (58.82%) were female. The mean age was 53.47±16.33 years. The mean body mass index (BMI) of the CKD patients with subclinical hypothyroidism was 25.31±5.28 kg/m² (Range: 19.50-40.90). The mean value of T3 among patients with subclinical hypothyroidism was 3.01±0.35 pg/ml. The largest number of male patients, 8 (23.52%), was in the 41-60 years age group, whereas the largest number of female patients, 8 (23.52%), was in the 61-80 years age group. Out of 156 CKD patients, subclinical hypothyroidism was observed in 21.79%. Among them, 14 (41.17%) were male and 20 (58.82%) were female, with a mean age of 53.5 years; females were affected more often than males. A higher prevalence of subclinical hypothyroidism among CKD patients was observed in a study carried out at BPKIHS, Dharan, Nepal, which showed subclinical hypothyroidism in 27.2% of CKD patients; the mean age of all patients was 44.1±16.4 years, with 53.8% male and 46.1% female. Another study, conducted on hemodialysis patients in western Nepal, revealed that 26.6% of them had subclinical or clinical hypothyroidism. Another study from North India reported subclinical hypothyroidism in 39.9% of chronic kidney disease patients. A study from Oman reported a prevalence of subclinical hypothyroidism of 62.9% among CKD patients. In the literature, it has been reported that, compared to people with euthyroidism, those with hypothyroidism had an odds ratio (95% CI) for CKD of 1.25 (1.21-1.29), and it was therefore concluded that people with CKD were more likely to have hypothyroidism. An almost similar finding was observed in a large cohort study, in which 22% of CKD patients with eGFR ≤60 had hypothyroidism. A lower prevalence of subclinical hypothyroidism among CKD patients was observed in a study carried out in Saudi Arabia, which reported that 16.9% of CKD patients suffered from subclinical hypothyroidism. A study from Italy concluded that in people with CKD who do not need chronic dialysis, subclinical primary hypothyroidism is a rather frequent condition (18%) and is independently linked to steadily declining estimated GFR in a sizable population of unselected adult outpatients. A study conducted in a Japanese population with CKD showed a prevalence of subclinical hypothyroidism of 14.9%. In the present study, we also observed the mean values of T3, T4 and TSH among CKD patients with subclinical hypothyroidism. The mean±SD values of T3, T4 and TSH in the blood of the CKD patients with subclinical hypothyroidism were 3.01±0.35 pg/ml, 1.06±0.24 ng/dl and 12.76±22.54 uIU/ml, respectively. In one study, the mean±SD value of TSH among CKD patients with subclinical hypothyroidism was reported as 9.01±4.40 uIU/ml, and the mean±SD value of T4 was 1.22±0.13 ng/dl. Similarly, another report found that the mean value of TSH in CKD patients with subclinical hypothyroidism was 5.40 uIU/ml. A study from Bangalore, India reported the mean±SD value of TSH as 7.23±4.21 among CKD patients, and the mean±SD value of TSH was reported as 7.15±5.94 among CKD stage-IV patients in a study from Gujarat, India. In the current study, thyroid dysfunction was observed in CKD patients aged more than 40 years. 
In the 41-60 and 61-80 years age groups, the proportions of patients diagnosed with subclinical hypothyroidism were 41.17% and 29.41%, respectively. In another study, CKD patients aged 36-45 years and 45-55 years accounted for 48% and 32%, respectively. The data were gathered from only one centre and at a single point in time, which is one of the study's limitations. If data on CKD patients had also been collected from other centres in Nepal and the patients had been followed up for a longer period, the findings would have been more generalisable. The prevalence of subclinical hypothyroidism among CKD patients was lower than that in other similar studies done in similar settings. Subclinical hypothyroidism was observed more often in females and in older patients with chronic kidney disease.
Occluded Coronary Artery among Non-ST Elevation Myocardial Infarction Patients in Department of Cardiology of a Tertiary Care Centre: A Descriptive Cross-sectional Study
a0717054-249b-43a0-be64-4bc466eac6ba
10089046
Internal Medicine[mh]
Non-ST elevation myocardial infarction (NSTEMI) is frequently thought to be caused by incomplete blockage of the culprit artery, whereas STEMI is frequently thought to be caused by total occlusion of the culprit artery. According to research, around a quarter of NSTEMIs are caused by complete occlusion of the culprit artery, with coronary angiography findings identical to those of STEMI. Despite this, NSTEMI with an occluded artery is frequently viewed as a less serious condition than STEMI. There are very few data on the differences in clinical features and outcomes between NSTEMI with an occluded artery (NSTEMIOA) and NSTEMI with a patent/non-occluded artery (NSTEMIPA), especially in the context of early versus late percutaneous revascularization. The objective of the study was to find out the prevalence of occluded coronary arteries among non-ST elevation myocardial infarction patients in the Department of Cardiology of a tertiary care centre. This was a descriptive cross-sectional study conducted at a tertiary hospital, Manmohan Cardiothoracic Vascular and Transplant Centre (MCVTC), with a primary percutaneous coronary intervention (PCI) facility. Patients older than 18 years with NSTEMI who underwent coronary angiography from 22 June 2020 to 21 June 2021 were included in the study. The Institutional Review Committee [Reference number: 4271 (6-11) E2 076/077] of the Institute of Medicine approved the study. The sample size was calculated using n = Z² × p × q / e² = (1.96² × 0.90 × 0.10) / 0.07² = 71, where n = minimum required sample size; Z = 1.96 at the 95% confidence interval (CI); p = prevalence of occluded coronary arteries in NSTEMI patients, 90%; q = 1 - p; and e = margin of error, 7%. However, after data cleaning, 126 patients met the inclusion criteria and were enrolled. Patients with ST-elevation myocardial infarction, left bundle branch block (LBBB), or a troponin I rise following PCI/coronary artery bypass graft surgery (CABG) were excluded from the study. Similarly, patients with NSTEMI who did not undergo coronary angiography and patients who did not give written consent were also excluded. An occluded coronary artery was defined as the presence of a lesion with 100% stenosis or thrombolysis in myocardial infarction (TIMI) flow grade 0 to 1 in one or more major coronary vessels on invasive coronary angiography. Major branch occlusion was incorporated into the major vessel territory. A non-occluded or patent coronary artery was defined as TIMI flow grade 2/3. The culprit artery of the NSTEMI was determined by the cardiologist performing the coronary angiography based on the findings of electrocardiogram (ECG) changes, angiography, and echocardiography. After ascertaining the severity of coronary artery disease, the mode of revascularization is determined and, if PCI is indicated, the procedure is done and the patient is shifted to the coronary care unit (CCU) for observation and further management. Sometimes, thrombosuction and plain old balloon angioplasty (POBA) may also be done according to requirement. In the case of complex lesions, the revascularization procedure (PCI/CABG) is decided by the heart team approach. Data were collected from patients using questionnaires, physical examination, investigation parameters (cardiac biomarkers, ECG, echocardiography), coronary angiographic and revascularization details, and in-hospital complications. Data were compiled, edited, and checked to maintain consistency. 
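The angiographic definition given above (occluded = 100% stenosis or TIMI flow grade 0-1; patent/non-occluded = TIMI 2-3) maps directly onto a simple classification rule. The short Python sketch below is only an illustration of that rule; the parameter names are hypothetical and not taken from the study dataset.

```python
def classify_culprit_artery(stenosis_percent: float, timi_flow_grade: int) -> str:
    """Apply the study's definition of an occluded coronary artery:
    occluded = 100% stenosis or TIMI flow grade 0-1
    patent   = TIMI flow grade 2-3 (non-occluded)
    """
    if stenosis_percent >= 100 or timi_flow_grade in (0, 1):
        return "occluded"
    if timi_flow_grade in (2, 3):
        return "patent"
    raise ValueError("TIMI flow grade must be between 0 and 3")

# Illustrative use with hypothetical angiography readings
print(classify_culprit_artery(100, 0))  # -> occluded
print(classify_culprit_artery(85, 3))   # -> patent
```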
The data were recorded in Microsoft Excel 2014 and analyzed using IBM SPSS Statistics version 24.0. Point estimate and 95% CI were calculated. A total of 126 patients with a diagnosis of NSTEMI who underwent coronary angiography during the study period were included. Among the 126 NSTEMI patients, the prevalence of occluded coronary arteries (OCA) was 41 (32.54%) (95% CI: 24.36-40.72). However, in patients with OCA, there was a female predominance. The mean age at presentation in patients with occluded coronary arteries was 61.02±14.16 years. Dyslipidemia was present in 15 (36.59%) NSTEMI patients with occluded coronary arteries. The baseline characteristics of the study population are shown ( ). NSTEMI patients with occluded arteries presented about 24 hours after the development of symptoms. The mean level of troponin I was 6.59±12.58. Heart failure developed in 17 (41.46%) of the NSTEMI patients with occluded arteries. In-hospital mortality occurred in 1 (2.44%) patient with OCA; the baseline laboratory results and the complications that developed among patients are also shown ( ). TVD was the predominant lesion in NSTEMI with an occluded artery, seen in 18 (43.90%). Among NSTEMI patients with occluded arteries, the culprit artery was as follows: circumflex branch of the left coronary artery (LCX) in 20 (48.78%) and RCA in 6 (14.63%) ( ). In this study, out of 126 patients, 32.5% had NSTEMI with occluded coronary arteries while 67.5% had NSTEMI with non-occluded coronary arteries. This proportion of 32.5% of NSTEMI patients with occluded coronary arteries is similar to previously conducted studies. In one study done in the USA, the frequency of NSTEMI with an occluded coronary artery was around 24%, but the study population was limited to patients undergoing PCI. Further studies have shown variable findings and, depending upon differences in patient selection, the percentage of occluded coronaries was found to be around 29% to 63%. The reason for the higher OCA prevalence in our study might be late presentation of patients to a medical facility, prior undetected MI, or missed STEMI due to a lack of early identification of ischemic heart disease. There was a male predominance (59.5%) among patients presenting with NSTEMI in our study. The majority of patients with NSTEMI and an occluded coronary artery were also male (61% vs 39%). Men have a 2.4-fold overall risk of NSTEMI compared with women. The lower proportion of female patients in our study may be due to fewer females attending hospital for medical care, greater tolerance of symptoms, and a lower likelihood of undergoing coronary angiography because of gender-based inequalities in the treatment intensity of NSTEMI. Studies have shown that the South Asian population has a higher rate of MI at a younger age (mean age 53 years) compared to Western populations, explained by multiple risk factors at ages <60 years. However, our study showed that the mean age of presentation was 65 years. Large-scale studies are required to explain this difference. In contrast, patients with occluded coronary arteries were younger in our study, similar to another study. Hypertension, diabetes mellitus, smoking and dyslipidemia are the major risk factors for NSTEMI. In our study, a family history of CAD was found to be statistically significant. Family history has been emphasized as a major nonmodifiable risk factor by the National Cholesterol Education Program Adult Treatment Panel (NCEP-ATP) III guidelines. 
Family history of MI is an important marker of increased MI risk, and particular weight should be placed on the number of affected first- and second-degree relatives and the age of the relative at the time of presentation. According to the Danish national health registers, a detailed family history can be very useful in assessing MI risk, especially in persons aged 35-55 years. In our study, NSTEMI patients with occluded coronary arteries presented to the hospital earlier than patients with non-occluded coronary arteries. However, there was still a delay in presentation (>24 hours). This delay in presentation can lead to delayed revascularization of an occluded artery, which can cause both early and long-term adverse outcomes. The Occluded Artery Trial (OAT) showed no benefit of revascularization for patients with an occluded infarct artery beyond 24 hours after symptom onset. Enhanced survival with rapid reperfusion of the infarct-related artery has been noted in STEMI patients compared to NSTEMI patients. Therefore, these NSTEMI patients with an occluded coronary artery may represent STEMI equivalents who could benefit from earlier revascularization. There were some limitations to our study. Our study population was small, and hence it was difficult to achieve statistical significance for the differences in clinical outcomes between NSTEMI patients with occluded (OCA) and non-occluded (NOCA) coronary arteries. Our study was also single-centred, and hence the results obtained may not be generalizable to other populations. Follow-up was not performed in our study; as a result, the long-term effect of revascularization could not be ascertained, so a study with long-term follow-up is required. The prevalence of occluded coronary arteries was similar to that in studies done in similar settings. There is as yet no reliable tool to identify this group of patients before performing angiography. These patients have presentations and angiographic findings similar to those of ST-segment elevation myocardial infarction, so whether timely reperfusion will benefit this group of patients requires further study.
Bangpungtongsung-san for patients with major depressive disorder: study protocol for a randomized controlled phase II clinical trial
b7d3917a-4856-4fc7-9855-91f3f79481f7
10091324
Pharmacology[mh]
Major depressive disorder (MDD) is a common mental disorder whose estimated global prevalence after the coronavirus disease-2019 pandemic is 3,153 cases per 100,000 population . One of the most critical problems of MDD is its repeated relapse and recurrence, with recurrence rates of 50% and 85% within 6 months and 10 years, respectively . Most patients with MDD experience repeated episodes and follow a chronic course. MDD is associated with severe social problems, such as suicide, which is the fifth leading cause of death in Korea , and with a negative perception of the disease, which results in a passive approach to receiving treatment . Another problem is the high discontinuation rate of antidepressant treatment; 43.5% of patients with MDD discontinue treatment at 6 weeks . Accordingly, a new drug for depression with few side effects and a low risk of drug dependence is necessary. Patients with MDD share a combination of depressed mood and lethargy. However, changes in sleep and appetite occur differently among patients with MDD . These presentations can be divided into melancholic depression, atypical depression, anxious depression, and a mix of the aforementioned manifestations . Atypical depression is characterized by mood reactivity and at least two of the following symptoms: increased appetite or weight gain, hypersomnia, leaden paralysis, and interpersonal rejection sensitivity . In the literature, atypical depression has also been defined according to two symptoms: increased appetite/weight gain and hypersomnia . In the UK Biobank Mental Health Survey, atypical depression showed earlier onset, more recurrent episodes, and higher severity. Patients with atypical depression had higher rates of comorbid obesity, cardiovascular disease, and metabolic syndrome . According to a meta-analysis of anthropometric studies on subtypes of depression, the atypical depression group showed a 2.55 times higher body mass index (BMI) than the typical depression group . These epidemiological traits imply that an approach different from that for typical depression should be used for atypical depression, considering its comorbidity and course. As one of the major differences in mechanism between atypical and melancholic depression is increased inflammation, shown by increased levels of proinflammatory cytokines and C-reactive protein , a novel medicine should be sought for patients with atypical depression. Bangpungtongseong-san (BTS) is one of the most widely used formulas in traditional east Asian medicine and has been widely sold as an over-the-counter drug for weight control in Korea . In Korea, about 40 products containing BTS extract have been approved as over-the-counter drugs by the Ministry of Food and Drug Safety. The approved indications for BTS extract are accompanying symptoms of hypertension (palpitations, stiff shoulders, and flushing), obesity, swelling, and constipation. Novel effects of BTS beyond those described in traditional medical texts are continuously being revealed through clinical and preclinical studies. BTS has been reported to be effective in various diseases, ranging from metabolic diseases, such as hypertension, lipid abnormalities, and diabetes, to skin diseases, such as herpes zoster, chronic urticaria, and inflammatory dermatitis . Accordingly, BTS is considered to have the potential for its indications to be expanded to other metabolic and inflammatory diseases. 
Furthermore, antidepressant and anti-neuroinflammatory effects of BTS extract have been found in in vivo and in vitro studies . This clinical trial aims to test the efficacy of BTS for patients with MDD in a human study. In particular, considering the characteristics of BTS, which has been widely used for obesity, we plan to include normal-weight or overweight patients with MDD, excluding those who are underweight. Several clinical trials on BTS in obese patients have demonstrated positive results . Moreover, we identified BTS extract as a herbal medicine showing strong antidepressant-like effects among the approved herbal medicine products in Korea . Accordingly, we plan to study BTS as an effective alternative for patients with MDD who are of normal weight or above, have an excessive appetite, and suffer from weight gain. The main objective of this phase II trial is to find the appropriate dose of BTS granules in patients with MDD for a further confirmatory phase III trial. The primary efficacy endpoint is the change from baseline of the 17-item Hamilton Depression Rating Scale (HDRS) total score at 8 weeks. The mean difference in the primary efficacy endpoint will be compared between the high-dose and low-dose BTS groups and the placebo group. If either of the two doses of BTS shows superiority over the placebo, a further confirmatory phase III trial will be planned. The secondary efficacy endpoints include the response and remission rates of depression, as defined by the 17-item HDRS total score, and depression severity, as measured by the Beck Depression Inventory-II (BDI-II). Moreover, the safety of administering BTS granules for 8 weeks compared to that of placebo granules will be evaluated. Trial design and setting This clinical trial is designed as a randomized, controlled, investigator- and participant-blinded multicenter trial. Three groups will be included: the high-dose BTS, low-dose BTS, and placebo groups. The enrolled participants will be randomly allocated to each group in a 1:1:1 ratio. The superiority of the high- and low-dose BTS granules to placebo granules will be tested. This clinical trial will be conducted in two academic hospitals in the Republic of Korea. This protocol follows the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement (see Additional file ). Eligibility criteria Inclusion criteria This clinical trial will enroll men and women aged 19–65 years diagnosed with MDD according to the Diagnostic and Statistical Manual of Mental Disorders-5 (DSM-5) criteria. The baseline 17-item HDRS total score of enrolled participants should be ≥ 18, and the baseline BMI should be ≥ 18.5 kg/m². Only participants who provide informed consent can be included. Exclusion criteria The exclusion criteria are as follows: participants at high risk of suicide; requiring hospitalization due to MDD; diagnosed with and being treated for panic disorder, obsessive disorder, post-traumatic stress disorder, or personality disorder; having a history of manic, schizophrenic, or mixed episodes; having current or lifetime alcohol or other substance abuse/dependence disorders; having a medical condition that may affect depression severity, such as hypothyroidism or hyperparathyroidism; or having an unstable medical condition, such as uncontrolled hypertension or diabetes, liver dysfunction, or renal impairment. 
Participants who received nonpharmacological treatments for depression, such as electroconvulsive therapy, vagal nerve stimulation, or deep brain stimulation, within 3 months, or those who took medicines that may affect depression severity, such as anxiolytics, antipsychotics, corticosteroids, or hormone replacement therapy, within 4 weeks, will also be excluded. To prevent adverse events, participants with loose stools more than 3 times a day within the previous 7 days, those taking other laxatives, or those with symptoms of abdominal pain, vomiting, or loss of appetite due to digestive disorders will be excluded. Moreover, pregnant or lactating women or participants determined to be unsuitable by the investigators will be excluded. The detailed inclusion and exclusion criteria are presented on the clinical trial registration webpage ( https://cris.nih.go.kr/cris/search/detailSearch.do/23192 ). Interventions BTS and placebo granules Three BTS or placebo granule sachets (1 g for each sachet; total, 3 g) will be orally administered twice a day for 8 weeks. Participants in the high-dose BTS group will take three BTS granule sachets, those in the low-dose BTS group will take one BTS and two placebo granule sachets, and those in the placebo group will take three placebo granule sachets for one dosage. The BTS granule was approved by the Ministry of Food and Drug Safety (product code: 197,900,572). One gram of the BTS granule sachet contains 0.5 g of soft-extract BTS as an active ingredient. Soft-extract BTS is prepared by decocting the 18 herbs presented in Table in 8 to 10 times their amount of water at approximately 80–100 °C for 2–3 h. After vacuum concentration under 60 °C, approximately 3.0 g of soft-extract BTS was obtained. This amount, which will be taken daily by the high-dose BTS group, contains at least 3.8 mg of Glycyrrhizic acid, 2.7 mg of Paeoniflorin, 0.6 mg of total alkaloids (Ephedrine and Pseudoephedrine), 15.4 mg of Baicalin, and 5.4 mg of Geniposide. A three-dimensional chromatogram of a BTS sample based on high-performance liquid chromatography-photodiode array analysis can be found in the previous report . The placebo granule does not contain any active ingredients and has been developed to have an identical appearance (lemon yellow granule) and scent to those of the BTS granule. The BTS and placebo granules are manufactured and packaged by Hanpoong pharmaceuticals (Jeonju, Republic of Korea) according to the good manufacturing practice guideline for medicinal products. Criteria for discontinuing allocated interventions If a serious adverse event occurs or a participant wants to discontinue administration owing to an adverse event, the administration of the investigational product will be stopped. Moreover, in case depression becomes too severe or the risk of suicide becomes high in a participant during the trial, the investigators are to determine whether administration should be discontinued. The risk of suicide will be closely assessed using the Columbia Suicide Severity Rating Scale at every visit. In case of discontinuation of the allocated intervention, the safety assessment will be conducted as planned, if possible. Procedure for monitoring adherence The compliance rate of administering investigational products, defined as the percentage of BTS or placebo granule sachets actually taken relative to the number of sachets that should have been taken, will be checked at every visit by pharmacists. 
The number of sachets actually taken will be determined from the number of empty sachets returned by the participant. Participants will be educated on how to take the BTS or placebo granules by pharmacists at every visit. Participants whose compliance rate is ≥ 75% will be included in the per-protocol set. Permitted and prohibited concomitant interventions Medications taken during the 4 weeks before participation in the trial that are not among the following prohibited medications can be permitted as concomitant medications; antidepressants, anxiolytics, antipsychotics, corticosteroids, and hormone replacement therapy are prohibited. Medications or herbal medicines that can affect depressive symptoms, as well as non-pharmacological interventions for improving depression, including acupuncture, meditation, electrical stimulation, and magnetic stimulation, are also prohibited. Moreover, bulk-forming and osmotic laxatives are prohibited. Meanwhile, medications administered for the transient treatment of diseases other than depression can be permitted following assessment by the investigators. Outcomes Primary outcome The primary objective is to evaluate the effect of BTS on depressive symptoms compared to that of the placebo granules. The difference between the two treatment comparisons (high-dose vs. placebo and low-dose vs. placebo) in the change from baseline of the 17-item HDRS total score at 8 weeks is the primary endpoint of this trial . Secondary outcomes Secondary objectives include evaluating the effect of BTS compared to that of the placebo granules on depressive symptoms using clinician-rated and self-rated outcomes at different time points. The change from baseline of the 17-item HDRS total score at 2, 4, 6, and 12 weeks, as well as the response and remission rates defined by the 17-item HDRS total score at 8 and 12 weeks, will be assessed as secondary outcomes. The response rate will be defined as the percentage of participants in each group whose 17-item HDRS total score improved by more than 50%. The remission rate will be defined as the percentage of participants in each group whose 17-item HDRS total score is under 7 points . Secondary outcomes also include the change in the participants’ self-rated BDI-II total score from baseline at 4, 8, and 12 weeks . The secondary objectives also include evaluating the effect of BTS on anxiety, anger, insomnia, and quality of life compared to that of the placebo granules. State and trait anxiety will be assessed using the State-Trait Anxiety Inventory (STAI) . Moreover, state anger, trait anger, and anger expression will be assessed using the State-Trait Anger Expression Inventory (STAXI) . The severity of insomnia will be assessed using the Insomnia Severity Index (ISI) , and quality of life will be assessed using the 3-level version of the EuroQol-5 Dimension (EQ-5D-3L) index . These outcomes will be measured at baseline and at 4, 8, and 12 weeks. Exploratory outcomes The exploratory objectives of this trial include exploring predictive factors for the treatment response. The height and weight of each participant will be measured at the screening visit, and weight will be measured at every visit to calculate BMI. Moreover, the Korean Symptom Check List 95 and the Pattern Identifications Tool for Depression will be assessed at baseline and at 4, 8, and 12 weeks. A schematic diagram of participant timelines is presented in Fig. . Sample size and recruitment The total sample size has been estimated to be 126, with 42 participants for each group. 
To our knowledge, no previous clinical trial has compared the effect of BTS on depressive symptoms with that of placebo granules, and the effect size of BTS was estimated based on the result of a previous clinical trial of another herbal medicine in patients with MDD . The sample size was calculated based on the hypothesis that the mean change from baseline of the 17-item HDRS total score in the high-dose BTS group is greater than that in the placebo group. The mean difference in score between the high-dose BTS and placebo groups was estimated to be 4.0 at 8 weeks, and the pooled standard deviation was estimated to be 5.78. With a significance level (α) of 0.05, statistical power (β) of 0.80, an allocation ratio of 1:1, and a drop-out rate of 0.05, the required sample size for each group was determined to be 42 participants. The participants for this clinical trial will be recruited from two university hospitals in Korea. The recruitment notice will be posted on the hospital bulletin boards and online homepage. Moreover, local advertisements on the subway and online advertisements will be conducted to reach the target sample size within the planned period. All recruitment posters and methods have prior institutional review board (IRB) approval. Random allocation and blinding An independent statistician generated a random allocation sequence using SAS® version 9.4 (SAS Institute Inc., Cary, NC, USA). The manager of the random allocation sequence provided the generated random allocation sequence to the pharmaceutical company to pack the investigational product. The high-dose BTS, low-dose BTS, and placebo granules were packed according to the allocated random numbers based on the sequence provided. The investigators will assign a random number to each participant according to the order of enrollment at visit 2. Allocation will be concealed from the investigators by sequential numbering. The participants, investigators, pharmacist, and outcome assessor will be blinded to the allocated group of each participant. The placebo granule has been developed to have identical color, scent, and taste to those of the BTS granule. In case of a serious medical emergency, unblinding of the group allocation of the participant can be considered. When sub-investigators or principal investigators judge that code-breaking is required, the principal investigator will promptly hold a meeting among the investigators, and a decision as to whether to unblind will be made through discussion. Any case of unblinding and the related medical issues will be reported to the IRB within 24 h. Data collection and management To increase the reliability and validity of the HDRS measured as the primary outcome in this study, the investigators in charge of assessing the HDRS at the two sites were trained using a structured interview guide for the HDRS . The validated Korean versions of the HDRS , as well as those of the BDI-II , STAI , STAXI , ISI , and EQ-5D-3L , will be used in this trial. The investigators will check the participants’ understanding and the missing values for all completed questionnaires. The list of measurements that will be used in this clinical trial is presented in Table . The investigators will send regular messages to participants to encourage them to complete the clinical trial. In case a participant discontinues administration or drops out of the trial, the investigators will attempt to conduct an assessment visit within 1 week for safety and HDRS follow-up. The data will be entered into a case report form (CRF) on an electronic data capture system. 
In the system, the ranges for data values were set to avoid the entry of obvious outliers. The clinical research associates will conduct 100% source document verification between the data recorded in the CRF and the data in the source documents. System-generated and manual queries will be reviewed monthly after the first participant is enrolled. Comorbidities, medical history, and adverse events will be coded using the MedDRA dictionary, and drug history will be coded using the ATC code. Statistical methods Data will be analyzed using SAS® version 9.4 (SAS Institute Inc., Cary, NC, USA). In the efficacy analysis, the full analysis (FA) set will be used as the main analysis set, and the per protocol (PP) set will be used as the supplementary analysis set. The FA set will include randomized participants and minimize exclusions from the analysis. A participant who has never taken the investigational product or has never been evaluated since the random allocation will be excluded from the FA set. Analyses of data containing missing values will be handled with the multiple imputation method. The PP set will include participants who completed the trial without major protocol violations. Participants who drop out during the intervention period (8 weeks), who are found to be inappropriate for inclusion according to the eligibility criteria, or whose total compliance rate is under 75% will be excluded from the PP set. In the safety analysis, the safety set will include all participants who have ever taken the investigational product. The primary efficacy endpoint is the change from baseline of the 17-item HDRS total score at 8 weeks. The mean difference in outcomes will be compared between groups using an analysis of covariance with the site and baseline value as covariates. Tests will be conducted twice to compare the outcomes of the high- and low-dose BTS groups with that of the placebo group. For the multiple parallel-group comparisons, a significance level (α) of 0.025 and statistical power (β) of 0.80 will be used for each test. To analyze continuous outcomes among the secondary outcomes (e.g., BDI-II, STAI, STAXI, ISI, and EQ-5D-3L index scores), an analysis method identical to that for the primary outcome will be used. To analyze binary outcomes among the secondary outcomes, response to treatment and remission rate of depression will be assessed, and a logistic regression analysis will be conducted with the site and baseline value as covariates. The method used to handle multiple comparisons is identical to that used for analyzing continuous outcomes. Additionally, a subgroup analysis with various criteria will be conducted for exploratory purposes. First, subgroups will be classified based on whether participants are of healthy weight or overweight, as assessed by BMI. Second, other subgroups will be defined using the baseline response to the “changes in appetite” item in the BDI-II. Third, some subgroups will be defined using the baseline Korean medicine pattern identification of depression. Moreover, in case of significant differences in baseline demographic information between groups, adjusted analyses can be conducted. Data monitoring and auditing This clinical trial uses an approved herbal medicine product for another indication. BTS granules have not been associated with any serious adverse events in real-world use, and the risk of this trial is expected to be low. Moreover, it is a phase II trial, meaning that a data monitoring committee is not needed. Interim analyses are not planned. 
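As a rough illustration of the primary analysis described above (analysis of covariance on the change score with site and baseline value as covariates, placebo as the reference group, and each dose-versus-placebo contrast tested at α = 0.025), a minimal sketch using Python's statsmodels is shown below. The column names (`hdrs_change`, `baseline_hdrs`, `group`, `site`) and the toy data are hypothetical, and the protocol specifies SAS 9.4, so this is not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical full-analysis-set data: one row per randomized participant.
df = pd.DataFrame({
    "hdrs_change":   [-9.0, -5.5, -12.0, -4.0, -7.5, -3.0, -10.0, -6.0, -2.5],
    "baseline_hdrs": [22, 19, 25, 18, 21, 20, 24, 19, 18],
    "group":         ["high", "low", "high", "placebo", "low", "placebo", "high", "low", "placebo"],
    "site":          ["A", "B", "A", "B", "A", "B", "A", "B", "A"],
})

# ANCOVA: treatment effect on the 8-week change score, adjusted for baseline HDRS and site,
# with the placebo group as the reference level.
model = smf.ols(
    "hdrs_change ~ C(group, Treatment(reference='placebo')) + baseline_hdrs + C(site)",
    data=df,
).fit()
print(model.summary())
# Each dose-vs-placebo coefficient would be judged against a per-comparison alpha of 0.025.
```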
Adverse events will be carefully collected at every visit after the administration of the investigational product. The severity of adverse events will be rated as mild, moderate, or severe. The causality of adverse events in relation to the investigational product will be categorized as follows: definitely related, probably related, possibly related, probably not related, and definitely not related. Adverse events of dyspepsia, diarrhea, and abdominal pain can occur after the administration of BTS granules, and these symptoms will be carefully checked. Regular monitoring is planned, with a monitoring visit conducted for every 4–5 participants enrolled. After the initiation visit, the first regular monitoring visit will be conducted within 7 working days after the first participant is enrolled. Compliance with the IRB-approved clinical protocol, collection of data, written informed consent of participants, recording and reporting of adverse events, management of the investigational product, data entered into the Electronic Data Capture system, and study materials will be checked at the regular monitoring visits. Protocol amendments Protocol modifications will be determined after sufficient discussion by investigators at the hospital and the Korea Institute of Oriental Medicine and will be applied to the study after obtaining approval for the amendment from the IRB. The current version of the protocol is 1.8 (date: 2022–06-27). Confidentiality and post-trial care The personal information of each participant will not be entered into the electronic CRF, and the data of each participant will be collected under screening and random numbers. Follow-up observation will be conducted at week 12, that is, 4 weeks after the completion of the intervention. Compensation criteria and plans for participants who suffer harm from participation in this trial have been prepared. The occurrence of adverse events will be checked at every visit, and the required treatment and observation will be provided until the symptoms disappear. Dissemination policy The clinical study information and results will be registered with the Clinical Research Information Service. The findings of this study will be presented at conferences and published in peer-reviewed journals. The participant-level dataset will be uploaded to the Korean Medicine Data Repository (kmdr.kiom.re.kr) after completing the study. Moreover, we will report the final data to the Ministry of Health and Welfare, Republic of Korea, through the Korea Health Industry Development Institute. Results will also be published following completion of the study.
Only participants who provide informed consent can be included. Exclusion criteria The exclusion criteria are as follows: participants at high risk of suicide; requiring hospitalization due to MDD; diagnosed and being treated for panic disorder, obsessive disorder, post-traumatic stress disorder, or personality disorder; having a history of manic, schizophrenic, or mixed episodes; having current or lifetime alcohol or other substance abuse/dependence disorders; having a medical condition that may affect depression severity, such as hypothyroidism or hyperparathyroidism; or having an unstable medical condition, such as uncontrolled hypertension or diabetes, liver dysfunction, or renal impairment. Participants who received nonpharmacological treatments for depression, such as electroconvulsive therapy, vagal nerve stimulation, or deep brain stimulation within 3 months, or those who took medicines that may affect depression severity, such as anxiolytics, antipsychotics, corticosteroids, or hormone replacement therapy within 4 weeks, will be also excluded. To prevent adverse events, participants demonstrating loose stool for more than 3 times a day within 7 days, taking other laxatives, or with symptoms of abdominal pain, vomiting, or loss of appetite due to digestive disorders will be excluded. Moreover, pregnant or lactating women or participants determined to be unsuitable by the investigators will be excluded. The detailed inclusion and exclusion criteria are presented on the clinical trial registration webpage ( https://cris.nih.go.kr/cris/search/detailSearch.do/23192 ). This clinical trial will enroll men and women aged 19–65 years diagnosed with MDD according to the Diagnostic and Statistical Manual of Mental Disorders-5 (DSM-5) criteria. The baseline 17-item HDRS total score of enrolled participants should be ≥ 18, and the baseline BMI should be ≥ 18.5 kg/m 2 . Only participants who provide informed consent can be included. The exclusion criteria are as follows: participants at high risk of suicide; requiring hospitalization due to MDD; diagnosed and being treated for panic disorder, obsessive disorder, post-traumatic stress disorder, or personality disorder; having a history of manic, schizophrenic, or mixed episodes; having current or lifetime alcohol or other substance abuse/dependence disorders; having a medical condition that may affect depression severity, such as hypothyroidism or hyperparathyroidism; or having an unstable medical condition, such as uncontrolled hypertension or diabetes, liver dysfunction, or renal impairment. Participants who received nonpharmacological treatments for depression, such as electroconvulsive therapy, vagal nerve stimulation, or deep brain stimulation within 3 months, or those who took medicines that may affect depression severity, such as anxiolytics, antipsychotics, corticosteroids, or hormone replacement therapy within 4 weeks, will be also excluded. To prevent adverse events, participants demonstrating loose stool for more than 3 times a day within 7 days, taking other laxatives, or with symptoms of abdominal pain, vomiting, or loss of appetite due to digestive disorders will be excluded. Moreover, pregnant or lactating women or participants determined to be unsuitable by the investigators will be excluded. The detailed inclusion and exclusion criteria are presented on the clinical trial registration webpage ( https://cris.nih.go.kr/cris/search/detailSearch.do/23192 ). 
BTS and placebo granules Three BTS or placebo granule sachets (1 g for each sachet; total, 3 g) will be orally administered twice a day for 8 weeks. Participants in the high-dose BTS group will take three BTS granule sachets, those in the low-dose BTS group will take one BTS and two placebo granule sachets, and those in the placebo group will take three placebo granule sachets for one dosage. The BTS granule was approved by the Ministry of Food and Drug Safety (product code: 197,900,572). One gram of the BTS granule sachet contains 0.5 g of soft-extract BTS as an active ingredient. Soft-extract BTS is prepared by decocting the 18 herbs presented in Table together with 8 to 10 times the amount of water boiling at approximately 80–100 °C for 2–3 h. After vacuum concentration under 60 °C, approximately 3.0 g of soft-extract BTS was obtained. The aforementioned amount, which will be taken on the daily be the high-dose BTS group, contains at least 3.8 mg of Glycyrrhizic acid , 2.7 mg of Paeoniflorin , 0.6 mg of total alkaloid ( Ephedrine and Pseudoephedrine ), 15.4 mg of Baicalin , and 5.4 mg of Geniposide . Three-dimensional chromatogram of BTS sample based on High-performance liquid chromatography-photodiode array analysis can be found in the previous report . The placebo granule does not contain any active ingredients and has been developed to have an identical appearance (lemon yellow granule) and scent to those of the BTS granule. The BTS and placebo granules are manufactured and packaged by Hanpoong pharmaceuticals (Jeonju, Republic of Korea) according to the good manufacturing practice guideline for medicinal products. Criteria for discontinuing allocated interventions If a serious adverse event occurs or a participant wants to discontinue administration owing to an adverse event, the administration of the investigational product will be stopped. Moreover, in case depression becomes too severe or the risk of suicide has become high in a participant during the trial, the investigators are to determine whether administration should be discontinued. The risk of suicide will be closely assessed using the Columbia Suicide Severity Rating Scale at every visit. In case of discontinuing allocated interventions, the safety assessment will be conducted as planned, if possible. Procedure for monitoring adherence The compliance rate of administering investigational products, defined as the percentage of the number of BTS or placebo granule sachets actually taken according to the number of sachets that should be taken, will be checked at every visit by pharmacists. The number of sachets actually taken will be checked by the number of empty sachets returned from the participant. Participants will be educated on how to take the BTS or placebo granules by pharmacists at every visit. Participants whose compliance rate is ≥ 75% will be included in the per-protocol set. Permitted and prohibited concomitant interventions The medications taken 4 weeks before participating in the trial and not among the following prohibited medications can be permitted as concomitant medications; antidepressants, anxiolytics, antipsychotics, corticosteroids, and hormone replacement therapy are prohibited. Medications or herbal medicine that can affect depressive symptoms, as well as non-pharmacological interventions for improving depression, including acupuncture, meditation, electrical stimulation, and magnetic stimulation, are also prohibited. Moreover, bulk-forming and osmotic laxatives are prohibited. 
Meanwhile, medications administered for the purpose of the transient treatment of diseases other than depression can be permitted following assessment by investigators. Three BTS or placebo granule sachets (1 g for each sachet; total, 3 g) will be orally administered twice a day for 8 weeks. Participants in the high-dose BTS group will take three BTS granule sachets, those in the low-dose BTS group will take one BTS and two placebo granule sachets, and those in the placebo group will take three placebo granule sachets for one dosage. The BTS granule was approved by the Ministry of Food and Drug Safety (product code: 197,900,572). One gram of the BTS granule sachet contains 0.5 g of soft-extract BTS as an active ingredient. Soft-extract BTS is prepared by decocting the 18 herbs presented in Table together with 8 to 10 times the amount of water boiling at approximately 80–100 °C for 2–3 h. After vacuum concentration under 60 °C, approximately 3.0 g of soft-extract BTS was obtained. The aforementioned amount, which will be taken on the daily be the high-dose BTS group, contains at least 3.8 mg of Glycyrrhizic acid , 2.7 mg of Paeoniflorin , 0.6 mg of total alkaloid ( Ephedrine and Pseudoephedrine ), 15.4 mg of Baicalin , and 5.4 mg of Geniposide . Three-dimensional chromatogram of BTS sample based on High-performance liquid chromatography-photodiode array analysis can be found in the previous report . The placebo granule does not contain any active ingredients and has been developed to have an identical appearance (lemon yellow granule) and scent to those of the BTS granule. The BTS and placebo granules are manufactured and packaged by Hanpoong pharmaceuticals (Jeonju, Republic of Korea) according to the good manufacturing practice guideline for medicinal products. If a serious adverse event occurs or a participant wants to discontinue administration owing to an adverse event, the administration of the investigational product will be stopped. Moreover, in case depression becomes too severe or the risk of suicide has become high in a participant during the trial, the investigators are to determine whether administration should be discontinued. The risk of suicide will be closely assessed using the Columbia Suicide Severity Rating Scale at every visit. In case of discontinuing allocated interventions, the safety assessment will be conducted as planned, if possible. The compliance rate of administering investigational products, defined as the percentage of the number of BTS or placebo granule sachets actually taken according to the number of sachets that should be taken, will be checked at every visit by pharmacists. The number of sachets actually taken will be checked by the number of empty sachets returned from the participant. Participants will be educated on how to take the BTS or placebo granules by pharmacists at every visit. Participants whose compliance rate is ≥ 75% will be included in the per-protocol set. The medications taken 4 weeks before participating in the trial and not among the following prohibited medications can be permitted as concomitant medications; antidepressants, anxiolytics, antipsychotics, corticosteroids, and hormone replacement therapy are prohibited. Medications or herbal medicine that can affect depressive symptoms, as well as non-pharmacological interventions for improving depression, including acupuncture, meditation, electrical stimulation, and magnetic stimulation, are also prohibited. Moreover, bulk-forming and osmotic laxatives are prohibited. 
Meanwhile, medications administered for the purpose of the transient treatment of diseases other than depression can be permitted following assessment by investigators. Primary outcome The primary objective is to evaluate the effect of BTS on depressive symptoms compared to that of the placebo granules. The difference between the two treatment arms (high-dose vs. placebo and low-dose vs. placebo) in the change from baseline of the 17-item HDRS total score at 8 weeks is the primary endpoint of this trial . Secondary outcomes Secondary objectives include evaluating the effect of BTS compared to that of the placebo granules on depressive symptoms with clinician-rating and self-rating outcomes at different time points. The change from baseline of the 17-item HDRS total score at 2, 4, 6, and 12 weeks, as well as the response and remission rates defined by the 17-item HDRS total score at 8 weeks and 12 weeks, will be assessed as secondary outcomes. The response rate will be defined as the percentage of participants in each group whose 17-item HDRS total score improved by more than 50%. The remission rate will be defined as the percentage of participants in each group whose 17-item HDRS total score is under 7 points . Secondary outcomes also include the change in the participants’ self-rated BDI-II total score from baseline to at 4, 8, and 12 weeks . The secondary objectives also include evaluating the effect of BTS on anxiety, anger, insomnia, and quality of life compared to that of the placebo granules. State and trait of anxiety will be assessed using State-Trait Anxiety Inventory (STAI) . Moreover, state anger, trait anger, and anger expression will be assessed using the State-Trait Anger Expression Inventory (STAXI) . The severity of insomnia will be assessed by Insomnia Severity Index (ISI) , and the quality of life will be assessed using the 3-level version of the EuroQol-5 Dimension (EQ-5D-3L) index . These outcomes will be measured at baseline, and at 4, 8, and 12 weeks. Exploratory outcomes The exploratory objectives of this trial include exploring predictive factors for the treatment response. The height and weight of each participant will be measured at the screening visit, and weight will be measured every visit to calculate BMI. Moreover, Korean Symptom Check List 95 and Pattern Identifications Tool for Depression will be assessed at baseline, and at 4, 8, and 12 weeks. A schematic diagram of participant timelines is presented in Fig. . The primary objective is to evaluate the effect of BTS on depressive symptoms compared to that of the placebo granules. The difference between the two treatment arms (high-dose vs. placebo and low-dose vs. placebo) in the change from baseline of the 17-item HDRS total score at 8 weeks is the primary endpoint of this trial . Secondary objectives include evaluating the effect of BTS compared to that of the placebo granules on depressive symptoms with clinician-rating and self-rating outcomes at different time points. The change from baseline of the 17-item HDRS total score at 2, 4, 6, and 12 weeks, as well as the response and remission rates defined by the 17-item HDRS total score at 8 weeks and 12 weeks, will be assessed as secondary outcomes. The response rate will be defined as the percentage of participants in each group whose 17-item HDRS total score improved by more than 50%. The remission rate will be defined as the percentage of participants in each group whose 17-item HDRS total score is under 7 points . 
The total sample size has been estimated to be 126, with 42 participants in each group. To our knowledge, no previous clinical trial has compared the effect of BTS on depressive symptoms with that of placebo granules, so the effect size of BTS was estimated based on the result of a previous clinical trial of another herbal medicine in patients with MDD. The sample size was calculated based on the hypothesis that the mean change from baseline of the 17-item HDRS total score in the high-dose BTS group is superior to that in the placebo group. The mean difference in score between the high-dose BTS and placebo groups was estimated to be 4.0 at 8 weeks, and the pooled standard deviation was estimated to be 5.78. With a significance level (α) of 0.05, a statistical power of 0.80, an allocation ratio of 1:1, and a drop-out rate of 0.05, the required sample size was determined to be 42 participants per group. The participants for this clinical trial will be recruited from two university hospitals in Korea. Recruitment notices will be posted on the hospital bulletin boards and online homepages. Moreover, local subway advertisements and online advertisements will be used to reach the target sample size within the planned period. All recruitment posters and methods have prior institutional review board (IRB) approval. An independent statistician generated a random allocation sequence using SAS® version 9.4 (SAS Institute Inc., Cary, NC, USA). The manager of the random allocation sequence provided it to the pharmaceutical company to pack the investigational product. The high-dose BTS, low-dose BTS, and placebo granules were packed under the allocated random numbers according to the provided sequence. The investigators will assign a random number to each participant according to the order of enrollment at visit 2. Allocation will be concealed from the investigators by sequential numbering. The participants, investigators, pharmacist, and outcome assessor will be blinded to the allocated group of each participant. The placebo granule has been developed to have a color, scent, and taste identical to those of the BTS granule. In case of a serious medical emergency, unblinding of the group allocation of the participant can be considered.
When sub-investigators or the principal investigator judge that code-breaking is required, the principal investigator will promptly hold a meeting of the investigators, and the decision on whether to unblind will be made through discussion. Any case of unblinding and the related medical issues will be reported to the IRB within 24 h. To increase the reliability and validity of the HDRS, which is measured as the primary outcome in this study, the investigators in charge of assessing the HDRS at the two sites were trained using a structured interview guide for the HDRS. The validated Korean versions of the HDRS, BDI-II, STAI, STAXI, ISI, and EQ-5D-5L will be used in this trial. The investigators will check the participants' understanding and look for missing values in all completed questionnaires. The list of measurements that will be used in this clinical trial is presented in Table . The investigators will send regular messages to participants to encourage them to complete the clinical trial. If a participant discontinues administration or drops out of the trial, the investigators will attempt to arrange an assessment visit within 1 week for safety and HDRS follow-up. The data will be entered into a case report form (CRF) on an electronic data capture system. In the system, ranges for data values were set to avoid the entry of obvious outliers. The clinical research associates will conduct 100% source document verification between the data recorded in the CRF and the data in the source documents. System and manual queries will be reviewed monthly after the first participant is enrolled. Comorbidities, medical history, and adverse events will be coded using the MedDRA dictionary, and drug history will be coded using ATC codes. Data will be analyzed using SAS® version 9.4 (SAS Institute Inc., Cary, NC, USA). In the efficacy analysis, the full analysis (FA) set will be used as the main analysis set, and the per-protocol (PP) set will be used as the supplementary analysis set. The FA set will include randomized participants and minimize exclusions from the analysis. A participant who has never taken the investigational product or has never been evaluated since random allocation will be excluded from the FA set. Missing values will be handled with the multiple imputation method. The PP set will include participants who completed the trial without major protocol violations. Participants who drop out during the intervention period (8 weeks), who are found not to meet the eligibility criteria, or whose total compliance rate is under 75% will be excluded from the PP set. In the safety analysis, the safety set will include all participants who have taken the investigational product at least once. The primary efficacy endpoint is the change from baseline of the 17-item HDRS total score at 8 weeks. The mean difference in outcomes will be compared between the groups using an analysis of covariance with the site and baseline value as covariates. Tests will be conducted twice to compare the outcomes of the high- and low-dose BTS groups with those of the placebo group. For these multiple parallel-group comparisons, a significance level (α) of 0.025 and a statistical power of 0.80 will be used for each test. To analyze continuous outcomes among the secondary outcomes (e.g., BDI-II, STAI, STAXI, ISI, and EQ-5D-3L index scores), an analysis method identical to that for the primary outcome will be used.
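As a rough illustration of the planned ANCOVA for the primary endpoint and the continuous secondary outcomes, here is a minimal sketch in Python with statsmodels rather than SAS; the file name and column names are hypothetical, and handling of missing data by multiple imputation is omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical full-analysis-set file: one row per participant, with
# hdrs_change = week-8 minus baseline 17-item HDRS total score.
df = pd.read_csv("fas_week8.csv")  # columns: group, site, hdrs_baseline, hdrs_change

# ANCOVA as an OLS model: treatment group with placebo as the reference level,
# adjusted for site and baseline score, mirroring the planned primary analysis.
model = smf.ols(
    "hdrs_change ~ C(group, Treatment(reference='placebo')) + C(site) + hdrs_baseline",
    data=df,
).fit()
print(model.summary())  # each BTS-vs-placebo contrast is then judged at alpha = 0.025
```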
To analyze binary outcomes among the secondary outcomes, the response to treatment and the remission rate of depression will be assessed, and a logistic regression analysis will be conducted with the site and baseline value as covariates. The method for handling multiple comparisons is identical to that used for analyzing continuous outcomes. Additionally, subgroup analyses with various criteria will be conducted for exploratory purposes. First, subgroups will be classified based on whether participants are of healthy weight or overweight, as assessed by BMI. Second, other subgroups will be defined using the baseline response to the "changes in appetite" item of the BDI-II. Third, some subgroups will be defined using the baseline Korean medicine pattern identification of depression. Moreover, in case of significant differences in baseline demographic information between groups, adjusted analyses can be conducted. This clinical trial uses a herbal medicine product already approved for another indication. No serious adverse events have been reported with BTS granules in real-world use, and the risk of this trial is expected to be low. Moreover, because this is a phase II trial, a data monitoring committee is not considered necessary. Interim analyses are not planned. Adverse events will be carefully collected at every visit after administration of the investigational product. The severity of adverse events will be rated as mild, moderate, or severe. The causality of adverse events in relation to the investigational product will be categorized as follows: definitely related, probably related, possibly related, probably not related, and definitely not related. Dyspepsia, diarrhea, and abdominal pain can occur after the administration of BTS granules, and these symptoms will be carefully checked. Regular monitoring is planned, with a monitoring visit conducted for every 4–5 participants enrolled. After the initiation visit, the first regular monitoring visit will be conducted within 7 working days after the first participant is enrolled. Compliance with the IRB-approved clinical protocol, data collection, participants' written informed consent, recording and reporting of adverse events, management of the investigational product, data entered into the electronic data capture system, and study materials will be checked during the regular monitoring visits. Protocol modifications will be determined after sufficient discussion by the investigators at the hospitals and the Korea Institute of Oriental Medicine and will be applied to the study after obtaining IRB approval for the amendment. The current version of the protocol is 1.8 (date: 2022-06-27). The personal information of each participant will not be entered into the electronic CRF, and the data of each participant will be collected under screening and random numbers. Follow-up observation will be conducted at week 12, that is, 4 weeks after completion of the intervention. Compensation criteria and plans for participants who suffer harm from participation in this trial have been prepared. The occurrence of adverse events will be checked at every visit, and the required treatment and observation will be provided until the symptoms resolve. The clinical study information and results will be registered with the Clinical Research Information Service. The findings of this study will be presented at conferences and published in peer-reviewed journals.
The participant-level dataset will be uploaded to the Korean Medicine Data Repository (kmdr.kiom.re.kr) after completion of the study. Moreover, we will report the final data to the Ministry of Health and Welfare, Republic of Korea, through the Korea Health Industry Development Institute. The results will also be published after completion of the study. The proposed study will examine the efficacy and safety of 8 weeks of BTS administration, compared with placebo, in patients with MDD with a BMI ≥ 18.5 kg/m². We designed the trial to investigate both low-dose and high-dose BTS by allocating participants to high-dose BTS, low-dose BTS, and placebo groups in a 1:1:1 ratio. In addition to assessing depression using the HDRS total score as the primary outcome, the response and remission rates, anxiety, anger, insomnia, and quality of life will be measured. The current study will compare both low and high doses of BTS, because low-dose BTS showed better efficacy for depression in animal studies, even though the dose commonly used for obese human patients is equivalent to the high dose used in this trial. As obesity and depression share biological pathways, BTS is expected to act on both obesity and depression. Obesity causes hypothalamic–pituitary–adrenal dysregulation and changes in the plasma levels of cortisol, leptin, adiponectin, resistin, and insulin, which are hormones involved in emotional and mood regulation. Individuals become vulnerable to both obesity and depression when an imbalance in appetite and dysregulation of central nervous system homeostasis occur. The association between the two diseases is complex and bidirectional. These findings suggest that medications indicated for obesity could be repositioned as new antidepressants, given the possibility of mechanisms common to both diseases. Thus, it is expected that BTS may be particularly effective for patients with atypical depression, especially those who have bulimia or weight gain. These aspects were reflected not only in the inclusion criteria but also in the planned additional analyses, which include subgroup analyses of patients who are overweight based on BMI and of patients who reported increased appetite in the depression symptom evaluation during screening. Moreover, this clinical trial considered pattern identification, an important feature of diagnosis and clinical decision-making in Korean medicine (KM). Herbal medicines for depression are often prescribed according to the individual's pattern identification. To design clinical trials on herbal medicine, the disease studied should be defined clearly in both the conventional medical and the traditional East Asian medical approach. This clinical trial will recruit patients with MDD following the DSM-5 diagnostic approach, and pattern identification will be partially implemented using the objective measurement of BMI. By excluding patients who are underweight, we intended to exclude patients who are not suitable candidates for a BTS prescription. This study has some limitations. First, even though we planned to develop and use BTS in patients with atypical depression, we did not adopt a formal diagnosis of the atypical depression subtype and only excluded patients who are underweight. The feasibility of recruiting participants was considered, and we attempted to use objective recruitment criteria as much as possible.
Second, as this is a placebo-controlled phase II trial, comparison of BTS with commonly used antidepressants, such as selective serotonin reuptake inhibitors, is warranted in a future phase III definitive trial. This phase II trial will provide information on the efficacy and safety of BTS in patients with MDD who are of healthy weight or overweight. The findings of this randomized controlled study are expected to provide evidence for a novel approach to depression with fewer side effects and a low risk of drug dependence, especially in the atypical depression subtype. Additional file 1. SPIRIT checklist.
Primary healthcare competencies needed in the management of person-centred integrated care for chronic illness and multimorbidity: Results of a scoping review
Chronic diseases such as cardiovascular and pulmonary disease and diabetes mellitus type 2 are the leading causes of death and disability worldwide. According to the World Health Organization these diseases kill 41 million people each year, equivalent to 71% of all deaths globally. Approximately one in three adults suffers from more than one chronic disease. This is called multimorbidity, which is defined as the coexistence of two or more chronic conditions in the same individual. The management of chronic diseases and multimorbidity is complex, and the challenge is recognized worldwide. Patients with multimorbidity are at higher risk of safety issues, for instance due to polypharmacy, more frequent and complex medication interactions, and the involvement of different healthcare professionals, resulting in competing priorities and a lack of coordination of care. The Health Education Framework (2017) states: "A one-size-fits-all health care system simply cannot meet the increasing complexity of people's needs and expectations". A broader perspective on the management of chronic disease seems necessary. A dominant focus on medical treatment is too limited, as the disease affects the daily living of patients. Therefore it is argued that treatment programmes should include other domains of life as well, to meet the specific needs of individuals, resulting in greater satisfaction with care and better physical and social well-being of patients. This perspective has led to the development of personalised strategies to replace disease-management strategies, which can be referred to as person-centred integrated care (PC-IC). Person-centred or patient-centred care means that individuals' values and preferences are elicited and that these preferences guide all aspects of their health care. According to the HEE (Health Education England) framework, patient-centred care means that people feel free to speak out about what is important to them and that the healthcare professional listens to what matters to people. There is no unifying definition or common conceptual understanding of integrated care, because different perspectives construct the concept. However, there is consensus that integrated care is an approach to overcome care fragmentation, especially where this fragmentation and the disconnect between different healthcare providers have an adverse impact on people's care experiences and care outcomes. Integrated care is suitable for people with complex or long-term care needs. In this review we use the term person-centred integrated care as an umbrella term comprising person-centred or patient-centred care and integrated care, as this refers to a holistic, individualized approach that empowers the patient to make effective care plans together with their healthcare providers, who collaborate interprofessionally and treat the patient as an equal partner. PC-IC is believed to improve outcomes and experience for persons with long-term and complex conditions. Multimorbidity is predominantly dealt with in primary care. The PC-IC approach of giving patients more choice and control in their lives is particularly suitable in this setting, where general practitioners (GPs) often have a life-long relationship with patients. Specific disease management programmes improve quality of care and patient outcomes in chronic disease.
However, considering the complexity of care for patients with one or more chronic diseases, their care needs often cannot be met by one single professional, as different areas of expertise are necessary to optimize care for this large group of patients. The primary care team consists of different professionals, such as GPs, nurses, physical therapists, psychologists and dieticians, who work side by side, rely on each other's expertise and, where necessary, collaborate with professionals from other sectors, for instance hospitals and social welfare organizations. The healthcare professionals involved should be equipped to be part of a collaborative, interprofessional team whose focus lies within the concept of PC-IC. This requires a specific skillset from team members. Being a member of such a collaborative team means working together and jointly setting achievable goals, based on the needs and preferences of the individual patient. Shifting from regular disease management towards PC-IC also means a shift in professional competencies, due to the holistic approach that underlies it, which considers the different domains of the patient's life. A competency is defined as an observable ability of a health professional, integrating multiple components such as knowledge, skills, values, and attitudes. It is, however, still unclear which competencies these primary healthcare professionals should have or obtain in order to be able to deliver PC-IC in the primary care setting. In this scoping review our primary objective was to provide an overview of the current scientific knowledge on which competencies healthcare professionals who provide PC-IC to patients with one or more chronic diseases should have. Our second aim was to gain insight into how these competencies can be acquired.
Study design
We performed a scoping review guided by the methodological framework proposed by Arksey and O'Malley: (I) identifying the research question, (II) identifying relevant studies, (III) selection of eligible studies, (IV) charting the data, and (V) collating, summarizing and reporting the results. A scoping review does not typically involve quality assessment of the methodology of empirical studies but is specifically designed to identify gaps in the evidence base. We did not perform a critical appraisal based on study design, as we aimed to include all available evidence.
I. Identifying the research question
Our primary research question for the literature review was: Which interprofessional competencies do primary care professionals need to offer person-centred integrated care for patients with one or more chronic diseases? Our secondary research question was: How can these competencies be acquired?
II. Identifying relevant studies
We developed a comprehensive search strategy with the assistance of a librarian (TP) of the HAN University of Applied Sciences. The search included an extensive search string using Boolean operators and truncations to combine all relevant keywords, and we checked the results of our search strategy against key publications. We chose a sensitive rather than a specific search strategy, to ensure we would not miss relevant guidelines or peer-reviewed papers of interest. Different definitions and concepts were included in the search string. For instance, multimorbidity and comorbidity were both added, as they both refer to multiple chronic conditions (MCC). The difference lies in how healthcare systems view patients with MCC.
A hospital setting mostly looks at the one disease and then the comorbidities, whereas the primary care setting or another generalist setting can more easily change focus according to the patient's priorities. There is also variation in the terminology used to describe team collaboration; terms include 'multidisciplinary', 'interdisciplinary', 'interprofessional' and 'multiprofessional'. The term interprofessional applies when two or more professions learn or practice together to improve health outcomes in patients, whereas multiprofessional applies when professions practice together but not necessarily on shared goals. The search was conducted from the onset of the respective literature databases until September 2020. In January 2023 we updated our search, using the same search strategy, to see whether any new articles or guidelines could be added to this scoping review. First, we searched for chronic disease guidelines and chronic disease management programmes that involved the primary care setting. The search for the guidelines took place in the Trip medical database ( https://www.tripdatabase.com ) with the following terms, including their linguistic variations: (a) primary care, (b) integrated care, (c) chronic illness, (d) multimorbidity, (e) shared decision making and (f) competencies (Appendix 1). For this search no filters were applied. Next, using the same keywords, we searched for peer-reviewed articles in the following scientific literature databases: Cinahl, Embase, PubMed, Medline, and Web of Science (Appendix 1). Grey literature was hand-searched by the main researcher (LM) through websites of relevant national and international journals, by scanning reference lists, and through Google and Google Scholar. We included all literature without date restrictions. We only searched for articles written in English or Dutch, as these languages were covered by the authors. Search records were downloaded, combined and de-duplicated using EndNote bibliographic software (Clarivate Analytics, Philadelphia, PA, U.S.A.). Afterwards, we exported our search records to Rayyan QCRI, which facilitates the process of blind screening. All titles, abstracts and full texts were reviewed against the inclusion and exclusion criteria, see Table .
III. Study selection
The titles and abstracts of both the guidelines and peer-reviewed articles were screened blind by pairs of two researchers (LM, AT, EB, ML, NvD), of which the main researcher (LM) screened all identified guidelines and peer-reviewed articles. First, the titles and abstracts were screened for relevance. Publications considered relevant by only one of the two reviewers were discussed until consensus was reached. Secondly, the full-text publications were read, and data were extracted by one author and checked by a second. We included published, peer-reviewed and grey literature. All types of study designs describing competencies could be included.
IV. Charting the data
Two reviewers (LM, ML) jointly developed a data charting form in Excel to describe relevant information. One reviewer extracted data from the included empirical studies and guidelines. The form included information on study design, country, aim or objective, participants and the described competencies. The main researcher (LM) filled in the data forms, which were subsequently checked by one of the other researchers (ML or EB). The authors frequently met to discuss the charting of the data.
At the first stage of analysis, we collected descriptions of any statement potentially related to the competencies needed for the execution of PC-IC, excluding disease-specific competencies. This resulted in four overarching themes. At the second stage of the analysis, one reviewer identified the underlying core concepts, i.e., the skills, knowledge and attitudes that emerged under the four main themes. These were then summarised under the interdependent themes. The extracted details were cross-checked by a second researcher (ML). She read all the notes and coding of the first researcher (LM) and cross-checked these against the papers and guidelines. If any discrepancy was discovered, it was discussed until consensus was reached.
V. Collating, summarizing and reporting the results
In this final step a narrative report was produced to summarize the extracted data. The PRISMA checklist for scoping reviews was used to make sure we covered all essential items.
The initial searches identified 327 guidelines and 1,810 articles. In January 2023 the search was updated, which resulted in 139 guidelines and 421 new articles to be screened. After removing duplicates, posters and conference abstracts, a total of 464 guidelines and 1,153 articles were screened for inclusion (Fig. ). Disagreements were resolved in discussion between the two researchers, and it was not necessary to involve a third researcher as referee. The screening resulted in 17 guidelines and 104 articles being selected for full-text review. After reading the full-text publications, a total of 4 guidelines and 21 articles met our inclusion criteria and were therefore included in the data synthesis.
Study characteristics
Table reports the study characteristics of the included studies and guidelines. The four guidelines included were from the United States (n = 2), Australia (n = 1) and Switzerland (n = 1). Publication dates ranged between 2014 and 2021. The guidelines covered different patient populations: one was on COPD (chronic obstructive pulmonary disease), one on elderly people, one on palliative and end-of-life care in stroke patients, and one on primary prevention of chronic disease in the general practice setting. The 21 included peer-reviewed papers used quantitative, qualitative and mixed research methods.
The designs comprised one randomized controlled trial, four literature reviews, two expert opinions, and two mixed-methods studies. The remaining twelve studies were qualitative studies. The included studies were performed in the United States (n = 9), the Netherlands (n = 5), Australia (n = 2) and one study in each of the following countries: Belgium, Canada, Ireland, New Zealand and the United Kingdom. Publication dates ranged between 2006 and 2020. The specific healthcare professionals involved in the execution of PC-IC varied. Seven studies involved PC-IC from the perspective of one profession: nurses, nurse practitioners, general practitioners, behavioral health consultants, or primary care internal medicine residents. Three studies involved a mix of healthcare professionals, including general practitioners, nurses, occupational therapists, pharmacists, physiotherapists, social workers and speech language therapists. In the remaining eleven studies the authors did not specify the profession. The scope of the studies involved different patient populations: patients with multimorbidity, frail elderly or elderly with serious illness, multimorbidity or an aging population, palliative care, prevention of chronic illness, or COPD. In nine studies the chronic illness was not specified. All studies involved a form of person-centred care, described as 'a whole person approach', 'shared decision making' or 'improving self-management'.
Identified competencies
All competencies concerning PC-IC described in the included documents were extracted. The data synthesis identified four main themes: (1) patient-centred communication, (2) interprofessional communication, (3) collaborative teamwork and (4) leadership. In Appendix 2 we report the code tree with examples from the included studies.
Person-centred competencies
Person-centred communication
All guidelines and 18 articles describe a professional's communication with patients as an important competency within PC-IC. Open communication is central to person-centred care. Communication with patients should also be based on equality. Professionals with good communication skills conduct person-centred assessments to identify what matters most to the patient. In patient-centred communication, professionals support their messages with evidence-based information tailored to the patient's needs. Professionals should also be skilled in relational communication techniques for communication with caregiver(s), family members or a delegated decision-maker. Good listening skills are strongly highlighted within the PC-IC approach. Professionals should recognize nonverbal signals and strive for clarity of communication. It is important that professionals take the patient's level of understanding into consideration, which may be affected by, for instance, language barriers, physical impairments and possible cultural differences. They also respond to the patient's emotions and needs and follow up by providing tailored responses to these needs. Furthermore, professionals should be able to apply motivational interviewing techniques, as research has shown that this improves the quality of professional-patient interaction and shared decision making.
Interprofessional competencies
Interprofessional communication
Two guidelines and 8 articles described communication as an important competency when offering PC-IC. Communication requires a two-way and open dialogue between professionals, in team meetings as well as in bilateral conversations. Decision-making, problem-solving and goal setting are important issues to be discussed with each other. Also, this should be an interdisciplinary team effort. It is essential that the collaborating healthcare professionals are able to discover shared patient goals during team meetings. Each healthcare professional should have the ability to communicate with colleagues and other disciplines in a bidirectional manner. This means that each party is aware of the others' professional backgrounds, strengths and boundaries, and of the points on which professionals can reinforce each other. Team consensus is reached by dialoguing and discussing issues with all team members on an equal level. In this communication, one's own professional perspectives and expertise are highly valued and contribute to the quality of PC-IC plans. Good communication skills are not only necessary within the primary care team; it is equally important that these healthcare professionals show good communication skills towards external organizations such as other healthcare services or community agencies. The American Heart Association/American Stroke Association guideline describes the importance of effective communication between professionals, but does not explain further which competencies are needed for effective communication.
Collaborative teamwork
All guidelines and 12 articles described interprofessional teamwork or team collaboration skills. Healthcare professionals should have the ability and motivation to work collaboratively with others and to share pertinent information; it is also important to share knowledge of each other's involvement when pursuing the same goals for their patients. Person-centred care is a team effort and is achieved through teamwork. Another critical competency is the intrinsic motivation of professionals to collaborate with others. This is essential, as interprofessional collaboration is often considered to be time consuming, while time is scarce. Interpersonal factors may also create barriers to collaboration, and therefore it is important to define a shared language and discuss the diversity of personal perspectives. Healthcare professionals should know who else is on the team, and there should be a clear understanding of the professional's own role as well as of the other professions' roles and competencies. It could be helpful if the professionals within the collaborative team invest in getting to know each other. Research has shown that professionals who know each other well are better able to take advantage of each other's discipline-specific competencies. Knowing each other also contributes to an atmosphere of mutual trust and respect, which creates an open and safe environment in which the professionals involved dare to think and act beyond their own discipline.
Leadership
Two guidelines and four articles mark good leadership as an important competency for sustainable and effective collaboration in interprofessional teams.
Team leadership characteristics include modelling and advocating interprofessional teamwork, providing resources and infrastructure, and promoting shared team leadership, goals and decision making. Leadership skills are also required to bring the interprofessional team together and to support professionals in adopting the shift in values and attitudes towards collaborative working. Leadership skills are also necessary for attaining efficient and successful team meetings (i.e., planning, agenda setting, structuring, chairing). Although all team members should have leadership skills, within the collaborative team one team member should take the role of leader or coordinator and monitor the team's shared goals and objectives. Professionals with strong leadership competencies show themselves to be patient care advocates; they ensure that the team discusses the patient's goals and needs and that patients are put at the centre of care.
Acquiring the competencies necessary to offer person-centred integrated care for patients with one or more chronic diseases
Three guidelines and 17 articles mentioned the need for ongoing education or training for professionals, whether for communication, for interprofessional collaboration or for the execution of the PC-IC approach. This requires new knowledge and skills, but a change in attitude is also necessary. Most articles considered education to be a major facilitating factor in ensuring that (future) professionals are equipped to provide care for patients with chronic illness and multimorbidity. Professional education to develop knowledge and skills should be incorporated in undergraduate as well as postgraduate programmes and be part of on-the-job training. In interprofessional education, two or more professions learn with, about, and from each other to enable effective collaboration and improve health outcomes in patients. Learning together with other healthcare professionals will also improve the understanding of each other's roles. Two papers specified the training needs. Van der Pol et al. and Helitzer et al. reported that professionals need specific training in communication; in particular, professionals need more skill in asking open-ended questions. Rocker et al. emphasized that during medical training, through effective mentorship and observation, medical students should obtain in-depth skills in how to discover patients' needs.
Three guidelines and 17 articles mentioned the need for ongoing education or training for professionals, whether for communication, for interprofessional collaboration or for the execution of the PC-IC approach. This requires new knowledge and skills, but a change in attitude is also necessary. Most articles considered education to be a major facilitating factor in ensuring that (future) professionals are equipped to provide care for patients with chronic illness and multimorbidity. Professional education to develop knowledge and skills should be incorporated in undergraduate as well as postgraduate programmes and be part of on-the-job training. In interprofessional education, two or more professions learn with, about and from each other to enable effective collaboration and improve health outcomes in patients. Learning together with other healthcare professionals will also improve the understanding of each other's roles. Two papers specified the training needs. Van der Pol et al. and Helitzer et al. reported that professionals need specific training on communication; in particular, professionals need more skill in asking open-ended questions. Rocker et al. emphasized that during medical training, through effective mentorship and observation, medical students should obtain in-depth skills in how to discover patients' needs.

This scoping review identified and described the interprofessional competencies as well as the patient-centred competencies that are needed when professionals aim to provide PC-IC in primary care. The overall findings contained limited information about specific qualifications and competencies. The competencies are mostly described in general terms, for instance 'communication skills', and are rarely defined in detail. The HEE framework describes in more detail which competencies are shown when a professional delivers person-centred care; the aim of that framework is to set out core, transferable behaviours, knowledge and skills. With regard to communicative competencies, we also found some details similar to the HEE framework, such as asking open-ended questions. However, merely asking open-ended questions does not mean that a healthcare professional delivers person-centred care; asking open-ended questions to explore and understand the patient, his or her personal situation and what matters to him or her does make care more person-centred. We did not find details on how the competencies can be trained. Nonetheless, we were able to derive important competencies from the findings. Communication, collaborative teamwork and leadership seem to be essential competencies that healthcare professionals in primary care should either have or make sure to acquire when delivering PC-IC. The communication competencies expected from healthcare professionals apply to interprofessional communication as well as to patient-centred communication, and both should be based on equality and respect for the interlocutor(s). This is also confirmed by a recent literature review on competencies to promote collaboration between primary and secondary care physicians. That review also showed, similar to our findings, that team members should be open minded and willing to look beyond their own position. We found that healthcare professionals should know who else is on the team and that there should be a clear understanding of the other professions' roles and competencies.
Knowing each other also contributes to an atmosphere of mutual trust and respect. Perceived hierarchy is the main conceptual barrier hindering collaboration between professionals. A new approach leads to a shift from subordination to complementarity in order to meet patients' needs and to strengthen interprofessional collaboration. Patient-centred care requires physicians and other healthcare professionals to have the communication skills to elicit patients' true wishes and to recognize and respond to both their needs and their emotional concerns. As described in the HEE framework, the workforce listens to what matters to patients and gives them the opportunity to speak out freely. Our findings show that asking open-ended questions, listening, recognizing nonverbal signs and the ability to adjust to the patient's level of understanding are the most important communication skills needed to accomplish this. We also found that leadership skills are needed to facilitate interprofessional collaboration in more than one way. Leadership skills are needed by professionals within the primary care setting, but also in relation to collaboration with professionals from external organizations. Jansen et al. described three levels at which leadership can be demonstrated: (1) in relation to other persons, (2) in facilitating collaboration, and (3) at a system level, creating an environment in which collaboration between primary and secondary care is promoted and facilitated. In the included articles, the factor 'time' is described as important for facilitating interprofessional collaboration and the execution of PC-IC. Time is important during consultation in order to build a relationship with the patient and meet their needs. The lack of time and the large number of patients to be seen daily are important barriers when dealing with patients with multimorbidity. Other research also shows that seeing more than 3 or 4 patients per hour may lead to suboptimal content of consultations, lower patient satisfaction, increased patient turnover, or inappropriate prescribing. This suggests that, besides competencies, a different way of organizing practice (extra consultation time) is also necessary for successful execution of PC-IC. Besides time for patient consultations, current payment systems may hinder collaboration between healthcare professionals, as interprofessional meetings are often not reimbursed. In preparing healthcare professionals to take on this task, establishing standards for training in PC-IC is important. The HEE framework describes core, transferable behaviours, knowledge and skills for becoming a person-centred healthcare professional. The framework focuses on communicative competences and interventions that can be implemented. It also describes learning outcomes which can be used to educate healthcare professionals. However, the scope of this framework is not specific to a certain practice, and additional content may therefore be required for some roles and contexts. Our findings can be seen as such additional content, specifically in the context of primary care practice. The prevalence of chronic illness is growing worldwide, and management is increasingly undertaken by interprofessional teams, yet education is still generally provided in a monodisciplinary way.
Educational training of both undergraduate students and graduated healthcare professionals is needed to better prepare healthcare professionals to meet the needs of ageing patients with multiple chronic conditions in a way that is person-centred, effective and sustainable. Patients' personal goals can be used as a guide in interprofessional collaboration, as they might have the potential to integrate different care plans with each other. However, there is still a need for professionals to acquire, through training, the competencies to discuss patients' personal goals. Interprofessional education has an important role to play in professionals developing the competencies required to collaborate successfully. Future research on education should guide professionals in acquiring the different qualifications and competencies.

Strengths and Limitations
To our knowledge this is the first review to provide an overview of the competencies that healthcare professionals should possess to deliver PC-IC in primary care. Another strength of our review is that we used various and broad search terms, allowing inclusion of all types of literature, both scientific and grey. The aim of this study was to provide a comprehensive list of competencies; we therefore deliberately chose to include all types of study designs and guidelines without limitations, in order to capture relevant guidelines as well as scientific articles. This study was also subject to some limitations. We excluded studies in languages other than English and Dutch. Although we might have missed some studies, most studies are likely to be published in English. While performing this review, we noted rather heterogeneous terminology describing the concept of the PC-IC approach as well as interprofessional collaboration. Therefore, to optimize our search strategy, we thoroughly explored different definitions and concepts before finalizing it. Nonetheless, we may have missed relevant studies that report PC-IC-related competencies due to the use of different terminology. In accordance with guidelines for scoping reviews, we did not undertake a methodological quality assessment of the included articles, although critical appraisal of methodology and ranking by level of evidence are commonly used in systematic reviews and meta-analyses of the literature. We deliberately chose to include all types of study designs and guidelines without limitations in order to capture all required competencies. We gave equal weight to all included guidelines and articles, regardless of the robustness of the underlying methodology. We consider this justified given the purpose of the scoping study, i.e., providing a narrative account of the competencies for executing PC-IC and how these can be acquired.
We identified interprofessional as well as patient care-related competencies relevant to the execution of person-centred integrated primary healthcare. Nonetheless, guidelines and articles mostly lack a detailed description of these competencies in terms of skills, knowledge and attitudes. Insight into these core concepts is necessary to properly educate healthcare professionals in primary care to deliver PC-IC. Further research in which the core concepts of the required competencies are clearly described is still necessary to properly prepare primary healthcare professionals to offer high-value care to patients with chronic diseases and multimorbidity. Educational programmes, both undergraduate and postgraduate, should take these competencies into account. A shift towards interprofessional education is necessary to acquire these competencies. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3
Anticipatory banking of samples enables diagnosis of adenylosuccinase deficiency following molecular autopsy in an infant with vacuolating leukoencephalopathy
da282fd7-fdae-4e06-884a-c3fa2411577e
10091700
Forensic Medicine[mh]
INTRODUCTION Next-generation sequencing has transformed the diagnosis of genetic conditions. However, interpretation of variants of uncertain significance (VUS) remains a major challenge. Reverse phenotyping through clinical history, examination, imaging or functional studies can help classify VUSs (De Goede et al., ; Landini et al., ). However, reverse phenotyping can be challenging in prenatal settings, or in very young or deceased individuals. Here, we present a case of an infant with adenylosuccinase deficiency (OMIM 103050) that expands the clinical spectrum of this rare disease and shows the value of pre-mortem banking of a range of tissue samples for anticipatory reverse phenotyping in individuals whose demise is expected.

CASE REPORT 2.1 Clinical presentation The proband (male) was the first child of a nonconsanguineous couple of white British ethnicity with a previous early miscarriage of unknown cause. Excess in utero fetal movements were noted during pregnancy. The child was born by emergency Caesarean section, after induction at 41 weeks and 4 days of gestation, with APGAR scores of 6 at 1 min and 9 at 5 min. His birth weight was 2390 g (0.45 SD) and head circumference was 33 cm (1.24 SD). The newborn was noted to be severely hypotonic, with absence of spontaneous movements, and had a protruding tongue and bilateral deep creases between the great and the first toes. On Day 2 of life, he developed seizures, which were predominantly myoclonic jerks. He had several seizures per day and several episodes of hypothermia and hypoglycaemia. Seizures were refractory to treatment with levetiracetam, phenytoin, pyridoxal phosphate, vigabatrin, phenobarbitone, and benzodiazepines. The child's clinical features were suggestive of severe epileptic encephalopathy. Electroencephalogram showed burst suppression. Magnetic resonance imaging (MRI) of the brain obtained at 14 days of age showed signal abnormality of the entire cerebral white matter and, to a lesser extent, of the cerebellar hemispheres (Figure ). The signal abnormality extended to involve the external capsules and the claustrum bilaterally. Diffusion restriction was noted in the posterior limbs of the internal capsules bilaterally, extending into the dorsal brainstem. Reduced diffusion was also seen in the adjacent lateral thalami bilaterally. There was no evidence of myelin deposition in the posterior limbs of the internal capsules bilaterally. Susceptibility weighted imaging sequences showed focal micro-hemorrhages in the right occipital sub-cortical white matter. The corpus callosum was diffusely thin in caliber and the temporal horns of the lateral ventricles appeared prominent. These results suggested a likely leukodystrophy, but its pattern and biochemical investigations (Table ) did not lead to detection of the underlying cause. He developed pan-enteric necrotizing enterocolitis on Day 51. Due to continued deterioration and an extremely poor prognosis, he was too unstable for surgical management and, after discussion with the family, he was extubated electively and died within a few hours, at 52 days of age. As an accurate diagnosis was not known, skin fibroblasts, skeletal muscle, plasma, and cerebrospinal fluid samples were stored. Parental consent was obtained to collect these samples.
These findings in combination with the clinical features suggested a vacuolating leukoencephalopathy. A clinical exome analysis was performed on a stored DNA sample as described previously (Patricia Molina-Ramírez et al., ; Stoyle et al., ). This revealed two ADSL variants, c.632T>A (p.(Leu211His)) and c.1277G>A (p.(Arg426His)) (NM_000026.2), which were confirmed to be in trans by bi-directional Sanger sequencing of parental samples. Bi-allelic loss-of-function ADSL variants cause adenylosuccinase deficiency (Georges & Berghe, ; Jurecka et al., ; Stenson et al., ; Stone et al., ). Several patients with the p.(Arg426His) variant have been previously reported (Donti et al., ; Mao et al., ) and this variant was, therefore, classed as pathogenic according to the American College of Medical Genetics and Genomics (ACMG) criteria (PS1 PS3 PM1 PM2 PP3; Richards et al., ). At the time of the discovery, the p.(Leu211His) variant was novel and classed as a VUS (PM1 PM2 PP3). As the variant was proven to be in trans with a pathogenic variant, it could potentially be reclassified to likely pathogenic. However, vacuolating encephalopathy has never been described with this condition previously.

2.3 Reverse biochemical phenotyping Adenylosuccinase (EC 4.3.2.2) catalyzes two steps involving β-elimination of fumarate in the de novo synthesis of purine nucleotides, converting succinylaminoimidazole carboxamide ribotide into aminoimidazole carboxamide ribotide (Kmoch et al., ). ADSL is also involved in the purine nucleotide cycle by forming adenosine monophosphate (AMP) from succinyladenosine monophosphate, which prevents AMP accumulation after adenosine triphosphate catabolism (Swain et al., ). Adenylosuccinase deficiency results in the presence of succinylaminoimidazole carboxamide riboside (SAICAr) and succinyladenosine (S-Ado) in cerebrospinal fluid and plasma (Georges & Berghe, ). Following the genetic results, purine and pyrimidine metabolites were separated in previously stored plasma samples from the proband and quantitated by reversed-phase UPLC with diode array UV detection on a Waters Acquity UPLC system (Waters). This showed elevated S-Ado (1398.7 μmol/L) and SAICAr (696.0 μmol/L). Next, adenylosuccinase activity was measured in stored fibroblasts as described previously (Bierau et al., ) and was found to be 0.013 nmol/(μg prot × h), which was only ~5% of the control value. In the ACMG classification, PP4 can be used as a supporting piece of evidence when the patient's phenotype is in its entirety consistent with a specific genetic etiology. These results could, therefore, be used to reclassify the VUS as likely pathogenic, thus confirming the diagnosis of adenylosuccinase deficiency in the child.

2.4 Subsequent pregnancy Following the confirmation of the diagnosis in their deceased child, the parents opted for genetic testing via chorionic villus biopsy in the next pregnancy, which showed the fetus to be unlikely to be affected by adenylosuccinase deficiency. They now have a healthy 2-year-old child.
DISCUSSION Collectively, the clinical and post-mortem features along with the radiological, genetic, and biochemical findings confirmed a diagnosis of adenylosuccinase deficiency in the proband. The fatal neonatal form of the condition is characterized by variable combinations of impaired intrauterine growth, decreased fetal movements, loss of fetal heart rate variability, neonatal-onset encephalopathy, microcephaly, intractable seizures, absence of spontaneous movements, respiratory failure, and death within the first weeks of life (Jurecka et al., ). In addition to the fatal neonatal form, ADSL deficiency is known to occur in Type I and Type II forms (Jurecka et al., ). Type I, the most common form, is characterized by onset within the first few months of life with severe psychomotor delay, seizures, developmental arrest, severe cortical visual impairment, and microcephaly. Type II is a moderate or milder form of the disease and is characterized by onset within the first years of life with mild-to-moderate psychomotor delay, seizures, and ataxia in some patients. We expand the known phenotype spectrum of the condition by demonstrating vacuolating leukodystrophy in an individual with neonatal-onset adenylosuccinase deficiency. Notably, spongiosis has been previously described in adenylosuccinase deficiency (Mierzewska et al., ). The early onset of vacuolization in a fatal neonatal case is a novel finding for this condition and reflects the severity of the defect, as supported by the biochemical results. Of note, the white matter vacuolation was not seen on the MRI performed at Day 14, potentially suggesting a progressive nature of the disorder.
Additionally, the mother reported excessive fetal movements, in contrast to the reduced movements usually found. This could, therefore, be another feature of ADSL deficiency where the onset of seizures occurs prenatally, as demonstrated in this case. This report shows the power of combined biochemical and genomic studies in molecular autopsies and in the accurate diagnosis of inborn errors of metabolism (Ghosh et al., ). This was enabled by the previously banked plasma samples and fibroblasts from the deceased child. Without these samples, it might not have been possible to confidently give the diagnosis, because vacuolating leukodystrophy is not a known feature of adenylosuccinase deficiency. Prenatal diagnosis in the next pregnancy may, therefore, not have been possible. Reverse phenotyping has an important role in correlating variants with clinical features (De Goede et al., ). However, in deceased individuals, reverse phenotyping can be challenging and can limit the ability of diagnostic laboratories to provide prenatal or cascade testing. This case demonstrates the importance of having appropriate consent and anticipatory banking of biological samples for future reverse phenotyping in individuals with undiagnosed disorders who may not survive. Clinicians should consider this possibility, especially with the increasingly earlier application of whole exome sequencing (WES) in the diagnostic pathways of neonates with unexplained severe diseases. Grace Vasallo, Julija Pavaine, Lydia Bowden, Bernd Schwahn, and Siddharth Banka provided clinical details. Adele Fairclough and Ronnie Wright performed genetic analysis. Hetalika C. Banka, Lynnette Fairbanks, Jörgen Bierau, and Alistair Horman performed biochemical analyses. Spatikha Sitaram, Hetalika C. Banka, and Siddharth Banka wrote the article. All co-authors read and approved the manuscript. Table S1 Extensive biochemical investigations and results
Superficial low‐grade fibromyxoid sarcoma
14ad595e-b25e-436b-bdcf-5b317671cc64
10091772
Anatomy[mh]
INTRODUCTION Low‐grade fibromyxoid sarcoma (LGFMS) is a distinctive malignant fibroblastic neoplasm characterized by alternating fibrous and myxoid areas containing deceptively bland spindled cells classically exhibiting a short fascicular and whorled growth pattern. , , The spectrum includes cases with giant rosettes, originally designated as “hyalinizing spindle cell tumor with giant rosettes.” , These tumors consistently have either FUS::CREB3L2 or FUS::CREB3L1 gene fusions, and rarely EWSR1::CREB3L1 . Multiple studies have shown recurrence rates of 1%–9% and metastasis in 6%–27%, primarily to lungs, pleura, and chest wall. , , , However, these rates are higher with long‐term follow‐up; Evans's study with the longest follow‐up (at least 5 years) reported local recurrence of 64% (up to 15 years after diagnosis), metastases in 45% (up to 45 years after diagnosis), and death of disease in 42% (from 3 to 42 years after diagnosis). LGFMS typically presents as a slowly growing asymptomatic mass on the lower extremities, usually the thigh, followed by the groin/perineum and trunk. Most lesions are localized to the deep soft tissues, including the skeletal muscle. , Only one previous large series focusing on superficial LGFMS suggested superficial tumors were disproportionately more common in children and might have a better prognosis, but this study was limited by the lack of available confirmatory testing at the time (e.g., MUC4 immunohistochemistry). Herein, we report an additional 23 cases of superficial LGFMS in order to confirm these findings and increase general awareness of superficial LGFMS. MATERIALS AND METHODS The Institutional Review Board Committee of the authors' institutions approved this study. Our electronic surgical pathology files were retrospectively reviewed to identify all cases of LGFMS diagnosed from January 2008 to January 2021. For inclusion, the tumors had to be confined to superficial soft tissue (dermis and/or subcutis only) without fascial or skeletal muscle involvement. Twenty‐three cases met the criteria, of which 16 were consultation cases. Demographic and clinical information, clinical diagnosis, imaging studies, and follow‐up were obtained from medical records or by communication with referring pathologists and primary physicians. The cutoff to be considered a pediatric patient was ≤18 years. Histopathologic parameters including location, mitotic figures, borders of the tumor, and presence of necrosis were collected. Results of immunohistochemical stains and molecular studies for FUS rearrangement by FISH were obtained. RESULTS 3.1 Clinical data The clinicopathologic features are summarized in Table . Twenty‐three patients (nine males; 14 females) with a median age of 29 years (range: 2–65 years) constituted the cohort. Eight (35%) patients were children and five (22%) were young adults (18–30 years). The majority involved the lower extremity (65%), including the gluteal region (five cases), thigh (four cases), inguinal region (three cases), leg, great toe, and pretibial location (one each). The remaining sites included flank (three cases), occipital scalp (two cases), abdominal wall, axilla, and paraspinal (one each). The lesions range from 1 to 9.2 cm in greatest dimension (median of 2.8 cm). When available, the pre‐operative clinical impression was mainly benign, with cyst being the most common diagnosis followed by lipoma, nodular fasciitis, pilomatricoma, hematoma, peripheral nerve sheath tumor, and vascular malformation. 
In three patients (Cases 1, 7, and 12), the clinical differential diagnosis included malignant entities: metastatic squamous cell carcinoma, synovial sarcoma, and sarcoma not otherwise specified. A contributor's histopathologic diagnosis was provided for four cases and included LGFMS, solitary fibrous tumor (SFT), calcifying fibrous pseudotumor, fibrohistiocytic lesion, and spindle cell/myxoid lipoma. Follow‐up available on 14 cases ranged from 11 to 148 months (median 61 months). None developed recurrence or documented metastases. One patient died from metastatic ovarian cancer. 3.2 Pathologic features Histopathologically, the tumors were primarily centered in the subcutis (21/23; 91%), with two centered in the dermis (2/23; 9%) (Figure ). The majority were circumscribed (17/18; 94%) with eight having a thick fibrous pseudocapsule (8/18; 44%). All had classic features of LGFMS characterized by alternating areas of a collagenized stroma admixed with myxoid zones, both containing a population of bland spindled cells displaying a storiform to whorled growth pattern (Figure ). A prominent vascular network of curvilinear to arborizing vessels was more prominently observed in the myxoid zones (Figure ). Perivascular sclerosis or hypercellularity was noted in some of the cases (Figure ). The tumor cells had bland, oval to spindled, slightly hyperchromatic nuclei (Figure ). The cellularity of the lesions varied widely from low to moderate, with rare cases showing hypercellular areas with sheets of cells and mild to moderate nuclear pleomorphism (Figure ). Significant pleomorphism was absent, and necrosis was seen in only one tumor (1/23; 4%). Mitotic figures were mostly lacking (18/23; 78%) and, when present (4/23; 17%), scarce with one per 10 high‐power fields. Only one case (case nine) had conspicuous mitotic activity (six mitotic figures per 10 high‐power fields) (Figure ). This same case also exhibited non‐perivascular hypercellular areas with more round cells, mild‐to‐moderate pleomorphism, a vaguely storiform growth pattern, and focal necrosis (Figure ). Case 11 showed morphologic patterns of both LGFMS and sclerosing epithelioid fibrosarcoma (SEF) with prominent hyalinized sclerotic collagen matrix associated with bland epithelioid cells arranged in vague cords (Figure ). This case also displayed unusual features, including multinucleated giant cells and osseous metaplasia (Figure ). Collagen rosettes were not identified in any case. Of 23 cases, 10 showed a positive margin at the excision specimen. Case 10 was a biopsy and information on excision margin status is not available. Immunohistochemical stains for MUC4 showed strong and diffuse positivity in 16/16 cases tested (Figure ). The remaining seven cases, including two that were also strongly and diffusely positive for MUC4 by immunohistochemistry, showed FUS rearrangement by fluorescent in situ hybridization (FISH). Additional immunohistochemical stains performed showed positive expression for Bcl‐2 (2/2) and variable expression for smooth muscle actin (SMA, 2/14) and epithelial membrane antigen (EMA, 1/9). The neoplastic cells were uniformly negative for S100 protein (18/18), CD34 (10/10), desmin (8/8), CD99 (4/4), STAT6 (4/4), beta‐catenin (4/4), pankeratin (3/3), neurofilament (2/2), CD68 (2/2), cytokeratin 903 (1/1), CD56 (1/1), HMB45 (1/1), CD117 (1/1), SOX10 (1/1), Melan‐A (1/1), glial fibrillary acidic protein (GFAP, 1/1), Factor XIIIA (1/1), and CD163 (1/1). 
DISCUSSION LGFMS is a rare, histopathologically low-grade sarcoma first described by Dr Harry L. Evans in 1987. LGFMS accounts for fewer than 5% of soft tissue sarcomas. It affects men and women equally and typically affects young adults (mean age 35–45 years), but can be seen in patients of any age. The tumor usually arises in the proximal extremities or trunk, but rare locations, including the abdominal and thoracic cavity, visceral organs, and intracranial sites, have been reported. The majority occur in a subfascial location, but superficial cases have been described. Our current study describes 23 additional patients with superficial LGFMS confirmed by immunohistochemistry or FISH, promoting awareness of this tumor and expanding upon its clinicopathologic features. LGFMS classically has alternating hypercellular and hypocellular areas of tumor cells in a background of collagenous, myxocollagenous, or myxoid stroma. There is usually an abrupt transition between collagenous and myxoid zones. The lesional cells are bland, with small angulated nuclei, scant wispy cytoplasm, and limited nuclear atypia, and are arranged in swirling growth patterns. Mitotic figures are scarce to absent. Curvilinear blood vessels, characterized by long, sinuous vessels with a collapsed lumen and perivascular sclerosis, are observed, particularly in the myxoid areas. The tumors in our series typically had classic histopathologic features of LGFMS. In addition, a few cases showed other morphologic variations described in the literature, including a loose storiform pattern, hypercellular areas with more round cells and increased mitotic activity, multinucleated giant cells, and osseous metaplasia. One case (Case 11) also had overlapping features of SEF in addition to classic LGFMS features. SEF was first described by Dr Meis-Kindblom in 1995 and is characterized by cords and strands of large epithelioid cells with clear or eosinophilic cytoplasm surrounded by densely sclerotic stroma. In some cases, it can show areas indistinguishable from LGFMS and rarely harbors FUS rearrangements, suggesting a biological relationship between the two entities. Studies have also illustrated SEF-like areas in some cases of LGFMS, as seen in one of our cases. Despite the overlapping morphologic, immunophenotypic, and molecular features, SEF is still considered a separate entity in the current WHO classification. In contrast to LGFMS, SEF has more diverse and complex molecular findings, occurs in somewhat older patients (median age of 48 years), and is more aggressive with a higher death rate and shorter survival.
A study raised the possibility that tumors with focal SEF-like areas should be regarded as within the spectrum of LGFMS, because that study showed that pure SEF (tumors that lack recognizable LGFMS-like areas) does not usually harbor FUS rearrangements. This distinction provided a strong genetic basis. Our case with SEF-like areas occurred in an older patient who had no evidence of disease with limited follow-up of 61 months. The cytogenetic hallmark of LGFMS is t(7;16)(q33;p11), resulting in the oncogenic fusion gene FUS::CREB3L2 (cAMP-responsive element-binding protein 3-like 2), seen in 75%–95% of cases, and t(11;16)(p11;p11), resulting in FUS::CREB3L1 fusion genes, seen in approximately 5% of patients. Furthermore, two cases with morphologic features similar to classic LGFMS were found to harbor a novel EWSR1::CREB3L1 gene fusion. This translocation was previously shown in two LGFMS-SEF hybrid cases. LGFMS exhibits strong and diffuse granular cytoplasmic immunoreactivity with immunohistochemical stains for MUC4. The MUC4 gene, located on the long arm of chromosome 3 (3q29), is one of the top upregulated genes in LGFMS. MUC4 protein is a high-molecular-weight transmembrane glycoprotein that functions in cell growth signaling pathways through interactions with the ERBB2 (HER2) family. It is normally expressed on many epithelial surfaces, including respiratory and colonic epithelium, where it is believed to have a protective role. MUC4 is a highly sensitive marker for the diagnosis of LGFMS. In our study, 16/16 cases tested were positive for MUC4. It is important to note that aberrant expression or overexpression of MUC4 has also been reported in various carcinomas, including pancreas, ovary, lung, breast, colon, prostate, and myoepithelial carcinoma, and in mesenchymal tumors, including SEF, synovial sarcoma, ossifying fibromyxoid tumor, epithelioid gastrointestinal stromal tumors, and PAX3/7::FOXO1 fusion-positive rhabdomyosarcomas. With the exception of MUC4, other immunohistochemical stains are non-specific in diagnosing LGFMS. EMA positivity has been the most consistent finding, with expression ranging from 43% to 91%. Our study shows only 11% (1/9) of the cases positive for EMA. CD99 and Bcl-2 expression has also been shown in the majority of LGFMS. In our series, 2/2 cases were positive for Bcl-2 and 4/4 cases were negative for CD99. The tumor can show variable expression of cytokeratins, CD34, desmin, SMA, claudin 1, and muscle-specific actin and is consistently negative for S100 protein, Kit, and GFAP. The largest series to date focusing on superficial LGFMS consists of 19 cases, but this series predated routinely available confirmatory testing. Besides that series, there were only a few previous studies that included cases of superficial LGFMS, but those articles had too few patients or did not separate superficial from deep LGFMS. The patients in that study included 12 males and seven females with a mean age of 29 years; 37% (7/19) of patients were children, with the lower extremity being the most common location. Their study reported 14/16 (88%) patients with no evidence of disease recurrence and 2/16 (12%) patients with local recurrence at 5 and 16 months (mean follow-up of 44 months), but no distant metastasis.
Our series, consisting of LGFMS confirmed by ancillary tests, largely corroborates these previous findings: 35% of cases occurred in children, most commonly involving the lower extremity, and none of the patients developed metastasis. In contrast, this series had no episodes of local recurrence and had a modest female predominance (61%). The difference in sex predilection may reflect bias in the original series of superficial LGFMS, as the majority of cases in that series were from the Armed Forces Institute of Pathology. Our data further suggest that superficial LGFMS may have a better overall prognosis than deep LGFMS, likely the result of early recognition of smaller lesions that are amenable to complete excision. It is also well known that children, in contrast to adults, are inclined to have an overall better outcome with low-grade sarcomas. However, longer-term follow-up is still necessary to confirm this, given the propensity for late metastasis in deep LGFMS. Given the rarity of the tumor, bland cytology, and variable morphology, LGFMS can be difficult to distinguish from some benign mesenchymal tumors and other low-grade sarcomas. An accurate diagnosis of LGFMS is essential because these patients require complete excision and long-term follow-up. Perhaps the closest histopathologic simulant is perineurioma. Like LGFMS, perineuriomas have bland spindled morphology and a whorled pattern, often have variably collagenous to myxoid stroma, and may exhibit collagen rosettes. Unlike LGFMS, they typically lack a prominent vasculature and abrupt transitions from collagenous to myxoid areas. Immunohistochemical stains can help differentiate the two diagnoses. While both LGFMS and perineurioma may exhibit immunoreactivity for EMA and claudin-1, perineuriomas are negative for MUC4. One of the benign entities considered by the contributing pathologist was spindle cell lipoma, especially the "low-fat" and "fat-free" variants. Although both tumors present as superficial lesions, spindle cell lipoma is usually present on the upper trunk/neck of older men and is extremely rare in the lower extremity. None of our cases were present on the upper trunk/neck. Additionally, in spindle cell lipoma, the spindle cells are characterized by parallel arrays of ropy collagen, and most cases have a significant amount of admixed mature adipocytes. Both lesions can have CD34+ cells, but spindle cell lipoma usually has strong CD34 expression. Superficial fibromatosis most commonly involves the hands or feet. Only one of our cases involved an acral site. Superficial fibromatosis typically lacks myxoid stroma, has a more fascicular growth pattern, and is negative for MUC4. Cutaneous myxomas, also known as superficial angiomyxomas, are characterized by the presence of bland spindle cells in an abundant myxoid stroma without alternating collagenous zones. Myxomas may also have stromal neutrophils and follicular induction, features not seen in superficial LGFMS. Cutaneous myxomas are negative for MUC4. Nodular fasciitis may have myxoid stroma, but is overall more cellular than superficial LGFMS, with stromal edema, granulation-tissue-like vascular proliferation, extravasated red blood cells, and inflammatory cells. It also has SMA immunoreactivity. SFT is characterized by a spindle cell proliferation that is often very bland-appearing and devoid of cytologic atypia against a background of variable degrees of collagenized stroma.
Prominent dilated and branching blood vessels in the fibrous component of LGFMS can resemble SFT. However, SFT is negative for MUC4, harbors the NAB2::STAT6 gene fusion, and is positive for STAT6 by immunohistochemistry. , Myxoid DFSP frequently has areas of conventional DFSP in up to 60% of cases. Myxoid DFSP still has the honeycomb pattern of fat infiltration, has more delicate vessels, and is positive for CD34 and negative for MUC4. , In summary, we present a large series of superficial LGFMS and confirm the findings that children are disproportionately affected by superficial LGFMS, and that superficial LGFMS may be less aggressive than deep LGFMS. It can be challenging to separate from other benign entities and, therefore, should be in the histopathologic differential diagnosis of bland spindled cell tumor of the dermis and subcutaneous tissue. The main limitation of our study is the somewhat short follow‐up (range: 11 months to 12.3 years and median of 61 months). Further studies with longer follow‐up would help support these findings. The authors declare no conflict of interest.
Calcium‐channel blockers: Clinical outcome associations with reported pharmacogenetics variants in 32 000 patients
b50446b2-07d7-4f50-9f2e-1ba03c802f89
10091789
Pharmacology[mh]
Antihypertensives are amongst the most commonly prescribed medications. The pharmacogenomics knowledge base PharmGKB documents genetic variants reported to influence antihypertensive effectiveness or adverse events. The levels of supporting evidence are variable, and evidence of impact on clinical outcomes, especially in routine primary care (rather than in acute hospital care settings), is limited. We estimated the extent to which 23 commonly occurring pharmacogenetic variants reported to affect calcium-channel blocker effectiveness are associated with clinical outcomes in the UK Biobank community cohort. We used a novel pharmacogenetic causal inference approach to estimate the outcome if all participants had the low-risk genotype. We found that if carriers of the RYR3 variant rs877087 could experience the same treatment effect as noncarriers, the incidence of heart failure in patients prescribed calcium-channel blockers would reduce by 9.2%.

INTRODUCTION High blood pressure (hypertension) is a key modifiable risk factor for cardiovascular morbidity and mortality. While reducing raised blood pressure is the goal, only one third of hypertensive patients treated with antihypertensive medications are estimated to reach target blood pressures. The reasons for failure to control raised blood pressure are complex, but genetic factors are proposed to play a role, either directly on blood pressure or indirectly by influencing antihypertensive medication response, adverse events or medication adherence. Calcium-channel blockers (CCBs) are the first-line recommended antihypertensive for most adults with hypertension, and their use is widespread across the world. There are 2 subgroups of CCBs, the most common being dihydropyridines (dCCBs), which are regarded as relatively safe and cost-effective. Oedema is a common dCCB adverse effect, with reported incidence rates of 22%, which affects the quality of life of patients and can lead to discontinuation of treatment. The presence of oedema can result in additional prescribing, which in turn can cause additional adverse outcomes including falls, over-diuresis, acute kidney injury and polypharmacy. Genetic factors can predispose to side effects, as well as further complications. The pharmacogenomics knowledge base (PharmGKB) documents genetic variants reported to influence dCCB effectiveness or adverse events. The level of supporting evidence for each variant is variable, with many having only limited clinical evidence. Reported genes containing single-nucleotide polymorphisms (SNPs) include those encoding calcium channel subunits themselves, such as the voltage-gated calcium channel subunit α1C (CACNA1C). SNPs in other genes are reported to alter dCCB responses (including PICALM, TANC2, NUMA1, APCDD1, GNB3, SLC14A2, ADRA1A, ADRB2 and CYP3A4). SNPs in ATP-binding cassette subfamily B member 1 (ABCB1) and in cytochrome P450 3A5 (CYP3A5) are reported to affect the clearance of dCCBs, and SNPs in cytochrome P450 oxidoreductase (POR) reportedly influence the plasma concentration of medicines. SNPs in nitric oxide synthase 1 adaptor protein (NOS1AP) are reported to increase the risk of cardiovascular death, and SNPs in ryanodine receptor 3 (RYR3) and in natriuretic peptide precursor A (NPPA) are reported to increase the risk of cardiovascular disease.
In particular, RYR3 (an intracellular calcium channel gene) was found to be associated with heart failure (HF), and there is a need to examine its effect on stroke, as well as its effect on heart disease in high-risk groups, which remains unknown. Evidence of impact on clinical outcomes, especially in routine primary care (rather than in acute hospital care settings), is currently limited for most pharmacogenetic variants reported to affect dCCBs. Here we analyse the UK Biobank (UKB) community volunteer cohort with linked genetic and medical records. We aimed to determine the extent to which 23 commonly occurring (minor allele frequency >3%) pharmacogenetic variants in 16 genes reported to affect dCCB effectiveness or rates of adverse events are associated with clinical outcomes.

METHODS 2.1 UKB cohort The UKB enrolled 503 325 community-based volunteers aged 40–70 years who visited 1 of 22 assessment centres in Wales, Scotland or England in 2006–2010. Extensive questionnaires on demographic, lifestyle and health information data were collected at the baseline assessment. Blood samples for genetic and biochemical analyses, and anthropometric measures, were gathered. This study of dihydropyridines was conducted using the linked GP (primary care) data available for 230 096 participants. Data were available between January 1990 and August 2017 (see below for details). Participants gave consent to receive relevant information about clinical findings at baseline only: therefore, UKB data on individual genetic status were not reported to participants or their clinicians and could not therefore have influenced prescribing.

2.2 General practice data More than 57 million prescriptions for 230 096 (45.7%) participants were recorded in the primary care data. The GP data were available up to 31 May 2016 (England TPP system supplier) and 31 August 2017 (Wales EMIS/Vision system). Drug name, quantity, date of prescription and drug code (in clinical Read v2, British National Formulary [BNF] or dm + d [Dictionary of Medicines and Devices] format, depending on supplier) are available. We used the UK National Institute for Health and Care Excellence (NICE) BNF database ( https://bnf.nice.org.uk ) to identify medication drug and brand names prescribed in the NHS that matched our search criteria for antihypertensives. Where another study included specific medications/brands, we also included these. We included participants prescribed dihydropyridines and identified prescribing records for these medications (see for details), including the date of each prescription. We also identified antihypertensive prescriptions other than dCCBs (diuretics, β-blockers, α-blockers, angiotensin converting enzyme inhibitors and other antihypertensives) using the Read 2 codes and BNF codes (see Table for details). We defined the censoring date for GP prescribing as either the date of deduction (removal from the GP list, where available) or 31 May 2016 where no deduction date was present (i.e., still registered at an available practice). Data after 31 May 2016 are incomplete, depending on GP provider (see UKB documentation ).
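To make the prescription-handling step above concrete, the following is a minimal R sketch (not the authors' code) of how dCCB prescriptions could be extracted from a long-format prescribing table and summarized per participant; the table name (gp_scripts), column names (eid, drug_name, issue_date) and the short drug-name list are illustrative assumptions only.

# Illustrative only: identify dihydropyridine CCB prescriptions and each
# participant's first dCCB prescription date. All names are assumptions.
library(dplyr)

dccb_names <- c("amlodipine", "felodipine", "nifedipine",
                "lercanidipine", "lacidipine", "nicardipine")

dccb_rx <- gp_scripts %>%                                   # one row per prescription
  filter(grepl(paste(dccb_names, collapse = "|"),
               tolower(drug_name))) %>%
  group_by(eid) %>%
  summarise(first_dccb_date = min(as.Date(issue_date)),     # date of first dCCB script
            n_dccb_scripts  = n())

In practice, matching would be performed against the full BNF/Read code lists described above rather than a short name list.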
Cardiovascular events from hospital admissions records were available up to 14 years of follow-up after baseline assessment (HES in England up to 30 September 2020; data from Scotland and Wales censored to 31 August 2020 and 28 February 2018, respectively), covering the entire period up to the date of censoring of primary care prescribing data. Diagnoses of myocardial infarction (MI)/angina, stroke, chronic kidney disease (CKD), HF and ischaemic stroke were ascertained using ICD-10 codes (see for further details).

2.4 Genetic variants We utilized genotype data from UKB, as described previously (see for details). Our analysis included 451 367 participants (93%) identified as genetically European (identified by genetic clustering, as described previously): unfortunately, sample sizes from other ancestry groups were too small to analyse separately. We analysed the genetic variants with documented effects on dCCB effectiveness in the literature and in the PharmGKB database (March 2022). This included 29 SNPs in the following genes: NPPA, NOS1AP, CYP3A4, CYP3A5, GNB3, RYR3, CACNA1C, ABCB1, ADRA1A, SLC14A2, ADRB2, POR, PICALM, TANC2, NUMA1 and APCDD1 (see Table for details). Genotype status for 23 variants could be ascertained from the available UKB imputed data (release version 3) and minor allele frequencies were common enough to study (frequency varying from 3 to 46%; see Table for details) in the UKB cohort: results for all 23 studied SNPs are reported. We calculated correlation coefficients (R2) to check for linkage disequilibrium.

2.5 Primary analysis Associations between genotypes and outcomes (GP-diagnosed oedema and hospital-diagnosed coronary heart disease [CHD; MI/angina], HF and CKD) were estimated using Cox proportional hazards regression models. See for further details of model specifics. To estimate the genetically moderated treatment effect (GMTE), we used TWIST (Triangulation Within a Study), a novel pharmacogenetic causal inference approach. This enables estimation of the predicted outcome if all participants were reassigned the low-risk genotype, therefore providing an estimate of the genetic effect. In brief, the method uses Aalen additive hazards regression models to test several assumptions common to pharmacogenetic analysis: primarily that the genetic variants do not predict whether an individual receives dCCB treatment; are not associated with any measured confounders predicting dCCB use or the studied outcome; and only affect the outcome through the interaction with dCCBs (see Bowden et al. for details). From this analysis, the most efficient and robust estimate of the GMTE is derived. Of note, the GMTE estimate may be the result of applying a single method, or instead be the combination of 2 or more estimates from different methods. The TWIST framework explicitly tests the association between genotype and outcome in the treated and untreated groups separately, to determine the GMTE independent of any effect in untreated individuals. We used R version 4.0.2 and the R package twistR (https://github.com/lukepilling/twistR) v.0.1.3. We also investigated the association between genotype and likelihood of switching dCCBs for an alternative antihypertensive prescription using Cox proportional hazards regression models, with adjustment for age at first prescription, sex and genotyping principal components of ancestry 1–10. See for further details.
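As a minimal illustration of the modelling approach described above (not the authors' actual code), the following R sketch fits a genotype-outcome Cox proportional hazards model; the data frame d and its column names (follow_up_years, hf_event, genotype, age_first_rx, sex, PC1 to PC10) are hypothetical stand-ins for the analysis dataset.

```r
# Minimal sketch: association between genotype and an outcome (e.g., incident HF)
# in dCCB-treated patients, adjusted for age at first prescription, sex and
# genetic principal components 1-10. 'd' is a hypothetical analysis data frame.
library(survival)

cox_fit <- coxph(
  Surv(follow_up_years, hf_event) ~ factor(genotype) + age_first_rx + sex +
    PC1 + PC2 + PC3 + PC4 + PC5 + PC6 + PC7 + PC8 + PC9 + PC10,
  data = d
)
summary(cox_fit)  # hazard ratios (HR) with 95% CIs for each genotype group vs. the reference
```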
To adjust for multiple statistical testing and control the false discovery rate, we applied Benjamini–Hochberg correction to P values for the associations between the 23 SNPs and each outcome (using the R function p.adjust).

2.6 Secondary analysis in patients with heart disease diagnosis prior to dCCB treatment As a secondary analysis, we included only patients who had any heart disease prior to dCCB treatment in the MI/angina/HF outcome models, as worsening angina and acute MI are reported as a caution for patients with coronary artery disease by the Food and Drug Administration in the prescribing information. We also tested associations for stroke and the RYR3 calcium channel gene variant in patients on dCCBs, to examine the treatment effect, as stroke was associated with RYR3 in a genome-wide association study (GWAS) regardless of use of dCCBs.

2.7 Nomenclature of targets and ligands Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org and are permanently archived in the Concise Guide to PHARMACOLOGY 2019/20 (Alexander et al., 2019a,b).
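A minimal sketch of the Benjamini–Hochberg correction described in the primary analysis, using base R's p.adjust with made-up P values rather than the study's results:

```r
# Illustrative only: raw P values for a set of SNP-outcome associations (made-up numbers).
p_values <- c(0.002, 0.01, 0.04, 0.07, 0.20, 0.65)

# Benjamini-Hochberg adjustment controls the false discovery rate across the tests.
p_adjusted <- p.adjust(p_values, method = "BH")

data.frame(raw_p = p_values, bh_adjusted_p = p_adjusted)
```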
SENSITIVITY ANALYSIS

3.1 Prescribed additional antihypertensives We identified the BNF and Read 2 codes for the antihypertensive medication classes (β-blockers, α-blockers, diuretics, angiotensin receptor blockers, angiotensin converting enzyme inhibitors and vasodilators) and determined whether patients receiving dCCB prescriptions also received another antihypertensive within the dCCB prescribing time period. We then included this variable as a covariate in analyses.

3.2 Amlodipine and other dCCBs We performed sensitivity analyses of our primary results by splitting the dCCBs into 2 categories (patients prescribed amlodipine, by far the most common dCCB, and patients prescribed other dCCBs only) and repeated the analyses described in the previous sections. Further splitting of nonamlodipine dCCBs was not feasible due to low numbers.

3.3 Analysis of unrelated participants only We identified participants related to the third degree or closer using KING kinship analysis. We then repeated our primary analyses only in unrelated participants of European descent by randomly excluding 1 of each pair related to the third degree or closer.
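The unrelated-participants sensitivity analysis can be sketched as below; kinship_pairs is a hypothetical data frame of participant ID pairs related to the third degree or closer (e.g., derived from KING output), and this simple per-pair exclusion is an approximation rather than the authors' exact procedure.

```r
# kinship_pairs: hypothetical data frame with columns id1 and id2, one row per
# pair of participants related to the 3rd degree or closer (e.g., from KING output).
set.seed(2022)  # make the random choice of which member of each pair to drop reproducible

drop_ids <- unique(apply(kinship_pairs[, c("id1", "id2")], 1,
                         function(pair) sample(pair, 1)))

# 'd' is the hypothetical analysis data frame used earlier; keep only unrelated
# participants and re-run the primary models on this subset.
d_unrelated <- d[!(d$participant_id %in% drop_ids), ]
```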
RESULTS

4.1 Characteristics of the sample There were 32 360 (45.6% female) patients who were prescribed a dCCB in primary care. The mean age was 61.3 years (standard deviation [SD] 7.7). The number of prescriptions in a year varied from 1 to 25, with a mean of 9.2 (SD 4.6) and a median of 7.9 (interquartile range 6.3 to 13). The mean prescription period was 5.9 (SD 5.2) years and the median was 4.4 years (interquartile range 1.6 to 9.1) (see Table for details). The allele frequencies for the 23 studied genetic variants ranged from 3 to 50% (for details, see Table ). We found no pairs of variants in high linkage disequilibrium (all R2 < .8; see Table ).

4.2 Associations with prior evidence We investigated 23 genetic variants with reported pharmacogenetic effects on dCCB effectiveness or adverse events. We found supporting evidence in the UKB for 5 of the 23 reported dCCB pharmacogenetic associations (Table ). Details of the 5 genes are reported below, including secondary analyses of other adverse outcomes.

4.2.1 RYR3 The RYR3 rs877087 T allele prevalence in people on dCCB treatment in UKB was 46%, and the prevalence of TT homozygotes was 21.3%. Of the 32 360 patients prescribed dCCBs, 2292 developed HF during the follow-up period. Diagnoses were more common in RYR3 rs877087 TT homozygotes (n = 404, 6.1% of 6607) and CT heterozygotes (n = 943, 6.1% of 15 377) compared to common CC homozygotes (n = 491, 5.4% of 9090; Figure and Table ; see Table for details). The increased risk of hospital-diagnosed HF was significant in Cox proportional hazards regression models adjusted for age at first dCCB prescription, sex and genetic ancestry (hazard ratio [HR] TT vs. CC 1.15, 95% confidence interval [CI] 1.01 to 1.31, P = .04; HR CT vs. CC 1.12, 95% CI 1.01 to 1.25, P = .04); length of treatment is explicitly modelled in the time-to-event analysis methods. We also performed an analysis of rs877087 assuming a dominant model of inheritance, given the similarity in estimates between the CT and TT groups: T-allele carriers had a 13% increased risk of HF compared to CC homozygotes (HR 1.13, 95% CI 1.02 to 1.25, P = .02). These results were not significant after Benjamini–Hochberg adjustment for multiple statistical testing (adjusted P > .05). Heterozygotes (11.5%) were also more likely to have incident MI, angina or HF compared to common CC homozygotes (10.3%; HR 1.12, 95% CI 1.03 to 1.21, P = .007; Table ). Heterozygotes were more likely to have incident stroke compared to CC homozygotes (HR 1.22, 95% CI 1.04 to 1.45, P = .02; Table ). We used the TWIST framework to estimate that the overall incidence of HF in patients prescribed dCCBs could be reduced by 9.2% (95% CI 3.1 to 15.4) if rs877087 T allele carriers received the same treatment benefit as noncarriers, that is, were switched to an alternative antihypertensive medication unaffected by rs877087 genotype. To give further details on the TWIST results: because the association with HF was similar between heterozygotes and minor allele homozygotes, we estimated the GMTE in carriers (any rs877087 T allele) compared to CC homozygotes. rs877087 was not associated with HF in individuals never prescribed dCCBs (GMTE0 estimate P > .05; Table ). From TWIST we found the robust GMTE and the Mendelian randomization estimates could be combined to give a more efficient and precise estimate.
The risk of HF was 0.069% greater per year after treatment initiation in carriers compared to noncarriers (P = .003; Table ). When multiplied by the number of genotype-carrier patient-years in the model (244 818) and divided by the total number of diagnoses in treated individuals (1838), we estimate that if carriers of the rs877087 T allele could experience the same treatment effect as noncarriers, 170 HF diagnoses could have been avoided (95% CI 58 to 282), hence the 9.2% quoted earlier. In the subgroup of patients with pre-existing heart disease (MI, angina or HF) at the start of dCCB prescribing, RYR3 TT homozygotes had an increased risk of developing incident heart disease compared to common CC homozygotes (75.4 vs. 67.8), with an HR of 1.25 (95% CI 1.09 to 1.44, P = .002; Table ; see Table ). Overall, 2940 (7.5%) patients on dCCBs had incident CKD. RYR3 rs877087 TT homozygotes (461 CKD cases in 6164 TT homozygotes) were more likely to have hospital-diagnosed CKD compared to the common homozygote group (HR 1.18, 95% CI 1.04 to 1.34, P = .01; see Table for details). We estimate that if carriers of the rs877087 T allele could experience the same treatment effect as noncarriers, 199 CKD diagnoses could have been avoided (95% CI 75 to 324; see Table ). Therefore, the overall incidence of CKD in patients prescribed dCCBs could be reduced by 8.6% if rs877087 T allele carriers received the same treatment benefit as noncarriers (95% CI 3.2 to 14.0).

4.2.2 CYP3A5 Patients with the CYP3A5 rs776746 TT (CYP3A5*3) genotype (0.47% of patients), a genotype previously linked to kidney-related outcomes, had an increased risk of CKD (HR 2.12, 95% CI 1.34 to 3.38, P = .002) compared to CC homozygotes (Figure and Table ). The association was still significant after Benjamini–Hochberg adjustment for multiple statistical testing (adjusted P = .03). When we repeated the analysis for patients who were on dCCBs but had no CKD history, 12.3% of CYP3A5 rs776746 TT homozygotes without prevalent CKD were diagnosed with incident CKD compared to 6.6% of heterozygotes and 6.8% of CC homozygotes (HR 2.09, 95% CI 1.29 to 3.37, P = .003; see Table for details). We estimated that if rs776746 TT homozygotes could experience the same treatment effect as CC homozygotes, 11 CKD diagnoses could have been avoided (95% CI 4 to 18; see Table ). Therefore, the overall incidence of CKD in patients prescribed dCCBs could be reduced by 0.5% (95% CI 0.2 to 0.9) if rs776746 TT homozygotes received the same treatment benefit as CC homozygotes. Of the patients on dCCB prescription, 5565 (14.2%) changed treatment from dCCBs to other antihypertensives. CYP3A5 rs776746 TT homozygotes (n = 27/152) were also more likely to change treatments compared to common homozygotes (HR 1.59, 95% CI 1.09 to 2.32, P = .02; see Figure and Table ; see Table for details). Incident MI/angina was less likely to occur in patients heterozygous for CYP3A5 rs776746 compared to CC homozygotes (P = .01).

4.2.3 NUMA1 Of the 3006 patients who switched treatment, 800 were NUMA1 rs10898815 AA homozygotes (n = 8272) and 1506 were GA heterozygotes (n = 16 632). AA homozygotes and GA heterozygotes were more likely to switch treatments compared to common homozygotes (HR 1.18, 95% CI 1.07 to 1.31, P = .001 and HR 1.10, 95% CI 1.01 to 1.21, P = .03, respectively; Figure and Table ; see Table for details). The association for AA was still significant after Benjamini–Hochberg adjustment for multiple statistical testing (adjusted P = .04).
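The avoidable-diagnosis arithmetic behind the RYR3 heart failure estimate reported above can be checked directly from the quoted figures; the sketch below simply repeats that calculation in R (an arithmetic check, not a re-analysis).

```r
# Reproducing the RYR3 rs877087 heart failure calculation from the reported figures.
risk_diff_per_year    <- 0.00069  # 0.069% excess HF risk per treated year in T-allele carriers
carrier_patient_years <- 244818   # genotype-carrier patient-years in the model
hf_in_treated         <- 1838     # HF diagnoses among treated individuals used in the estimate

avoidable_hf      <- risk_diff_per_year * carrier_patient_years  # ~169, i.e., roughly the 170 reported
percent_reduction <- 100 * avoidable_hf / hf_in_treated          # ~9.2% reduction in HF incidence

round(avoidable_hf)
round(percent_reduction, 1)
```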
4.2.4 ADRA1A Adrenoceptor α1A (ADRA1A) rs1048101 AA homozygotes had an increased risk of CKD (HR 1.18, 95% CI 1.04 to 1.34, P = .01) compared to GG homozygotes. GG homozygotes (395 cases of 5834 patients) and AG heterozygotes (947 cases of 14 277 patients) were associated with decreased risk of CKD compared to AA homozygotes (HR 0.88, 95% CI 0.77 to 0.99, P = .04, false discovery rate [FDR] P = .25 and HR 0.85, 95% CI 0.77 to 0.94, P = .001, FDR P = .03, respectively; Figure and Table ). We estimated that if rs1048101 AA homozygotes could experience the same treatment effect as GG homozygotes (e.g., were prescribed an alternative antihypertensive medication unaffected by this genotype), 86 CKD diagnoses could have been avoided (95% CI 13 to 138; see Table ). Therefore, the overall incidence of CKD in patients prescribed dCCBs could be reduced by 7% (95% CI 1.1 to 12.9) if rs1048101 AA homozygotes received the same treatment benefit as GG homozygotes.

4.2.5 APCDD1 Of those patients on dCCB prescriptions, 7430 (18.9%) had incident MI/angina after dCCB treatment. Of those 7430 patients, 1004 were APCDD1 rs564991 CC homozygotes; 19.1% of CC homozygotes had incident MI/angina, compared to 18.3% of heterozygotes and 17.2% of AA homozygotes, and CC homozygotes had an increased risk (HR 1.12, 95% CI 1.04 to 1.21, P = .004, FDR P = .2 for CC and HR 1.07, 95% CI 1.00 to 1.13, P = .04, FDR P = .28 for AC; Figure and Table ; see Table for details). TWIST analysis showed that rs564991 was not associated with CHD in individuals never prescribed dCCBs (GMTE0 estimate P > .05; Table ). The risk of CHD was 0.19% (P = .002) greater per year after treatment in CC homozygotes compared to AA homozygotes (Table ). We estimated that if rs564991 CC homozygotes could experience the same treatment effect as AA homozygotes, 98 CHD diagnoses could have been avoided (95% CI 35 to 162). Therefore, the overall incidence of CHD in patients prescribed dCCBs could be reduced by 3.5% (95% CI 1.3 to 5.8) if rs564991 CC homozygotes received the same treatment benefit as AA homozygotes. Of 4910 APCDD1 rs564991 CC homozygotes, 306 patients had CKD after dCCB treatment. They were less likely to have CKD compared to common homozygotes (6.2% vs. 7%; HR 0.87, 95% CI 0.76 to 1.00, P = .04). However, this was not significant after adjusting for multiple statistical testing. TWIST analysis estimated that if rs564991 CC homozygotes experienced the same treatment effect as AA homozygotes (i.e., were switched to an alternative antihypertensive), this may increase the overall CKD incidence in patients prescribed dCCBs by 63 diagnoses (95% CI 23 to 104, P = .002). GP-diagnosed post-dCCB oedema (n = 5913; 15.04%) was not associated with any of the variants. See Table for details. See for summary of associations for variants in other genes.
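The dominant-model analysis of RYR3 rs877087 reported earlier in this section (T-allele carriers vs. CC homozygotes) amounts to recoding the genotype before fitting the Cox model; a minimal sketch, again using the hypothetical data frame d introduced in the methods sketch:

```r
# Dominant model: any rs877087 T-allele carrier (CT or TT) vs. CC homozygotes.
# 'genotype' holds the number of T alleles (0, 1 or 2) in the hypothetical data frame 'd'.
library(survival)

d$t_carrier <- as.integer(d$genotype >= 1)

cox_dominant <- coxph(
  Surv(follow_up_years, hf_event) ~ t_carrier + age_first_rx + sex +
    PC1 + PC2 + PC3 + PC4 + PC5 + PC6 + PC7 + PC8 + PC9 + PC10,
  data = d
)
summary(cox_dominant)  # a single HR for carriers vs. CC homozygotes
```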
SENSITIVITY ANALYSES In total, 23 971 (61%) patients were also on another antihypertensive medication at some point during the dCCB prescribing period. In the sensitivity analysis adjusted for receiving another antihypertensive during the dCCB prescribing period, significant associations with outcomes from the main analysis remained consistent. See Table for details. The sensitivity analyses of patients on amlodipine (n = 31 357) and on other dCCBs (n = 6854) separately were consistent with the primary analysis and are presented in Table . After excluding 1 of each pair of related patients, 27 042 patients remained. Many associations remained significant, such as APCDD1 and CHD, RYR3 and CKD, and NUMA1 and treatment switching; whilst some associations were no longer significant, the effect sizes were consistent with those in the whole cohort (see Table ).

DISCUSSION CCBs, especially dihydropyridines (dCCBs) such as amlodipine, are commonly prescribed to reduce blood pressure. Many pharmacogenetic variants are reported to impact dCCB responses, with evidence from laboratory studies, randomized trials, or acute hospital settings. However, data on clinical impact in routine care in the community are limited. We estimated the association between 23 pharmacogenetic variants reported to affect dCCB response or adverse events and clinical outcomes in 32 360 patients prescribed dCCBs, using the UKB-linked primary care data. Outcomes were assessed over a mean follow-up of >10 years after first dCCB prescription. The most striking results were for ryanodine receptor 3 (RYR3) rs877087, with T allele carriers having a 13% increased risk of HF (P = .02), after accounting for any effect in untreated individuals. Although results were not significant after Benjamini–Hochberg adjustment for multiple statistical testing (adjusted P > .05), the burden of prior evidence increases the plausibility of the associations. In patients with a history of heart disease when first prescribed dCCBs (n = 2296), RYR3 TT homozygotes had a 25% increased risk of heart disease diagnosis (MI, angina or HF) compared to CC homozygotes (P = .002). In addition, 2 genetic variants increased the likelihood of patients switching to an alternative antihypertensive medication (in NUMA1 and CYP3A5). The variant in CYP3A5 also increased risk of CKD, and we hypothesize that it might be the reason for the switch in treatment, whilst the variant in APCDD1 increased risk of CHD. These adverse reactions are potentially preventable if patients are prescribed medications accounting for genotype. However, for the majority of reported pharmacogenetic variants included, we found little or inconsistent evidence of associations with adverse events, and some appeared genetically contradictory (with heterozygote and homozygote effects in opposite directions). Many associations had modest P values that would not survive strict multiple testing corrections. RYR3 mediates Ca2+ release from ryanodine-sensitive stores, triggering contraction in cardiac and skeletal muscle.
A common variant in RYR3 (rs877087) increased risk of HF in a study of 2516 people randomized to amlodipine or to other antihypertensives. We support and extend this literature in a substantially larger sample using longitudinal analysis methods: we report increased HF risk in both TT homozygotes (404 HF cases in 6607 TT homozygotes; HR 1.15) and CT heterozygotes (943 HF cases in 15 377 heterozygotes; HR 1.12), compared to CC homozygotes (n = 9090). We used TWIST, a novel pharmacogenetic causal inference framework, to estimate the population average GMTE on HF if all RYR3 T allele carriers could experience the same treatment effect as common CC homozygotes (e.g., if they were prescribed an alternative medication): we estimate that HF risk would reduce by 9.2%, corresponding to 170 avoidable HF diagnoses in the studied patients. Further work is needed to determine the optimum strategy to reduce the risk in T allele carriers, for example, by prescribing an alternative treatment or with increased monitoring of patients. Furthermore, rs877087 has been associated with stroke in a GWAS, but the genotype effect in patients on treatment is unknown. Our findings suggest that rs877087 CT heterozygotes had an increased risk of hospital-diagnosed stroke (n = 203/15 381, HR = 1.11, P = .02) compared to common CC homozygotes, but we found no significant effect in TT homozygotes. Observational studies of drug effects often suffer from indication and other biases: as doctors aim to prescribe each medication based on the patients' clinical state, statistically separating the effects of the medication from the effects of underlying disease is challenging, especially as data to correct for potential confounders are seldom complete or entirely accurate. However, genotypes are inherited at conception and stay fixed, meaning that they predate receipt of the studied medications. In our study, we found that genotype was not associated with treatment initiation. Associations between genotypes and outcomes provide less confounded evidence than conventional observational associations, particularly because the participants and GPs were not given genotype information by the UKB study. Because genetic variants are largely independent of traditional confounding, and GPs and patients are unaware of these genotypes when making prescribing decisions and diagnosing outcomes, we can therefore assume that the difference between genotype carriers is due to the modifying effect of the genotype on medication (hence the name GMTE: genetically moderated treatment effect). This assumption is common to such Mendelian randomization studies. Therefore, our finding that variants in 2 genes (NUMA1 and CYP3A5) are associated with switching treatment may result directly from dCCB pharmacokinetic and/or pharmacodynamic effects. Further work (both replication and experimental validation) is required to confirm the precise biological mechanisms involved. NUMA1 rs10898815 was previously identified in a GWAS of blood pressure, but we found no previous reports on switching antihypertensive treatment. We found that AA homozygotes had an increased likelihood of switching treatment, which was significant after multiple testing adjustment. CYP3A5 is a cytochrome p450 enzyme that metabolizes dCCBs; CYP3A5*3 is the most common nonfunctional allele (rs776746-T, with a prevalence of 6.6% in the UKB European cohort), which results in increased clearance of dCCBs and hence less successful treatment.
Our findings support this: CYP3A5*3 homozygotes had an increased risk of CKD, for which high blood pressure is a risk factor, and were 59% more likely to change treatments compared to common homozygotes. In a pathway-focused GWAS, genes in the ADRA1 pathway ultimately affect intracellular calcium release (which dCCBs block) and blood pressure. The ADRA1A isoform was associated with hypertension in patients. In a study in mice, it also mediated renal vasoconstriction in hypertension. Furthermore, ADRA1A affects renal function via regulating Na+ reabsorption, renin secretion, renal blood flow and glomerular filtration rate, alterations of which can cause kidney disease. Although we were not able to analyse GP-recorded blood pressure measures robustly due to high rates of missing data, a SNP in ADRA1A (rs1048101) was associated with CKD. Patients homozygous for the common A allele (AA; 30.4% of participants) had an increased risk of CKD, in contrast to GG homozygotes (20.3%), who had a decreased risk. In TWIST analysis, we estimated that 86 CKD diagnoses (7% of the total) could be avoided if AA homozygotes could receive an alternative antihypertensive unaffected by the genotype. In a previous GWAS, the rs564991 C allele in APCDD1 was associated with response to CCBs. In our study, we found that CC homozygotes had an increased risk of MI/angina with an HR of 1.12 (P = .004); however, this was not significant after accounting for multiple testing. It is important to consider concomitant medications to fully interpret our results. To evaluate whether coprescribing with other antihypertensive medications during patients' dCCB prescribing period affected the results, we conducted a sensitivity analysis adjusting for whether the patient received another antihypertensive. The significant associations we report in the primary analysis were consistent and remained significant. We opted to adjust rather than exclude, as excluding could introduce bias, since extra medications might be prescribed for the adverse events (i.e., oedema) or worsening conditions (i.e., HF) we aimed to study, and would also reduce power. To determine whether results are biased by including multiple dCCBs in the analysis, which may have divergent mechanisms despite being in the same drug class, we performed sensitivity analyses. Subsetting the analysis into just amlodipine prescriptions and other dCCBs demonstrated consistent effect sizes between the variants and outcomes, albeit with attenuated significance (due to reduced sample size), suggesting that the significant associations we observed were not driven by a single dCCB drug. In our analyses, 15.04% of patients on dCCBs had GP-recorded oedema, lower than the reported prevalence of approximately 22% in the literature. This could be due to limitations in the data available; UKB-linked primary care diagnoses include diagnostic codes only, with no free text. It has been discussed previously that estimates of the prevalence of oedema depend on the study methods; in randomized controlled trials, self-reported oedema might be overestimated by the patients, or milder forms of oedema might be reported compared to those that enter the GP record. In routine clinical care, patients' blood pressure is regularly monitored after dCCB treatment initiation to determine whether the targets were achieved.
However, we were unable to analyse this using UKB-linked primary care data due to the sparsity of blood pressure data available (only a few patients had blood pressure recorded at the time of initiation, or within 2 months). This wide variation between patients in the time from prescription to next follow-up meant that we focused instead on adverse drug reactions, not measured blood pressure. The PharmGKB database (19 of the 23 studied variants were reported in PharmGKB) includes smaller and candidate studies, with only low or moderate levels of evidence for most of the relevant variants (as categorized by PharmGKB curators, often reflecting small sample sizes and lack of replication). Previously, the most frequently studied genes were CYP3A4 and CYP3A5, in studies with smaller sample sizes or non-European ancestry patients. The biggest study that we are aware of was a randomized controlled study in which 8174 patients were randomized to amlodipine. Additionally, a previous review of the pharmacogenomics of hypertension medications reported 4 variants associated with dCCB response in a small Japanese sample. Of the 23 dCCB variants, we found evidence for an effect on outcomes/adverse events for only 10 variants (even fewer after adjustment for multiple statistical testing), suggesting that these few specific variants should be a priority for future study. The possible reasons for the lack of consistency include interethnic differences in studied populations, heterogeneity in the exact phenotypes studied, lack of adherence to medication or variability in medication history between patients, but may also include publication biases, in which false positive statistical associations (type 1 errors) tend to be overrepresented, especially from small studies. However, we here add substantially to the evidence base for these variants, due in part to the large sample size studied, but also to the strengths of analysing real-world primary care prescribing and the novel pharmacogenetic analysis approach triangulating evidence from multiple analysis methods (TWIST). Using these data for pharmacogenetic analysis means that we are able to look at more adverse reactions over longer periods and therefore have increased confidence in the relevance to routine clinical care of hypertension of variants where significant effects on outcomes are identified. In conclusion, our analysis of longer-term prescribing in real-world primary care data supports the hypothesis that use of genetic information in antihypertensive prescribing might optimize treatment selection for specific patients to maximize efficacy and reduce the incidence of adverse events. The variants identified as associated with adverse clinical outcomes are good candidates for studies to test whether dCCB treatment outcomes can be improved with pharmacogenetically guided prescribing. All authors declare no support from any organization for the submitted work, no financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years and no other relationships or activities that could appear to have influenced the submitted work. D.T. generated data, performed analyses, interpreted results, created the figures, searched the literature and cowrote the manuscript. J.A.H.M. provided expert clinical interpretation of the data and contributed to the manuscript. C.-L.K., J.D. and J.B. contributed to data analysis and interpretation and contributed to the manuscript. D.M.
oversaw interpretation and literature searching and cowrote the manuscript. L.C.P. generated data, performed analyses, interpreted results, created the figures, searched the literature and cowrote the manuscript. We affirm that this manuscript is an honest, accurate and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. The full methods are available in the . Data S1. Supporting information Data S2. Supporting information
Use of surgical glue versus suture to repair perineal tears: a randomised controlled trial
20bef0ef-13c9-4c14-bc5d-2bfec0624b82
10091848
Suturing[mh]
Perineal trauma in vaginal birth can negatively influence women's physical, physiological, psychological and social well-being, with short- and long-term consequences. Nearly 70.3% of women present some perineal trauma at delivery: 18.2% present first-degree tears and 40.6% second-degree tears. Nulliparous women are approximately 2.5 times more likely to suffer some perineal trauma at delivery than multiparous women. The literature indicates that perineal pain related to perineal trauma is present in many primiparous women during the first year after birth, reported in one out of ten mothers. The incidence of complications in the healing process resulting from perineal trauma varies between 0.1% and 23.6% due to infection and from 0.2% to 24.6% due to dehiscence. Currently, the fast-absorbing polyglycolic suture thread (Vicryl® rapid) with the continuous technique is the primary choice for perineal repair, as it presents better results in pain and perineal healing. However, adhesive glue shows excellent potential for changing the perineal repair technique, as it presents results similar to or better than those of the Vicryl® rapid suture thread. One of the first studies that compared the use of fast-absorbing polyglycolic suture with octyl-2-cyanoacrylate surgical glue in the perineal repair of first-degree tears was conducted with 102 women (divided into two groups: 28 sutured women and 74 with glue repair), monitored for six weeks. It concluded that the use of glue presented cosmetic and functional results similar to those of suturing with thread and also several advantages, such as reduction in perineal repair time and perineal pain intensity, exemption from the need for local anaesthesia, and more satisfaction among women. A literature search showed the use of surgical glue in the perineal repair of first-degree tears and of the perineal skin in second-degree tears. Still, there remained a lack of knowledge in obstetrics regarding the effectiveness of perineal repair of all tissue layers in second-degree tears and episiotomy. In addition, it is essential to compare several types of surgical glue with other existing methods for perineal repair concerning perineal pain intensity, the long-term perineal healing process, the procedure duration, and postpartum infection rates. The study aimed to evaluate the effectiveness of surgical glue compared with standard suture thread in repairing first- and second-degree perineal tears and episiotomy in vaginal births, concerning perineal pain and the healing process.

Design A parallel randomised controlled open trial.

Setting The study was conducted at the birth centre of a municipal emergency and maternity hospital in the metropolitan region of São Paulo (Brazil), which assists women with low-risk full-term pregnancies.

Participants and sample size The population consisted of women with first- or second-degree spontaneous perineal tears or episiotomy. After delivery, this population was allocated into two experimental groups (EG) and two control groups (CG). The EG consisted of EG1: women who underwent repair of first-degree tears with glue, and EG2: women who underwent repair of second-degree tears or episiotomy with glue. The CG were as follows: CG1: women who underwent repair of first-degree tears with polyglactin 910 thread; and CG2: women who underwent repair of second-degree tears or episiotomy with polyglactin 910 thread. The Bioestat® 5.3 software was used to estimate the sample size.
The sample size was calculated to detect a significant minimum difference of 2 points in the pain score between the two perineal repair methods. A priori, a residual standard deviation of 3 points, a 5% alpha error and 80% test power were considered. This resulted in a minimum sample of 35 parturient women in each group. Thus, the sample consisted of 140 women: 70 allocated to the EGs (EG1: n = 35; EG2: n = 35) and another 70 to the CGs (CG1: n = 35; CG2: n = 35). Inclusion criteria The eligibility criteria were as follows: no previous vaginal birth; having up to 6 cm of cervical dilation at the time the woman was invited to participate in the research; not using steroid substances; not presenting leukorrhea or any signs of infection at the repair site; no difficulty understanding the Portuguese language or in communication; and agreeing to undergo perineal repair with surgical glue or suture thread. The women included in the study underwent vaginal birth with first- and second-degree spontaneous perineal tears or episiotomy. Randomisation The sequence for inclusion of the parturients in each group was randomised through an electronically produced table of random numbers using the Statistical Package for the Social Sciences (SPSS) statistical program. Opaque envelopes were employed, which contained the allocation to the glue or thread repair groups and were only opened at the moment of perineal repair. One of the researchers was in charge of opening the envelopes. Interventions and materials The interventions used surgical glue or suture thread to repair first- and second-degree perineal tears or episiotomy. N-butyl-2-cyanoacrylate (Glubran-2®) is a synthetic surgical glue for use on internal and external tissue, registered at the National Health Surveillance Agency ( Agência Nacional de Vigilância Sanitária , ANVISA) under No. 80159010003. In contact with living tissue or a humid environment, the glue polymerises quickly, creating both an antiseptic barrier and a thin elastic film with high tensile strength, which ensures solid tissue adhesion that is not damaged by blood or organic fluids. Proper glue application leads to solidification that starts in 1–2 s, finishing its reaction after nearly 60–90 s. In typical surgical procedures, the glue film is removed via hydrolytic degradation. The polyglactin 910 thread consists of polyglycolic acid, a synthetic and absorbable material that is fully absorbed in approximately 35 days via hydrolysis. The thread used for this study was a Vicryl rapid® 2.0 fast-absorption thread with a continuous suture technique for perineal repair. The procedure described by Caroci-Becker et al. (2021) was used to apply the Glubran-2® glue. It is worth noting that the woman was subjected to a new repair process with the same material in case of failure in perineal repair with surgical glue. The new repair procedure was performed with suture thread only when repair with surgical glue was impossible, due to bleeding, for instance. Outcomes Pain occurrence and intensity were the primary outcomes evaluated, whereas the secondary outcome was perineal healing.
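As a rough cross-check of the sample-size calculation described above (a 2-point minimum detectable difference, a residual standard deviation of 3 points, a 5% alpha error and 80% power), the same inputs can be run through a standard two-sample power routine. The sketch below uses Python's statsmodels as a stand-in for the Bioestat® 5.3 software actually used, so it is only an illustrative approximation; small differences from the reported 35 women per group are expected.

```python
# Hedged sketch: reproducing the sample-size calculation described above with statsmodels
# (the study used Bioestat 5.3; this is an assumed equivalent, not the authors' calculation).
from statsmodels.stats.power import TTestIndPower

effect_size = 2 / 3  # minimum difference of 2 points divided by residual SD of 3 points (Cohen's d)
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(round(n_per_group))  # about 36 per group, close to the minimum of 35 reported in the study
```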
The perineal repair time was also evaluated. Training of the team and pilot study In order to improve the technique of applying the Glubran-2® glue, a training session was conducted with the researchers before data collection, in which the surgical glue was applied to beef tongue and other pieces of beef. After training the researchers, a case-series study was conducted to implement the necessary adjustments to develop the current study. Data collection and measurements The data were collected from March 2017 to September 2018 in six stages: stage 1: during labour and up to 2 h after the perineal repair procedure; stage 2: from 12 to 24 h postpartum; stage 3: from 36 to 48 h; stage 4: from 10 to 20 days; stage 5: from 50 to 70 days; and stage 6: from 6 to 8 months. A form for the interview and data recording was developed specifically for this research and contained the following baseline characteristics: maternal age, ethnicity, schooling level, occupation, marital status, nutritional status, parity, gestational age, body mass index (BMI), newborn weight, and the outcome variables. A pre-test was conducted to evaluate the form and the procedures that would be performed during data collection. As a first step, the researchers presented the study to professionals working in the service so that they could accept, collaborate and integrate themselves into the research. During recruitment, the researchers visited the study site daily to locate the women who met the study's eligibility and inclusion criteria. The eligible women were invited to participate in the study when hospitalised. To avoid bias in the data, the classification of the perineal trauma and the evaluation of the need for the repair procedure were carried out by the nurse-midwives of the birth centre, who were not part of the research team. The nurse-midwives of the research team, in turn, were in charge of the perineal repair procedure. In both groups, a digital stopwatch was used to measure the perineal repair time. The professionals were asked to prescribe analgesics or anti-inflammatory medications only if the puerperal women complained about pain, so that perineal pain intensity could be better assessed. The participating women were instructed to request pain medications anytime they needed them. A medical evaluation was requested in case of complications related to the perineal repair procedure in women from any research group. To assess perineal pain intensity, the women were handed the Visual Numeric Scale (VNS) to visualise and indicate the number corresponding to their pain intensity. The VNS consists of a horizontal line with values expressed in centimetres from 0 to 10, where zero is the total absence of pain and ten represents the worst pain possible. This evaluation was performed 2 h postpartum, to avoid anaesthesia bias in the suture group and ensure a proper pain assessment between the groups, and at all the other study stages. The perineum healing process was evaluated using the REEDA scale in stages 1 and 4 of the study. The scale is indicated to evaluate the tissue recovery process after perineal trauma through five healing items: redness, oedema, ecchymosis, discharge, and approximation (coaptation of the wound edges). Each item evaluated was assigned a score from 0 to 3, where the maximum score (15) corresponds to the worst possible perineum healing result.
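To make the REEDA scoring just described concrete, the following is a minimal, hypothetical helper rather than one of the study's instruments: each of the five items is rated from 0 to 3 and the ratings are summed, so 0 is the best and 15 the worst possible healing result.

```python
# Minimal sketch of REEDA scoring as described above (hypothetical helper, not a study instrument).
REEDA_ITEMS = ("redness", "oedema", "ecchymosis", "discharge", "approximation")

def reeda_score(ratings: dict) -> int:
    """Sum the five REEDA item ratings (each 0-3); 0 = best, 15 = worst possible healing."""
    total = 0
    for item in REEDA_ITEMS:
        value = ratings[item]
        if not 0 <= value <= 3:
            raise ValueError(f"{item} must be rated 0-3, got {value}")
        total += value
    return total

# Example: mild redness and oedema only, edges well approximated
print(reeda_score({"redness": 1, "oedema": 1, "ecchymosis": 0, "discharge": 0, "approximation": 0}))  # -> 2
```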
A Peri-Rule® ruler was used to measure hyperemia, oedema, ecchymosis and coaptation of the edges. This ruler was wrapped in polyvinyl chloride (PVC) film and reused after cleaning with soap and water, followed by antisepsis with 70% alcohol. In addition to the items on the REEDA scale, the researchers evaluated any other tissue damage or morbidity related to perineal repairs, such as hematoma, itching, wound infection, or allergic reaction. Given the nature of the interventions and outcomes, there was no possibility of blinding, as both the women and the researchers were aware of the type of perineal repair performed and because, in the evaluation of the healing process, it is possible to see whether glue or suture thread was used. Statistical analysis The data were double-typed into Epi-Info 6, and the database was validated and imported into Excel. The mean and standard deviation (SD) were calculated for the descriptive analysis of the continuous quantitative variables. The Student's t-test was used to determine whether there was a statistical difference between the means of the two groups and analysis of variance (ANOVA) with the coefficient of determination for the multiple comparisons of means. Absolute and relative frequencies were calculated for the categorical variables. The test used in the inferential analysis was Pearson's chi-square, and the approximate chi-square test in the Monte Carlo simulation was used in cross-tabulation. In the longitudinal analysis, the generalised linear model (GLM) was employed, with Wald's chi-square test and analysis of the interactions of the effects (group and time or group and tear degree) based on linearly independent pair comparisons between estimated marginal means. The significance level adopted was p ≤ 0.05. The analyses were performed in the following statistical packages: SAS System for Windows V8, SPSS for Windows (version 12.0) and Minitab Statistical Software – Release 13.1. Ethics The project was approved by the Research Ethics Committee of the Arts, Sciences and Humanities School of the University of São Paulo—CAAE 44,832,615.1.0000.5390 and guaranteed the participants' rights. The study was registered in the Brazilian Registry of Clinical Trials, with registration data on 01/25/2018; last approval date on 01/25/2018; UTN code U1111-1184–2507; ( www.ensaiosclinicos.gov.br/rg/RBR-2q5wy8 ). It is worth noting that the researchers are not linked to the manufacturers or distributors of the materials used in this study.
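The longitudinal analysis described in the statistical-analysis passage above (a generalised linear model with Wald chi-square tests of the group-by-time and group-by-tear-degree interactions) can be illustrated with the hedged Python sketch below. The study itself used SAS, SPSS and Minitab, and the column names (pain, group, stage) and toy data here are assumptions for illustration only, not the authors' dataset or code.

```python
# Hedged sketch of the longitudinal GLM with interaction effects described above.
# Toy data and column names are assumed; the study used SAS/SPSS/Minitab, not statsmodels.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per woman per study stage: pain (VNS 0-10), group (glue/suture), stage (1-3 here)
df = pd.DataFrame({
    "pain":  [3, 2, 1, 5, 4, 2, 4, 3, 1, 6, 5, 3],
    "group": ["glue"] * 6 + ["suture"] * 6,
    "stage": [1, 2, 3] * 4,
})

model = smf.glm("pain ~ group * C(stage)", data=df, family=sm.families.Gaussian())
result = model.fit()
# Wald chi-square tests per term; the group:stage row tests the group-by-time interaction.
# A group-by-tear-degree interaction could be added analogously with a tear_degree column.
print(result.wald_test_terms())
```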
A total of 254 women met the eligibility criteria.
Among these, 114 were excluded for the following reasons: not meeting the inclusion criteria (n = 76; caesarean section indicated during labour = 55; intact perineum = 21); refusing to participate (n = 7); other reasons (n = 31; first-degree tear when the number of EG participants was already complete = 12; included in the pilot study = 19). Consequently, 140 women were included and randomised into the EG1 (n = 35), EG2 (n = 35), CG1 (n = 35), and CG2 (n = 35) groups, according to the type of trauma and the repair procedure performed (Fig. ). Among the 140 women who participated in the first three stages, 110 (78.6%) returned between 10 and 20 days postpartum (stage 4), 122 (87.1%) did so between 50 and 70 days (stage 5) and 54 (38.6%) between 6 and 8 months after delivery (stage 6). Thirty women (21.4%) were follow-up losses between stages 3 and 4, 18 (12.9%) between stages 3 and 5, and 86 (61.4%) between stages 3 and 6 (Fig. ). The follow-up losses among the women were due to the following reasons: reported feeling good, waiving re-evaluation (n = 39) (stage 4: n = 10, 33.4%; stage 5: n = 5, 3.6%; stage 6: n = 24, 17.1%); did not answer or return the calls (n = 35) (stage 4: n = 4, 13.3%; stage 5: n = 4, 2.9%; stage 6: n = 27, 19.3%); did not attend the scheduled return visit or did not accept a home visit, without stating the reason (n = 25) (stage 4: n = 6, 20.0%; stage 5: n = 5, 3.6%; stage 6: n = 14, 10.0%); changed place of residence (n = 17) (stage 4: n = 3, 10.0%; stage 5: n = 2, 4.0%; stage 6: n = 12, 8.6%); home visit cancelled due to living in a highly hazardous location or requested at an inappropriate time (n = 14) (stage 4: n = 3, 10.0%; stage 5: n = 2, 1.4%; stage 6: n = 9, 6.4%); returned for consultation after the established period (stage 4: n = 4, 13.3%). All the women enrolled in this study were nulliparous. There was no significant difference between the EGs and CGs (EG1, EG2, CG1, and CG2) concerning the sociodemographic and clinical characteristics (Table ). Perineal pain intensity was evaluated for both types of perineal repair from stage 1 to stage 6; pain intensity was lower in the EGs (p ≤ 0.001), with a decrease in pain over time (p ≤ 0.001) (Fig. ). The healing process according to the groups is shown in Fig. . The separate analysis of the REEDA scale items in the EGs and CGs showed variation in the scores of the "edge approximation" item according to the group, the study stage and the tear degree. Approximation was better among the women with first-degree tears who had perineal repair with suture (for the group and tear degree effects: p ≤ 0.001). Over time, approximation was also better in the CG women (for the group and time effects: p ≤ 0.001). It is worth noting that the lower the REEDA score, the better the healing process. In the EG, a new repair procedure with surgical glue was necessary for six women (8.6%; EG1 = 2; EG2 = 4) between 12 and 48 h postpartum. It is worth mentioning that these women continued in the study. No new repair procedure was needed in any of the CG women. The hyperemia (p = 0.359), oedema (p = 0.059), ecchymosis (p = 0.712), and discharge (p = 0.260) items did not present any statistical difference. No other tissue damage or morbidity related to the perineal repairs, such as hematoma, itching, wound infection, or allergic reaction, was observed in the studied groups.
The perineal repair time was lower in the EG compared to the CG, with a mean of 12.1 (SD = 12.4) minutes vs 18.2 (SD = 10.1) minutes. It is worth noting that the repair time was not recorded in 22 (31.4%) women from the EG and 9 (12.9%) from the CG (Table ). The principal findings of this study were that the use of surgical glue for the perineal repair of first- and second-degree tears and episiotomy in all tissue planes (skin, mucosa, and muscle) proved to be as effective as the standard suture method, with less pain, a shorter procedure time, and a similar healing process. The strengths of the present study were the design of a clinical, controlled, and randomised trial, in which the researchers rigorously followed all the eligibility and inclusion criteria to minimise selection biases. Also, a surgical glue suitable for deep tissue layers, such as muscles, allowed it to be used in second-degree tears and episiotomy. In addition, the follow-up for a more extended period (up to 8 months) allowed the evaluation of the healing process until its complete resolution. Another strength was the development of the surgical glue application technique and training for the team that participated in the study, which will allow the future sharing of this method. There was good acceptance among the women invited to participate in the research, which can be considered a strength of the study. This finding surprised the researchers, who had believed that, because it was a new procedure, most women would refuse to participate out of fear; this was not the case. On the contrary, some women allocated to the control group requested that the glue be used. However, the importance of randomisation of the perineal repair methods was explained to them, and changing the allocated method was not allowed. The weaknesses found in the current study were the extended data collection period, due to the small number of deliveries per day at the research site, and the high number of exclusions related to the indication of caesarean section or an intact perineum. Due to the rapid polymerisation of the surgical glue, there was also difficulty in using it in the presence of heavy bleeding. In some cases, it was necessary to use more than one surgical glue ampoule (0.5 ml) to repair the tear, increasing the cost of the procedure. Another problem observed in the EG was the need for a new repair with surgical glue between 24 and 48 h after the initial procedure, which did not occur in the CG. On the other hand, it was also observed that one glue unit could be used for more than one woman, depending on the degree and extent of the perineal tear. A significant limitation was the price of the products: in this study, the surgical glue had a much higher cost (R$ 350.00) than the suture thread (R$ 22.00). However, it is worth mentioning that some materials and medications were not needed when performing the repair with surgical glue, such as anaesthetics for the procedure and a scheduled analgesic regimen for perineal pain after delivery, and that the health professional spent less time performing the repair. Although there was an option of using more economical surgical glues for skin and mucosa repair, only the glue chosen in the research is registered at ANVISA with approval to be used in the innermost layer (muscle) of perineal trauma. The favourable results with surgical glue for repairing first- and second-degree tears concerning perineal pain agree with the results from other studies.
A study in women with second-degree tears compared three skin closure methods (glue, suture, and non-suture) and showed that the lowest perineal pain intensity was with surgical glue. Assessed with a 100 mm visual analogue scale, the mean pain in the second postpartum week was 3.0 with glue, 5.0 with suture and 7.0 with no suture (p = 0.02). This difference was no longer observed three months after delivery (p = 0.31). Other studies also confirm the positive findings of using glue. A study with a sample of 135 women compared the use of Histoacryl® glue with the Monosyb® suture thread to repair first-degree tears. It showed that women repaired with surgical glue had less perineal pain intensity in all situations evaluated (at rest, when sitting, walking and urinating) than those with sutures in the first week after birth. Nevertheless, no difference in perineal pain was found at 30 days postpartum. As for the healing process, evaluated by the REEDA scale, the groups were similar regarding hyperemia, oedema, ecchymosis and discharge. The difference in edge coaptation was due to a lower score in the CG than in the EG, and it occurred mainly among women with second-degree tears up to 10 days after delivery. Other clinical trials that compared the use of glue to suture for perineal skin repair showed no significant difference in any of the items of the REEDA scale. Nonetheless, it is essential to point out that the coaptation of deeper tissue layers was not evaluated in these studies, which had a different design from ours. The need to perform a new perineal repair procedure was also observed in a study conducted with 61 women in which surgical glue was used to close the skin in episiotomy. The percentage of 3.3% (2 women) who had superficial wound dehiscence in the first 48 h after birth was lower than the 8.6% observed in the current study, likely because that study investigated repair of the cutaneous layer only. Only in the current study was the repair with surgical glue performed in all the affected tissue layers (skin, mucosa and muscles), except for the anal sphincter muscles, as women with third- or fourth-degree tears were not included. As for the perineal repair time, in the EG it was 6.1 min shorter than in the CG, corroborating the results of other studies. It is worth emphasising that these studies used surgical glue on the mucosa or perineal skin, whereas this study evaluated the repair time of all tissue planes. Reducing the duration of the perineal repair procedure is essential, as it can decrease infections, due to the lower exposure of tissues to microorganisms in the environment, and shorten the discomfort for women. The results of this study, related to less pain for women and a shorter procedure time, are promising reasons for clinicians and policymakers to change the practice of perineal repair. Nevertheless, the excessive cost of surgical glue compared to suture thread can be an important limiting factor for its use in delivery care practice, especially in health systems that face challenges due to the increased costs of materials and equipment, as well as in developing countries with few available resources. Therefore, future cost analysis research is suggested, comparing all materials, procedures involved, and time spent by the professional in performing the two types of perineal repair.
It is also suggested that further studies be conducted with the several types of glue available and with different application methods, to find the materials and techniques that offer the best cost–benefit for women. In addition, another vital factor to be analysed is women's satisfaction with both types of perineal repair. The n-butyl-2-cyanoacrylate surgical glue (Glubran-2®) proved to be effective, with similar or better results in pain intensity and in the healing process compared with continuous suture with polyglycolic thread (Vicryl rapid®) in the repair of first- and second-degree perineal tears in vaginal births. Perineal repair with surgical glue can be an alternative to standard suturing.
A monolateral pigmented lesion of the nipple
ebd33043-e18a-4e5e-a4f2-7020f17d1671
10091942
Anatomy[mh]
The authors declare that they have no conflicts of interest. Open access funding provided by BIBLIOSAN. Ethics approval not applicable. The patient provided informed consent for publication of their case details and images. Learning objective To gain up‐to‐date knowledge of the histochemical profile of pigmented mammary Paget disease and the prevalence of the association of mammary Paget disease with an underlying carcinoma. Question 1 Which of the following best describes the immunohistochemical profile characteristic of pigmented mammary Paget disease (PMPD)? (a) Negativity for HMB‐45, Melan A, anti‐cytokeratin (CAM 5.2), CK7, epidermal membrane antigen (EMA) and carcinoembryonic antigen (CEA), and positivity for S‐100. (b) Positivity for CEA, EMA, CK7 and CAM 5.2, and negativity for Melan‐A, HMB‐45 and S‐100. (c) Positivity for CEA, Melan‐A and S‐100, and negativity for CAM 5.2 and CK7. (d) Positivity for HMB‐45, Melan‐A, CAM 5.2, CK7, EMA and CEA, and negativity for S‐100. (e) Positivity for HMB‐45 and Melan‐A, and negativity for CK7 and EMA. Question 2 In which percentage of cases is mammary Paget disease (MPD) associated with an underlying malignancy? (a) About 20% of cases. (b) About 30% of cases. (c) About 50% of cases. (d) Less than 10% of cases. (e) More than 80% of cases. This learning activity is freely available online at http://www.wileyhealthlearning.com/ced Users are encouraged to Read the article in print or online, paying particular attention to the learning points and any author conflict of interest disclosures. Reflect on the article. Register or login online at http://www.wileyhealthlearning.com/ced and answer the CPD questions. Complete the required evaluation component of the activity. Once the test is passed, you will receive a certificate and the learning activity can be added to your RCP CPD diary as a self‐certified entry. This activity will be available for CPD credit for 2 years following its publication date. At that time, it will be reviewed and potentially updated and extended for an additional period.
Deaths caused by medication in persons not using illicit narcotic drugs: An autopsy study from Western Denmark
177a73d5-a83f-4788-8b17-93151d2d61d4
10092188
Forensic Medicine[mh]
INTRODUCTION Legal autopsy is mandatory in relation to all Danish deaths where illegal narcotic drugs are suspected to play a role. Accordingly, toxicological findings in fatal poisonings in Danish persons using illicit narcotic drugs (PUIDs) are monitored by the health authorities and reported annually. Furthermore, the pattern of fatal poisonings in PUIDs in the Nordic countries has been reported every 5 years since 1991, which is important for monitoring the illegal drug market and assessing the effects of preventive measures. In contrast, information about the medications involved in medication‐related deaths in Danish persons not using illicit narcotic drugs (PNUIDs) is sparse. Patients with psychiatric disease are at risk of fatal poisoning, and toxicological findings in legal autopsy cases with psychiatric disease have recently been investigated. However, approximately two thirds of the 180 fatal poisonings in this study were due to illegal narcotic drugs. Two reports have described substances involved in fatal poisonings exclusively in PNUIDs in Eastern Denmark from 2003 to 2012. In these studies, carbon monoxide and ethanol poisonings were included and accounted for the two most commonly involved individual substances. Thus, the demographics of PNUIDs dying from medication‐related poisonings or deaths due to adverse effects were not isolated in these previous reports. Furthermore, the pattern of involved substances may differ between regions, and the aetiology of medication‐related fatalities changes over time. During the 1980s, propoxyphene and barbiturates accounted for a large proportion of fatal poisonings in our region. After their harm was recognised, the vast majority of these medications became unavailable for medical use in Denmark. We aimed to update information on involved substances and demographics in a population restricted to deaths caused by medication in PNUIDs undergoing legal autopsy in our region. We find this knowledge important to direct prophylactic initiatives regarding the use and prescription of medication. METHODS 2.1 Study population and design The present study was based on data collected for a previous study of the prevalence of suspected medication‐induced QT‐prolongation in PNUIDs. This was a cross‐sectional study of all consecutive legal autopsies performed in 2017, 2018, and 2019 at the Department of Forensic Medicine, Aarhus University, Denmark, which covers four police districts and 2.2 million inhabitants. A PUID was defined, as previously, by information or evidence of current or past use of illegal narcotic drugs such as heroin, amphetamine, or cocaine. Persons with isolated use of cannabis were not considered PUIDs. PUIDs identified in the present study were cross‐checked with the lists of deaths in PUIDs that we report yearly to the Danish Health Agency. 2.2 Isolation and classification of deaths caused by medication in PNUIDs Deaths caused by disease, trauma or unknown cause according to the primary SNOMED term and narrative conclusion on the cause of death in the autopsy report, and poisonings caused by ethanol, carbon monoxide or other non‐pharmaceutical substances, were excluded. Cases of suspected medication‐induced QT‐prolongation and cardiac arrhythmia identified in the previous study were included, as adverse effects of medication were suspected to cause death in these cases. Deaths were classified as suicides if the manner of death was set as suicide by the forensic pathologist.
Cases that the forensic pathologist determined to be accidental, natural, or unexplained were classified as non‐suicides. To sub‐classify cases according to the medications causing death and to determine the number of involved medications, we used the descriptive conclusion in the autopsy report. Opioids were regarded as the primary cause if both opioids and other medications were mentioned to have caused death. Thus, if the cause of death was stated as ‘poisoning with morphine and diazepam’, death was classified as caused by opioids, and the number of involved medications was two. If the cause of death was stated as ‘poisoning with quetiapine’, and an opioid was detected in blood at non‐toxic levels, the case was classified as death caused by psychotropic medication. In cases where the descriptive cause of death was ‘poisoning with several medications’, the medications and number of medications causing death were determined based on the conclusion of the toxicological report. Psychotropic medications included antidepressants, antipsychotics, antiepileptics, benzodiazepines, benzodiazepine‐like medications (zolpidem and zopiclone), barbiturates, and anaesthetics. Weak analgesics included paracetamol and non‐steroid anti‐inflammatory drugs (NSAIDs). 2.3 Descriptive data Demographics, manner and cause of death, death scene investigation, and prescribed medication were extracted from the autopsy report, including information about health status and treatment from the general practitioner and/or hospital patient record obtained by the police. Toxicological findings were exported from the LabWare laboratory information and management system, which contains all results from toxicological analyses at our department. The total number of detected medications in each case was the count of each individual medication detected. Metabolites were only counted in the absence of the parent drug. Medications used for resuscitation or hospital treatment after the initial poisoning, and ethanol, were not counted. A limit of 0.5 g/kg was set as the cut‐off level to define antemortem presence of ethanol. 2.4 Legal autopsies and toxicological analyses Legal autopsies must be performed if a criminal act has been carried out or is suspected, if the cause of death is not established and the death has aspects of interest to the police, if the manner of death is unknown, or if an autopsy is considered necessary to prevent suspicion from arising at a later point in time. Systematic toxicological analysis is performed by initial screening of, preferably, postmortem femoral blood with ultra‐performance liquid chromatography with high‐resolution time‐of‐flight mass spectrometry, followed by quantification by ultra‐performance liquid chromatography with tandem mass spectrometry. The methods are accredited (international standard ISO 17025). The initial screening encompasses more than 700 drugs, medications, and metabolites, but not lithium, insulin, and digoxin, which are analysed upon suspicion of contribution to death. Ethanol is measured by headspace gas chromatography with flame ionization detection. Depending on case circumstances, further analyses are initiated. Concentration levels are interpreted as therapeutic, toxic, or lethal according to the literature. Cut‐off values for toxic and lethal concentrations for each medication are indicative and always interpreted in relation to the specific information, autopsy findings, and other toxicological findings in each case.
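Returning to the case sub-classification rule described earlier in this section, the snippet below is an illustrative Python sketch of how a case could be assigned to an aetiology group from the medications named as causing death. The group lists and the function are hypothetical stand-ins, not the study's actual data handling; they simply follow the stated hierarchy in which opioids take precedence over psychotropic and other medications.

```python
# Illustrative sketch of the case sub-classification rule described above (not the authors' code).
OPIOIDS = {"morphine", "tramadol", "methadone", "oxycodone", "fentanyl"}           # assumed example list
PSYCHOTROPICS = {"quetiapine", "olanzapine", "amitriptyline", "nortriptyline",      # assumed example list
                 "citalopram", "diazepam", "zolpidem", "zopiclone", "phenobarbital"}

def classify_case(causal_medications):
    """Return (aetiology group, number of involved medications) for one case.

    `causal_medications` holds the medications named as causing death in the
    descriptive conclusion of the autopsy report.
    """
    meds = {m.lower() for m in causal_medications}
    if meds & OPIOIDS:          # opioids are the primary cause whenever they are mentioned
        group = "opioid"
    elif meds & PSYCHOTROPICS:
        group = "psychotropic"
    else:
        group = "other medication"
    return group, len(meds)

# Examples from the text
print(classify_case(["morphine", "diazepam"]))  # -> ('opioid', 2)
print(classify_case(["quetiapine"]))            # -> ('psychotropic', 1)
```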
The forensic pathologist incorporates the result of the toxicological evaluation in the conclusion of the cause of death. 2.5 Ethics statement The study was registered at Aarhus University (journal no. 2016‐051‐000001). Data were handled in accordance with the General Data Protection Regulation (GDPR) and the Danish Data Protection Act. According to the Consolidation Act on Research Ethics Review of Health Research Projects, Consolidation Act number 1083 of 15 September 2017, section 14 (2), the project did not need approval from the Committees on Health Research Ethics. 2.6 Statistics Data were stored in REDCap electronic data capture tool. Continuous variables with normal distribution were expressed as mean ± standard deviation, non‐normally distributed data were expressed as median [interquartile range], and categorical variables in percentages. Differences between two groups were calculated in Stata IC 17, Stata Corp, Texas, or Graph Pad Prism 9, Dotmatics, Boston, USA, using Student's t‐test, Wilcoxon's rank sum test, or χ2 test, where appropriate. A p‐value <0.05 was considered statistically significant. Missing data were not imputed. In case of incomplete data set, the number of cases included in the analysis is given for each parameter.
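As an illustration of the two-group comparisons described in the statistics paragraph above, the sketch below shows how such tests could be run with SciPy in Python. The study itself used Stata and GraphPad Prism, and the data here are made up, so this is only an assumed equivalent of the analysis, not a reproduction of it.

```python
# Hedged sketch of the two-group comparisons described above (made-up data, not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_suicides = rng.normal(52, 12, 36)        # hypothetical ages, suicide group
age_non_suicides = rng.normal(50, 13, 60)    # hypothetical ages, non-suicide group

# Normally distributed continuous variable: Student's t-test
t_stat, p_t = stats.ttest_ind(age_suicides, age_non_suicides)

# Non-normally distributed variable: Wilcoxon rank sum (Mann-Whitney U) test
meds_female = rng.poisson(5, 55)
meds_male = rng.poisson(4, 41)
u_stat, p_u = stats.mannwhitneyu(meds_female, meds_male)

# Categorical variable: chi-square test on a 2x2 contingency table (made-up counts)
table = np.array([[20, 16],
                  [35, 25]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"t-test p={p_t:.3f}, rank-sum p={p_u:.3f}, chi-square p={p_chi2:.3f}")
```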
RESULTS 3.1 Identification of cases, cause and manner of death The total number of legal autopsies was 1028. We identified 96 deaths caused by medication including seven cases of suspected medication‐induced QT‐prolongation (Figure ). The primary SNOMED term included veneficium medicamentale, intoxication, or poisoning in 90 cases (94%). In the remaining cases, the SNOMED term most often reflected a cause related to an adverse effect of medication such as cardiac arrhythmia. Manner of death was suicide in 36 (38%), accident in 53 (55%), natural in three (3%), and unexplained in four (4%) cases, resulting in 60 (63%) non‐suicides. Deaths coded as natural occurred among the seven cases of suspected QT‐prolongation. 3.2 Aetiology Opioids were the most common main cause of death followed by psychotropic and other medications in both suicides and non‐suicides (Table ). Multiple medications caused death in 48 cases (50%) (Figure ). Opioids and psychotropic medications were involved in both suicides and non‐suicides in males and females, respectively (Figure ). Opioids and antipsychotics were involved in deaths caused by both single and multiple substances, respectively, whereas antidepressants and benzodiazepines were mainly involved in deaths caused by multiple substances (Figure ). Morphine, tramadol, and quetiapine were the most frequently involved individual medications (Figure ). Death due to weak analgesics (paracetamol or NSAIDs) occurred in four cases (4%). Morphine was the only individual substance involved in more than three cases in both suicides and non‐suicides caused by single or multiple substances, respectively (Table ). Quetiapine was involved in three, whereas no other individual medication was involved in more than two of the seven cases with suspected medication‐induced QT‐prolongation. A list of medications involved in less than six deaths is shown in Table in the online supplement. Medications, which were not apparently prescribed to the decedent, were more likely to cause death in suicides than in non‐suicide cases (Table ). 3.3 Demographics Median age was 51 [42.5–61.5] years, 57% were female, and psychiatric disease was present in 64% (Table ). Females accounted for 28 of 46 deaths caused by opioids (60%), and 21 of 34 deaths caused by psychotropic medications (62%). In deaths caused by other medications, the male:female ratio was 1. Five decedents had cancer, of which opioids were detected in three. Less than three had insulin‐treated type 1 diabetes. Nine had bipolar affective disorder, of which three received treatment with lithium. Blood‐ethanol was above 0.5 g/kg in nine cases (median concentration 1.23 [0.8–1.64] g/kg). A manifestation of disease was mentioned as a possible contributor to death in four suicides and 12 non‐suicides. These cases were significantly older than the remaining cases (71.5[60–76.5] vs.
49[41–55] years, p = 0.0001), and in five (31%), death was caused by a non‐opioid, non‐psychotropic medication compared with 11 (14%) among the 84 other cases ( p = 0.08). There were no significant differences in the examined demographic parameters between suicides and non‐suicides (Table ). Within the group of non‐suicides, females had a higher BMI than males (Tables and ). Overall, a significantly higher number of medications was detected in females compared with males (5 [4–7] vs. 4 [2–5], p = 0.009), and in cases where multiple substances caused deaths compared to cases in which death was caused by a single substance (6 [4–8] vs. 4 [3–5], p = 0.005). Weak analgesics, antipsychotics, and benzodiazepine‐like drugs were detected in a higher proportion of female than male cases [33 (60%) vs. 16 (39%), p = 0.04, 31 (56%) vs. 12 (29%), p = 0.008, and 16 (29%) vs. 5 (12%), p = 0.047, respectively]. The most frequently detected groups of medications including both substances with and without involvement in the cause of death are shown in Figure . 3.4 Death scene investigation Most decedents died or were found dead at home, and most frequently on a bed or couch (Table ). The fraction of cases with fatal poisoning with psychotropic medication found on the floor was significantly higher than that of cases with fatal opioid poisonings as shown in Figure .
DISCUSSION The main findings of the present study are that opioids and psychotropic medications caused most deaths in PNUIDs, that multiple substances caused death in 50% of cases, and that five or more medications were detected in more than half of the cases. Furthermore, a higher number of medications were detected in females than males. A significant role of opioids in both single and multiple substance related deaths is in line with a relatively frequent presentation of acute opioid poisonings requiring hospitalization, a major part of fatal poisonings caused by opioids in PUIDs in the Nordic countries, and in general in New Zealand and United States. Tramadol and morphine are the most frequently prescribed opioids in Denmark, tramadol being used approximately twice as much as morphine as measured in number of used defined daily doses (DDD) per 1000 inhabitants per day. Tramadol was marketed in Denmark in 1993 and may have substituted propoxyphene, which was withdrawn in 2010. It was the seventh and fifth most frequently involved substance in poisonings in PNUIDs in 2003–2007 and 2008–2012, respectively, in Eastern Denmark, whereas it ranked second in our material. Thus, deaths caused by tramadol may have increased over time. Tramadol was an uncontrolled opioid in Denmark until the Danish authorities decided to give tramadol status as a controlled substance in April 2022.
This was due to increasing reports of illicit tramadol found by the police and customs authorities and because of increased awareness of tramadol's abuse potential at the Danish Medicines Agency. Although morphine is prescribed more often than tramadol, the results of the present study support that tramadol should be monitored and prescribed with caution, like other opioids. Methadone was involved in less than three cases in the present study. This contrasts with previous reports, where methadone was the fourth most frequently involved substance. Currently, methadone is used for opioid maintenance treatment but is considered a highly specialized treatment for pain in palliative care. This may explain the low occurrence of methadone in our population. In deaths caused by psychotropic medications, the antipsychotic drug quetiapine and the tricyclic antidepressant (TCA) amitriptyline and/or its metabolite nortriptyline were frequently involved in the present study. For quetiapine, the frequent occurrence seems to have emerged since 2008–2012 and is in accordance with its current extensive use in Denmark. However, olanzapine, an antipsychotic that was sold in almost equal quantities to quetiapine measured in DDD/1000 inhabitants per day in 2019, occurred infrequently. Similarly, the frequent appearance of ami/nortriptyline compared with other antidepressants, including selective serotonin reuptake inhibitors (SSRIs), cannot be explained by availability, because sales of SSRIs were approximately 10‐fold higher than those of TCAs in DDD per 1000 inhabitants per day in Denmark in 2019. In line with this, TCAs are known to have a higher toxic potential than SSRIs. Our finding that quetiapine and ami/nortriptyline, as well as antidepressants in general, were most often involved in deaths caused by multiple substances concurs with a recent study from New Zealand. Polypharmacy, which was more frequent in deaths caused by multiple substances and which increases the risk of pharmacokinetic and pharmacodynamic interactions, may thus play an important role in deaths due to psychotropic medications, for example by inducing QT prolongation and cardiac arrhythmia, which was suspected in seven cases in the present study, of which quetiapine was involved in three. Weak analgesics, especially paracetamol, account for a large part of hospitalizations due to acute poisoning. However, in the present study, fatalities caused by weak analgesics were rare. This may be due to a high success rate of treating paracetamol poisonings in hospital, as shown in a recent Icelandic study in which the mortality rate was 1.2%. On the other hand, a recent study on in‐hospital deaths due to poisoning found that non‐opioid analgesics were the most common cause of death in single‐substance exposures. Our findings concur with previous findings from the 1980s in our region, where seven fatal paracetamol poisonings occurred, although poisonings with salicylic acid were more frequent. Furthermore, a study found a total of 15 deaths caused by analgesics registered in the Danish cause of death register (CDR) over a 15‐month period in 2013–2014, indicating that fatal poisonings with weak analgesics are relatively rare in Denmark. There are some similarities between fatal poisonings in PUIDs in Denmark and this study's findings in PNUIDs. Opioids are the main cause of death in both populations, although methadone is the most common individual substance in PUIDs.
Additionally, multiple substances are often found in PUIDs, as also seen in the present study. However, females accounted for 60% of deaths caused by opioids and psychotropic medications in the present study, which contrasts with the pronounced preponderance of males in fatal poisonings in PUIDs and in legal autopsy cases in general. We detected a higher number of medications in female cases, mainly driven by weak analgesics, antipsychotics, and benzodiazepine‐like medications. Thus, the results might call for attention to polypharmacy and the risk of poisoning with prescription medication in women not using illicit narcotic drugs. Most decedents in the present study died or were found dead at home, often lying on a bed or couch. This may be consistent with sedation or sleep preceding death, as described previously in opioid‐related deaths. Sleeping into death may prevent timely initiation of resuscitation, even if the person is not alone. Thus, it is important to inform patients and relatives of sedation or unconsciousness as a warning sign. In addition, as medications not prescribed to the decedent contributed to several suicides, actively asking about use of or access to non‐prescribed medication may be appropriate in vulnerable patients. Of interest for forensic investigation, our results point towards an association between being found on the floor and dying from poisoning with psychotropic medications, which may induce cardiac arrhythmias and seizures. This finding should be interpreted cautiously because of the low number of cases, because tramadol poisoning may also cause cardiac arrhythmia and seizures, and because cardiac arrhythmias may occur while resting. A recent study found an association between opioid poisonings and positional asphyxia, and it could be interesting to incorporate death scene investigation for predicting the aetiology of poisoning in future research. From 2017 to 2019, 188 deaths in persons aged above 15 years were registered as ‘self‐poisonings’, 154 as ‘accidental poisonings with medications’, and 159 as ‘accidental poisonings with narcotics, psychedelics, and psychotropic drugs’ in the CDR in the four police districts covered by our department. The CDR contains information about the immediate and underlying cause of death, and registration of all deaths is mandatory by law. Assuming that ‘accidental poisonings with narcotics, psychedelics, and psychotropic drugs’ mainly comprises deaths among PUIDs, our best estimate is that approximately a third of accidental medication poisonings and approximately one fourth of suicides with medication that occurred in PNUIDs above 15 years of age in our region are represented in our study population. However, our population is restricted to decedents for whom the police found a legal autopsy relevant. Thus, most cases in the present study died in private homes, suggesting that persons dying from medication poisoning or adverse effects in hospital, where the cause of death might already have been settled, may be underrepresented in our population. Accordingly, cases with high age and comorbidities, which carry a high risk of polypharmacy and drug–drug interactions, were relatively rare in our population. Unfortunately, the publicly available data did not allow us to determine the aetiology of the registered poisonings in the CDR to elucidate differences from the findings of our study. Furthermore, the validity of the registrations in the CDR can be questioned in cases where no autopsy or toxicological analysis has been performed.
The present study, taken together with the registrations in the CDR, suggests that in most deaths suspected to be caused by medications, the recorded cause of death depends on the attending physician's best estimate, leaving a considerable potential for deeper insight. There are a number of limitations to the present study. First, when death is concluded to be caused by medication, the conclusion is based on all available information from the death scene, police investigation, macroscopic and microscopic autopsy findings, and toxicology. However, other non‐detectable causes, such as epilepsy or cardiac arrhythmia due to other causes, cannot entirely be ruled out. Second, suicide is set as the manner of death if there are clear indications, such as a suicide note. Otherwise, the manner of death is most often set as accident. Thus, the segregation of suicides and non‐suicides is not 100% certain. Third, the initial screening of post‐mortem blood does not cover some rarely used medications or medications with low toxic potential. Furthermore, some components, including lithium, insulin, and digoxin, are only measured when they are suspected to play a role. Finally, not having direct access to prescription registers or patient records may hamper our ability to evaluate prescribed medication and health status; on the other hand, the police and the Danish Patient Safety Authority provide copies of the necessary information on medication and patient records. Thus, some unavoidable uncertainties are associated with the conclusions about aetiology and classification of deaths. However, we consider the quality of our findings high, as they are based on thorough autopsy and toxicology examinations for legal purposes.
CONCLUSION
Opioids, dominated by morphine and tramadol, and psychotropic medications such as quetiapine most frequently caused death in PNUIDs. Monitoring medication‐caused deaths in PNUIDs with autopsy and toxicology may yield important knowledge for prophylactic initiatives regarding medication use and prescription, and may help prevent future deaths; this knowledge cannot be obtained by monitoring deaths in PUIDs or from the current registrations in the CDR.
The authors declare no conflicts of interest.
Table S1. List of medications causing death in less than six cases.
Table S2. Demographics of cases with deaths caused by medication according to sex.
Table S3. Characteristics according to manner of death and sex.
Figure S1. Most frequently detected groups of medication.
Summer thaw duration is a strong predictor of the soil microbiome and its response to permafrost thaw in arctic tundra
4eb96998-e7b0-43ad-9477-0da8596611fd
10092252
Microbiology[mh]
Arctic tundra is underlain by permafrost, an ice‐hardened soil layer defined by its sub‐zero temperatures for two or more consecutive years (Williams & Smith, ). The layer of soil that lies above the permafrost undergoes annual freeze–thaw cycles and is known as the active layer. Collectively, these soils represent an important microbial ecosystem (Jansson & Tas, ) and a globally significant pool of sequestered carbon (Hugelius et al., ; Schuur et al., ; Tarnocai et al., ) that is being thawed and mobilized as climate warms (Hinzman et al., ; Jorgenson et al., ; Osterkamp & Romanovsky, ). Warmer soil temperatures, earlier spring thaw, and later fall freeze‐up have all contributed to an increase in annual thaw depth and an extended duration of annual thaw at specific depths in tundra soil profiles (Barichivich et al., ; Euskirchen et al., ; Serreze et al., ). Yet, few studies have explicitly investigated how increased thaw frequency versus average thaw duration over time has affected the soil microbiome. The genomic potential of soil microbiomes determines the biotic decomposition of soil carbon and its release to the atmosphere as carbon dioxide (CO 2 ) and methane (CH 4 ) (Chen et al., ). Therefore, it is important to determine how increases in frequency and duration of thaw affect the composition of the soil microbiome in order to predict the microbial response to future permafrost thaw and the fate of permafrost soil carbon. It is well established that microbiome composition varies across tundra soils (Deng et al., ; Frank‐Fahle et al., ; Gittel et al., ; Mackelprang et al., ; Müller et al., ; Yergeau et al., ). In the active layer, the composition of the soil microbiome is influenced by geographic distance between sites (Malard et al., ) and variations in landscape topography that control dominant plant species and soil physicochemical properties (Judd et al., ; Judd & Kling, ; Taş et al., ; Zak & Kling, ). For example, geographic distance and plant species can change the regional composition of the active‐layer microbiome given its direct connection with aboveground environmental conditions (Chu et al., ; Malard & Pearce, ; Romanowicz et al., ; Taş et al., ; Tripathi et al., ; Wallenstein et al., ). Additional environmental factors such as active‐layer depth or soil type (Malard et al., ) as well as climatic variables such as temperature and precipitation (Castro et al., ; Nielsen & Ball, ) can also influence the composition of the active‐layer microbiome. In permafrost, the composition of the soil microbiome is influenced by landscape age, with substantial changes in composition found along permafrost chronosequences in response to increasing age and associated stresses of the harsh permafrost environment (Mackelprang et al., ; Saidi‐Mehrabad et al., ). Additional environmental factors such as ice content (Burkert et al., ), dispersal limitations (Bottos et al., ), and thermodynamic constraints imposed by prolonged freezing (Bottos et al., ) can influence the composition of the permafrost microbiome. What remains unclear is whether and how an increase in the frequency of thaw at depth over time, and the duration of that thaw, will affect the microbial composition of soils in the transition zone between the upper active layer and deeper permafrost. Recent studies that attempt to predict the microbial response to permafrost thaw often focus on the microbiome composition as it exists at the time of sampling (Coolen & Orsi, ; Hultman et al., ; Mackelprang et al., ; Waldrop et al., ). 
Yet, over time the harsh permafrost environment can alter the composition of the microbiome by selecting for a subset of taxa originally present when the permafrost formed (Kraft et al., ; Liang et al., ; Willerslev et al., ). In laboratory‐based incubations, these relic permafrost microbiomes have shown rapid, substantial shifts in composition within a few days of thaw (Coolen & Orsi, ; Mackelprang et al., ). The abundance of certain taxa within the relic permafrost microbiome has also been used to predict post‐thaw biogeochemical rates such as methanogenesis (Waldrop et al., ), iron reduction (Hultman et al., ), and soil carbon transformations (Coolen & Orsi, ; Mackelprang et al., ). However, the results of these studies correspond weakly or not at all with multi‐year in situ soil warming experiments that show little or no change in permafrost microbiome composition (Biasi et al., ; Lamb et al., ; Rinnan et al., ). Different outcomes between field and laboratory‐based experiments are likely due to differences in the rates that temperature is manipulated. That is, field‐based studies involve moderate heating of the active layer to mimic its natural extension into the permafrost, whereas laboratory‐based experiments induce rapid permafrost thaw that can lead to substantial changes in the physicochemical properties of the soil and the composition of the permafrost microbiome and its associated biogeochemical functions (Mackelprang et al., ; Ricketts et al., ; Schostag et al., ). Here, we analyse the natural depth transition between thawed and permafrost soil microbiomes, coupled with estimates of intermittent thaw frequency and duration in this transition zone, to determine whether recent decades of intermittent freeze–thaw cycles have induced a compositional change in the relic permafrost microbiome. Predicting microbial responses to thawing permafrost depends on how thaw frequency and thaw duration affect the depth‐dependent composition of the soil microbiome. We approached these questions by assessing how the relic permafrost microbiome responds to intermittent freeze–thaw cycles in the soil transition zone between annually thawed active layer and permafrost. We combine results of thaw frequency and duration from multi‐decadal thaw surveys with the genomic composition of the active layer, transition zone, and permafrost microbiomes measured at 10‐cm increments along soil profiles of arctic tundra. The results demonstrate (1) how soil microbiomes differ regionally between sites located on distinct landscape ages and between distinct tundra types; (2) how active layer and permafrost microbiomes differ from each other between sites and tundra types; (3) that the transition zone microbiome remains indistinguishable from the permafrost microbiome even after decades of intermittent thaw, and (4) how thaw duration rather than intermittent thaw frequency has a greater impact on the composition of soil microbiomes at these arctic tundra sites. We propose that changes in microbiome composition across the transition from long‐to‐short duration thaw may be used to predict shifts in microbiome composition in response to future permafrost thaw. Three sites were selected for this study on the North Slope of the Brooks Range in northern Alaska, USA (Figure ). Sites were (1) Toolik Lake (68°37′16.18″ N, 149°36′54.17″ W); (2) Imnavait Creek (68°36′35.36″ N, 149°18′29.80″ W); and (3) Sagwon Hills (69°20′36.81″ N, 148°45′31.75″ W). Landscape age and glaciation surface differed by site (see Walker et al., ). 
Briefly, soils in the Toolik Lake area formed on an Itkillik II‐age landscape (~14 kyr BP), soils in the Imnavait Creek area formed on a Sagavanirktok‐age landscape (~250 kyr BP), and soils in the Sagwon Hills area formed on a Gunsight Mountain‐age landscape (~2500 kyr BP). Topography controlled the dominant plant species at each site such that hillslopes supported moist acidic tussock (MAT) tundra and valley bottoms supported wet sedge (WS) tundra. See Appendix for additional details. At each site, the vertical soil profile of MAT and WS tundra was sampled in 10‐cm increments down to 1 m depth. Within MAT tundra, soil pits (~1 m 2 and 1 m deep) were excavated using a jack hammer, shovels, and pickaxe to expose the soil profile. Soil samples (~25 g) were collected in duplicate from each 10‐cm increment along the soil profile using a chisel rinsed with 70% ethanol between depths. Samples were placed in 50 ml Falcon tubes in a cooler with ice, and immediately frozen at −80°C upon return from the field. Within WS tundra, soil cores were collected using a SIPRE (Snow, Ice, and Permafrost Research Establishment, Tarnocai, ) corer with carbide bits (Jon's Machine Shop, Fairbanks, AK). Soil cores were extruded then scraped parallel to soil layers using aseptic techniques in the field to remove outer layers of soil and then separated into 10‐cm increments. Soil samples (~25 g) were collected in duplicate from each 10‐cm increment along the soil core using a chisel rinsed with 70% ethanol between depths. Samples were placed in 50 ml Falcon tubes in a cooler with ice, and immediately frozen at −80°C upon return from the field. Soil sampling at each site took place over the course of 1 day between 10 and 20 July 2018 for a total of 51 samples for analysis. Annual thaw depth measurements began in 1990 at the Toolik MAT site, and in 2003 at the Imnavait MAT and WS sites. Annual thaw depth measurements have not been conducted at the Sagwon sampling location. Thaw frequency over time was characterized by calculating the probability of thaw for each 10‐cm depth increment of the soil profile. This represents the probability that in any given year the surface thaw would reach that depth. Probabilities were calculated for July and August sampling dates as measured from thaw depth surveys at Toolik MAT (1990–2018), Imnavait MAT (2003–2018), and Imnavait WS (2003–2018) tundra sites. Average thaw duration (i.e. the minimum number of days the soil at a given depth was thawed) was calculated from thaw depth measurements taken at seven time points from 2 June to 20 August in 2018 at Toolik and 21 June to 21 August in 2018 at Imnavait (see also Figure . These are ‘minimum’ estimates of thaw duration because any particular depth may have thawed just after one survey but not been detected until the next survey. See Appendix S1 for additional details. Soil physicochemical properties including soil pH, electrical conductivity (μS cm −1 ), water content (%), and organic carbon content (%) were measured from each 10‐cm soil profile increment. Bacterial cell viability assays were performed on select soil profile increments using three depths at each sampling location to represent a surface active‐layer depth (~10–20 cm), a transition zone depth (~40–50 cm), and a permafrost depth (~70–80 cm). The viability assays were performed with a Live/Dead BacLight Bacterial Viability kit (Invitrogen) following methods from Burkert et al. ( ). 
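Returning briefly to the thaw surveys described above, the following is a minimal sketch in R (not the authors' code) of how thaw probability, minimum thaw duration, and the resulting layer labels could be derived; the data frame 'surveys' and its columns 'year', 'date' (class Date), and 'thaw_depth_cm' are hypothetical placeholders, as are the classification thresholds.

# Minimal sketch (assumed input): one row per thaw-depth survey, with the
# survey year, the survey date (class Date), and the measured thaw depth (cm).
library(dplyr)

depth_bins <- seq(10, 100, by = 10)   # lower boundary (cm) of each 10-cm increment

# Thaw probability: fraction of survey years in which the August thaw depth
# reached at least each depth increment.
august_max <- surveys %>%
  filter(format(date, "%m") == "08") %>%
  group_by(year) %>%
  summarise(max_thaw = max(thaw_depth_cm), .groups = "drop")
thaw_probability <- sapply(depth_bins, function(d) mean(august_max$max_thaw >= d))

# Minimum thaw duration in 2018: days between the first and last survey dates on
# which the thaw front had reached a given depth (a lower bound, because thaw
# occurring between surveys goes undetected).
surveys_2018 <- filter(surveys, year == 2018)
thaw_duration_days <- sapply(depth_bins, function(d) {
  dates <- surveys_2018$date[surveys_2018$thaw_depth_cm >= d]
  if (length(dates) == 0) 0 else as.numeric(max(dates) - min(dates))
})

# Label each increment: thawed every surveyed August = active layer, thawed in
# some but not all years = transition zone, never observed thawed = permafrost.
layer <- ifelse(thaw_probability >= 1, "active layer",
                ifelse(thaw_probability > 0, "transition zone", "permafrost"))
data.frame(depth_cm = depth_bins, thaw_probability, thaw_duration_days, layer)

The thresholds above simply encode "thawed every year" versus "thawed in some years"; the study's own layer boundaries come from the measured probabilities reported later in the text.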
Cell counts were corrected to calculate the average number of cells per gram of soil (dry weight). See Appendix for additional details. Genomic DNA was extracted from each 10‐cm soil profile increment and amplified through polymerase chain reaction (PCR) using dual‐barcoded 16S rRNA gene primers 515f‐806r (Apprill et al., ; Parada et al., ) to profile the bacterial and archaeal communities. PCR amplicons were pooled into a single library and submitted to the University of Michigan Microbiome Core for high‐throughput sequencing on the Illumina MiSeq platform. Sequencing data were downloaded from Illumina BaseSpace and analysed using QIIME2 (v. 2020.11) (Bolyen et al., ) on the Great Lakes high‐performance computing cluster (University of Michigan, USA). Raw forward and reverse sequencing reads were quality filtered with DADA2 (Callahan et al., ). Taxonomy was assigned to amplicon sequence variants (ASVs) using scikit‐learn naïve Bayes taxonomy classifier (Pedregosa et al., ) against the SILVA sequence database (v. 138) (Quast et al., ). ASVs were chosen over operational taxonomic units (OTUs) following recent benchmark studies (Callahan et al., ; Chiarello et al., ). ASVs were filtered to remove chloroplast, mitochondria, and ASVs not assigned to bacteria or archaea classification. To assess community composition along the depth profile, samples were rarefied to 50,487 sequences per sample (average 116,360 QC sequences per sample prior to rarefying), with rarefaction plots asymptotic at ~20,000 sequences for all samples. See Appendix S1 for a summary of sequencing read statistics (Table ). QIIME2 artefacts were exported to R (v. 4.2.1) (R Core Team, ) using the ‘qiime2R’ package (Bisanz, ), where all statistical analyses were conducted and considered significant at p < 0.05. One‐way analysis of variance (ANOVA) was used to assess differences in soil physicochemical properties and microbiome alpha diversity between sites, tundra type, soil layer, and their interactions across the sampling region. Multivariate statistical analysis of the microbiome data was conducted using the ‘microeco’ package (Liu et al., ) and the ‘vegan’ package (Oksanen et al., ). Bray–Curtis dissimilarity was calculated for ASV abundance and analysed via permutational multivariate analysis of variance (PERMANOVA) using the ‘adonis()’ function (999 permutations) in the ‘vegan’ package to determine the effects of site, tundra type, and their interactions on the composition of the soil microbiome across the sampling region. Venn diagrams were generated to visualize the unique and shared ASV counts and their relative abundance between sampling sites or between tundra types. Differences in ASV abundance by individual soil depths across all sites and tundra types were visualized through unconstrained NMDS ordination using the ‘metaMDS()’ function in the ‘vegan’ package. Hierarchical clustering analysis was used to determine which soil depths were statistically similar to each other based on the vertical distribution of dominant microbial taxa, where an agglomerative clustering algorithm calculated similarity of mean z‐scaled relative abundance values by soil depth and determined the optimal number of clusters from 10,000 bootstrap iterations using the ‘pvclust()’ function in the ‘stats’ package (R Core Team, ). 
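As an illustration of the artefact import and beta‐diversity testing just described, a minimal sketch in R follows (not the authors' exact script); the file names 'rarefied_table.qza' and 'sample_metadata.tsv' and the metadata columns 'site' and 'tundra' are hypothetical placeholders, and adonis2() is used here as the current form of the 'adonis()' function named above.

# Minimal sketch: import a QIIME2 feature table with qiime2R, then test
# beta diversity with vegan (Bray-Curtis, PERMANOVA, NMDS).
library(qiime2R)
library(vegan)

asv  <- read_qza("rarefied_table.qza")$data          # ASV-by-sample count matrix
meta <- read.delim("sample_metadata.tsv", row.names = 1)

asv_t <- t(asv)                                      # samples as rows, as vegan expects
meta  <- meta[rownames(asv_t), , drop = FALSE]       # align sample order with the table

bray <- vegdist(asv_t, method = "bray")              # Bray-Curtis dissimilarity

# PERMANOVA (999 permutations) for site, tundra type, and their interaction.
adonis2(bray ~ site * tundra, data = meta, permutations = 999)

# Unconstrained NMDS ordination of the same community data.
nmds <- metaMDS(asv_t, distance = "bray", k = 2, trymax = 100)
plot(nmds, display = "sites")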
Hierarchical clustering analysis was also used to determine which microbial taxa were statistically similar to each other across soil depths based on the mean z‐scaled relative abundance values of each taxon at each depth, with the optimal number of clusters determined from 10,000 bootstrap iterations using the ‘pvclust()’ function. The hierarchical clustering results for soil depth and taxon clusters were visualized together with the ‘pheatmap’ package (Kolde & Kolde, ). Spearman's non‐parametric rank correlations (rho) were used to determine the similarity of the dominant microbial taxa (via relative abundance) with soil physicochemical properties and thaw duration measurements across all depths of the soil profile using the ‘cor.test()’ function. Thaw duration measurements were non‐normally distributed along soil profiles and a non‐parametric correlations test such as Spearman's rho was most appropriate for the data. See Appendix for additional details regarding thaw frequency and duration measurements. We sampled the depth‐dependent composition of the soil microbiome at 10‐cm increments down soil profiles under two distinct tundra types at each of three sites separated by ~90 km on the North Slope of Alaska, USA (Figure ). Tundra types included moist acidic tussock (MAT) tundra found on hillslopes and wet sedge (WS) tundra found in valley bottoms (see for full description). The depth‐dependent distribution of microbial taxa in tundra soils has previously been investigated (Kim et al., ; Müller et al., ; Tripathi et al., , ) and is known to be highly influenced by geographic separation between sites (biogeography) and distinct tundra types that are associated with the taxonomic composition of the active layer and permafrost microbiomes (Deng et al., ; Singh et al., ; Varsadiya et al., ; Wilhelm et al., ). Our results from high‐throughput DNA sequencing analysis of bacteria and archaea are generally consistent with these previous studies by showing geographic and depth‐dependent variation in microbiome composition. However, we analyse this variation in detail and relate changes in microbiome composition from the active layer through a transition zone into permafrost to the frequency and duration of intermittent thaw as determined by multi‐decadal thaw surveys. Soil microbiome diversity For the soil microbiome analysis, 16S rRNA gene amplicons were resolved to amplicon sequence variants (ASVs) rather than operational taxonomic units (OTUs) based on recent benchmark studies comparing both sequence inference methods (Callahan et al., ; Chiarello et al., ). ASVs showed no statistical difference in taxonomic alpha diversity between sites, tundra type, or site × tundra type interactions using Chao1 and Shannon diversity metrics (ANOVA; p > 0.05 for all; Figure ) when all depths were included. However, there were significant depth‐dependent differences in taxonomic alpha diversity between soil layers when soil depths were pooled as active layer (0–40 cm MAT tundra; 0–50 cm WS tundra) or permafrost (40+ cm MAT tundra, 50+ cm WS tundra), but only when soil layers were compared across all sampling locations (ANOVA; p < 0.001). There were no significant differences in alpha diversity between active layer and permafrost soil layers within any single sampling location (ANOVA; p > 0.05 for each). 
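Before turning to the diversity results, a minimal sketch in R of the clustering, heatmap, and correlation steps just described follows (not the authors' script); the objects 'taxa_by_depth' (mean relative abundances with dominant taxa as rows and 10‐cm depth increments as columns for one profile) and 'thaw_days' (minimum thaw duration for the same increments) are hypothetical placeholders, and the distance and linkage choices are illustrative only.

library(pvclust)
library(pheatmap)

z <- t(scale(t(taxa_by_depth)))        # z-scale each taxon (row) across depths

# pvclust clusters the columns of its input, so columns (= depths) are clustered
# here; use t(z) instead to cluster taxa. nboot matches the 10,000 bootstrap
# iterations described above.
depth_clusters <- pvclust(z, method.hclust = "average",
                          method.dist = "euclidean", nboot = 10000)
plot(depth_clusters)

# Heatmap of the same z-scaled abundances with hierarchical clustering of
# rows (taxa) and columns (depths).
pheatmap(z, clustering_method = "average")

# Spearman rank correlation (rho) between each taxon's abundance profile and
# the minimum thaw duration across the same depths.
spearman_results <- t(apply(taxa_by_depth, 1, function(abund) {
  ct <- cor.test(abund, thaw_days, method = "spearman")
  c(rho = unname(ct$estimate), p.value = ct$p.value)
}))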
Beta diversity results based on Bray–Curtis dissimilarity of ASV abundance indicated significant differences in taxonomic composition between sites, tundra type, soil layer, and their interactions (PERMANOVA; p < 0.001 for all; Figure ). Venn diagrams indicated that the greatest relative abundance of ASVs were shared between all three sites (52.8%; Figure ), with an additional 25.4% of ASV abundance shared between at least two sites (Figure ). Notably, these ASVs represented 78.2% of the relative abundance of all ASVs shared between sites but consisted of only 4550 ASVs out of 23,798 total ASVs (Figure ). Likewise, the dominant ASVs shared between tundra types (69.8%; Figure ) consisted of only 3328 ASVs out of 23,798 total ASVs (Figure ). The remaining extent of ASVs were unique to each site (19,248 ASVs; Figure ) or each tundra type (20,470 ASVs; Figure ). Sagwon had the greatest abundance of unique ASVs (13.4%) compared with Toolik (4%) and Imnavait (4.3%; Figure ), while WS tundra had more unique ASVs (19.3%) compared with MAT tundra (10.9%; Figure ). Collectively, the abundance of unique ASVs by site (21.7%) or by tundra type (30.2%) suggests the potential for regional endemism consistent with previous biogeographical microbiome surveys across the Arctic (Malard et al., ). Differences in ASV abundance by individual soil depths across all sites and tundra types were visualized through unconstrained non‐metric multidimensional scaling (NMDS) ordination based on Bray–Curtis dissimilarity (Figure ). At Toolik and Imnavait sites, it appears that the relative abundance of ASVs by soil depth are strongly influenced by tundra types shared between sites rather than between tundra types within each site (Figure ). Notably, we found that the soil depths corresponding to MAT tundra or WS tundra overlapped between Toolik and Imnavait sites in ordination space based on their shared abundance of ASVs (Figure ). Beta diversity measurements also indicated no significant difference in taxonomic abundance of ASVs shared between these sites based on tundra type (PERMANOVA; p > 0.05 for each), while there were significant differences between tundra types within each site (PERMANOVA; p < 0.05 for each; Figure ). These results are likely due to differences in the dominant plant species known to affect soil physicochemical properties that regulate microbiome composition (Judd et al., ; Judd & Kling, ; Taş et al., ; Zak & Kling, ). For example, moist acidic tussock tundra is dominated by sedges ( Eriophorum vaginatum ) and dwarf shrubs ( Betula nana , Ledum palustre ) on hillslopes with greater soil water drainage than wet sedge tundra, which is dominated entirely by sedges ( Carex aquatilis , C. chordorrhiza , C. rotunda ) in valley bottoms where soil water accumulates (Walker et al., ). The ecological differences in dominant plant species between MAT and WS tundra affect soil physicochemical properties such as soil pH, water content, and the accumulation of organic C (Table ) and likely account for the overlapping ordination of soil depths by tundra type between Toolik and Imnavait sampling locations (Figure ). Our results are also consistent with numerous studies showing regional variations in taxonomic abundance linked to tundra type (Chu et al., ; Judd et al., ; Tripathi et al., ; Wallenstein et al., ; Zak & Kling, ). In contrast, we found overlapping ordinations for soil depths corresponding to MAT and WS tundra at Sagwon based on shared ASV abundance within the site (Figure ). 
However, there was a significant difference in taxonomic beta diversity based on the abundance of ASVs shared between tundra types at Sagwon (PERMANOVA; p < 0.001; Figure ) that may account for the distinct clustering of MAT tundra soil depths within the more dispersed cluster of WS tundra depths (Figure ). The Sagwon landscape is ~2500 kyr BP in age compared to the youngest landscape at Toolik (~14 kyr BP) and the intermediate‐aged Imnavait (~250 kyr BP), and our sites represent a natural Pleistocene chronosequence across the sampling region (Walker et al., ). The greater proportion of ASVs unique to the considerably older Sagwon landscape (13.4%; Figure ) compared to the younger aged landscapes at Toolik and Imnavait (4%, 4.3%, respectively; Figure ) is also consistent with previous arctic studies showing substantial variations in taxonomic abundance by landscape age due to differences in soil physicochemical properties that develop over geologic time (Mackelprang et al., ; Saidi‐Mehrabad et al., ). Here, we found that soil pH and conductivity were significantly greater at Sagwon compared to the other sites (ANOVA; p < 0.05; Table ), which may account for the higher proportion of unique ASVs at Sagwon (Figure ) and the overlapping soil depths by tundra types within Sagwon that ordinated separately from the same tundra types shared between Toolik and Imnavait (Figure ). Soil pH has previously been identified as a key factor influencing the abundance of microbial taxa over both small and large geographic scales in the Arctic (Chu et al., ; Ganzert et al., ; Siciliano et al., ), and pH can vary by landscape age (Hultman et al., ; Mackelprang et al., ; Saidi‐Mehrabad et al., ). Alternatively, the greater geographic distance between Sagwon and Toolik or Imnavait (~90 km; Figure ) may have influenced microbial distribution (e.g. via dispersal limitations) across the region such that the effects of tundra type on ASV abundance by soil depths at Sagwon was minimized in our NMDS ordination (Figure ). All these overlapping ordination patterns of soil depths based on ASV abundance in the unconstrained NMDS plot were consistent with the relative abundance of shared and unique ASVs shown in the Venn diagrams (Figure ) and with the significant differences measured in taxonomic beta diversity (Figure ). Furthermore, the surface depth (0–10 cm) from five of six sampling locations (Sagwon MAT tundra as the exception) ordinated separately from all other depths within their respective soil profiles and coalesced near each other in ordination space (grey ellipse; Figure ). This also included the 10–20 cm surface depth for MAT tundra at Toolik and Imnavait (Figure ), which together indicate a unique taxonomic composition within the surface depths that is consistent across sampling locations. This strong pattern of shared taxa in surface depths across the tundra region regardless of tundra type or landscape age has not been shown before and could imply that a common dispersal mechanism such as wind is homogenizing the microbiome composition at the soil surface, and that this mechanism is strong enough to overcome the effect of differences in dominant plant species by tundra type. Soil microbiome composition ASV annotations, derived from the SILVA database (v. 138; Quast et al., ), indicated that the soil microbiome at all sampling locations with all depths combined was dominated by bacteria (>98% relative abundance). 
However, the relative abundance of dominant bacterial taxa changed with depth at all sampling locations (Figure ). Using hierarchical clustering analysis on depth‐dependent differences in bacterial abundance, we found that all soil profiles clustered into several distinct soil layers (Figure ). Figure delineates distinct soil layers within each soil profile with black horizontal lines between soil depths as determined from hierarchical clustering analysis (Figure ) to visualize the depth‐dependent differences in bacterial abundance along the soil profiles. For Toolik and Imnavait MAT tundra, we consider the distinct soil clusters from 0 to 40 cm depth to represent the active layer, while the active layer in Sagwon MAT tundra ranged from 0 to 30 cm depth (Figure ). There was also an additional cluster of soil depths within the active layer of Toolik and Imnavait MAT tundra where the abundance of bacterial taxa clustered by different soil types (Figure ). Specifically, the surface soil depths (0–20 cm) were composed of organic soil and clustered distinctly from the lower half of the active layer (20–40 cm) that was composed of mineral soil (Figure ). Likewise, the Sagwon MAT tundra active layer (0–30 cm) was composed of organic soil that clustered distinctly from the permafrost depths (40+ cm) that were composed of mineral soil (Figure ). Similar results were also found in previous tundra studies where bacterial abundance was strongly related to a shift in substrate availability between the organic and mineral soil horizons (Deng et al., ; Koyama et al., ). The change from organic to mineral soil type could also account for the distinct ordination patterns of surface depth MAT tundra samples in our NMDS plot (0–10 cm, 10–20 cm; Figure ). The distinct cluster of bacterial taxa after 40‐cm depth (30‐cm Sagwon) in MAT tundra (Figure ) was not associated with a change in soil type because all soil depths below 20 cm (30‐cm Sagwon) were composed of mineral soil (Figure ). Thus, the distinct cluster after 40‐cm depth (30‐cm Sagwon) likely indicates the average permafrost boundary (described below). In contrast, the WS tundra soil profiles were composed entirely of organic soil (Figure ); thus, the clustering patterns of shared bacterial taxa into distinct soil layers are not due to major changes in soil type with depth (Figure ). Rather, these clustering patterns are likely due to the substantial differences in taxonomic abundance at the surface‐most depth (0–10 cm in Toolik and Imnavait WS; Figure ) and additional differences at the permafrost boundary (50+ cm Toolik WS; 60+ cm Imnavait WS; 40+ cm Sagwon WS; Figure ). These significant taxonomic differences at the permafrost boundary are likely due to the limitations imposed on the microbiome from prolonged freezing (see also Kraft et al., ; Liang et al., ; Willerslev et al., ). Imnavait WS tundra was the only soil profile with several non‐significant clusters of bacterial taxa by distinct depths, with only the 40–60 cm depths forming a significantly distinct cluster based on the relative abundance of shared bacterial taxa within these soil depths (95% similarity; Figure ). 
The observed shift in composition between the active layer and permafrost microbiomes at all sites was largely due to variations in the abundance of Acidobacteriota, Actinobacteriota, and Bacteroidota (Figure ), generally consistent with previous studies in arctic soils (Deng et al., ; Frank‐Fahle et al., ; Kim et al., ; Müller et al., ; Singh et al., ; Tripathi et al., , ; Varsadiya et al., ; Wilhelm et al., ). For example, we found that Acidobacteriales , Solibacteriales , and Vicinamibacterales (Acidobacteriota) abundance was greater in the active‐layer microbiome compared to the permafrost microbiome, especially within MAT tundra, while Gaiellales , Micrococcales , RBG‐16‐55‐12, and WCHB1‐81 (Actinobacteriota), as well as Bacteroidales and Sphingobacteriales (Bacteroidota) abundance was greater in the permafrost microbiome of both tundra types (ANOVA; p < 0.05 where indicated; Table ). However, we did find that dominant Acidobacteriota taxa in the active‐layer soil depths differed between Toolik and Imnavait MAT tundra compared to Sagwon MAT tundra (Table ). Specifically, Acidobacteriota within the active‐layer microbiome of Toolik and Imnavait MAT tundra consisted of Acidobacteriales (7.8%–10.4%) and Solibacteriales (1.6%–1.7%), with less than 1.5% relative abundance of Vicinamibacterales (Table ). The Sagwon MAT tundra active‐layer microbiome contained no measurable abundance of Acidobacteriales or Solibacteriales but had ~6.5% of Vicinamibacterales (Table ). The active‐layer microbiome within Sagwon WS tundra was similar and contained ~3.1% relative abundance of Vicinamibacterales (Table ). The similarly high relative abundance of Vicinamibacterales within Sagwon MAT and WS tundra may also account, in part, for their shared ordination patterns in our NMDS plot that clustered distinctly from either tundra type at Toolik and Imnavait (Figure ). In addition to the Acidobacteriota, the relative abundance of Pseudomonadota (formerly Proteobacteria) such as Rhizobiales (Alphaproteobacteria) and Burkholderiales (Gammaproteobacteria), as well as Geobacterales (Desulfobacterota) and Verrucomicrobiota including Chthoniobacterales and Pedosphaerales were greater in the active‐layer microbiome across the region (Table ), consistent with studies conducted at numerous sites across the Arctic (reviewed by Malard & Pearce, ). The higher abundance of Pseudomonadota in the active layer could be related to their preference for higher concentrations of nutrients (Kim et al., ), especially given that the relative abundance of Alphaproteobacteria and Gammaproteobacteria taxa were shown to increase after fertilization compared with control plots at Toolik (Campbell et al., ; Koyama et al., ). Also, the greater abundance of bacterial taxa such as Geobacterales (Desulfobacterota) in the active‐layer microbiome of WS tundra (Table ) is likely because they thrive under saturated conditions common in the active layer of WS tundra (Emerson et al., ), as has been previously reported from the Toolik Lake region (Romanowicz et al., ). This suggests that the depth‐dependent variations in the soil microbiome could be related to the different resource needs of each bacterial taxon. In contrast, the permafrost microbiome was similar across sites due to an increase in the relative abundance of Actinobacteriota and Bacteroidota (Figure ), as mentioned earlier. 
In addition, the relative abundance of Caldisericota and Firmicutes also increased in the permafrost microbiome (Figure ) and clustered together consistently with the greater relative abundance of Actinobacteriota and Bacteroidota with depth (Figure ). We note that Caldisericota ( Caldisericales ) in the permafrost microbiome (Figure ; Table ), also reported in numerous arctic studies (Monteux et al., ; Taş et al., ; Tripathi et al., , ; Varsadiya et al., ), was recently proposed as Candidatus Cryosericota phylum (Martinez et al., ), although we retain the taxonomic annotation as Caldisericota throughout this study (see Appendix for full details). The high relative abundance of Actinobacteriota such as Gaiellales and Micrococcales in the permafrost microbiome (Table ) is likely due to their ability to form dormant and spore‐like structures (Wunderlin et al., ) that can survive radiation, starvation, and extreme desiccation (De Vos et al., ; Johnson et al., ). Likewise, Bacteroidales (Bacteroidota) and Clostridiales (Firmicutes) are known to form endospores under the stressful conditions associated with permafrost, and they persist at higher relative abundance than non‐endospore forming taxa common in the active‐layer microbiome that perish in the harsh conditions (Burkert et al., ). This loss of non‐endospore forming taxa is compounded with time causing further convergence in the composition of the permafrost microbiome across the region as the permafrost ages (see Liang et al., ; Mackelprang et al., ). Total cell counts by depth along each soil profile showed that the absolute abundance of bacterial cells was similar between the active layer and permafrost depths (~10 6 to 10 8 cells per gram of soil; Table ), and only the relative abundance of bacterial taxa changed with depth (Figure ). Likewise, bacterial cell viability assays (% live cells) showed similar numbers of live cells with increasing soil depth across all sampling locations (Table ). Thus, the permafrost microbiome maintains a similar abundance of bacterial cells with similar proportions of live cells compared to the active‐layer microbiome. However, the permafrost microbiome has developed into a relic composition consisting of only a subset of bacterial taxa originally present at the time of permafrost formation, such as has been shown previously (Burkert et al., ; Liang et al., ). Our results strongly suggest that the permafrost microbiome has converged toward a shared subset of bacterial taxa capable of withstanding the harsh permafrost environment, and these taxa are no longer regulated by the same environmental factors affecting the overlying active‐layer microbiomes across the region. The relative abundance of archaeal taxa was <3% at any given site or depth (Figure , Figure ) but still differed statistically between tundra types (ANOVA; p < 0.001). Archaeal taxa consisted primarily of methanogenic Euryarchaeota such as Methanobacteriales and Methanosarcinales , consistent with similar observations across the tundra region (Deng et al., ; Hultman et al., ; Lipson et al., ; Romanowicz et al., ; Tripathi et al., ). Methanobacteriales are hydrogenotrophic methanogens and Methanosarcinales are acetoclastic methanogens, both of which are commonly found in saturated organic peat soils (Conrad et al., ; Deng et al., ; Metje & Frenzel, ; Tveit et al., ). 
Here, these methanogenic archaeal taxa were found predominantly in the surface depths (0–50 cm) of WS tundra at Toolik and Imnavait, while archaeal taxa in general were negligible in MAT tundra across all sites (Figure ). The greater relative abundance of both groups of methanogenic archaea in the WS tundra microbiome is likely due to relatively flat topography and the lack of soil drainage imposed by the permafrost boundary, both of which facilitate persistent saturation of the active layer and subsequent anoxia, providing substrates to carry out multiple fermentative pathways of methanogenesis. Thaw frequency and the transition zone microbiome Thaw depth measurements collected at three of our sampling sites annually in July and August since 1990 (Toolik MAT tundra) or 2003 (Imnavait MAT and WS tundra; see Appendix ) show the annual and seasonal extent of thawed versus frozen soil (Table ). The mean August thaw depth (±SD) over the survey duration was 40.5 cm (±5.3 cm) for Toolik MAT tundra, 43.9 cm (±6.2 cm) for Imnavait MAT tundra, and 56.3 cm (±5.7 cm) for Imnavait WS tundra. Thaw depth increased at a mean rate of 0.34 cm yr −1 for Toolik MAT tundra, 0.85 cm yr −1 for Imnavait MAT tundra, and 0.84 cm yr −1 for Imnavait WS tundra. The thaw survey data over time show the frequency of summer thaw to any depth, and we converted thaw depth measurements (cm) into thaw probabilities (% of thaw occurrence in any 1 year) for each 10‐cm increment of the soil profile over the survey duration (Table ). Depths that thawed intermittently in August (i.e. not every year) were considered to be in the ‘transition zone’ between high probability of thaw in a year for the active layer and low probability of thaw in a year for the permafrost. Within MAT tundra, thaw probabilities for July and August in the soil profiles were similar between Toolik and Imnavait, with the active layer extending from 0 to 40 cm, the transition zone from 40 to 60 cm, and permafrost from 60+ cm depth (Table ). For Imnavait WS tundra, we found the active layer extended from 0 to 50 cm, the transition zone from 50 to 70 cm, and permafrost from 70+ cm depth (Table ). None of the transition‐zone depths at any sampling location experienced thaw at the July sampling point for the entire duration of thaw surveys (July thaw probability = 0%; Table ). By comparing soil depths within each soil layer as determined from thaw surveys, with results from the hierarchical clustering analysis of microbial taxa (Figure ), we demonstrate statistically that microbial composition in the transition‐zone depths always clusters with the composition of the permafrost depths for the three sampling locations having thaw survey data. This indicates that the microbiome composition of the transition zone was statistically indistinguishable from the composition in the permafrost. Results from the transition‐zone microbiome confirm our prediction that the permafrost microbiome has not been substantially altered in composition when exposed to intermittent freeze–thaw cycles. This differs from previous laboratory‐based experiments of the permafrost microbiome that show rapid shifts in the composition and associated biogeochemical functions of microbial taxa within a few days of simulated thaw (Coolen & Orsi, ; Hultman et al., ; Mackelprang et al., ; Waldrop et al., ). 
Our results are more consistent with previous field‐based warming experiments that show little or no change in permafrost microbiome composition following moderate heating of the active layer that mimics its natural extension into the permafrost (Biasi et al., ; Lamb et al., ; Rinnan et al., ). As such, this study demonstrates that in natural settings, it takes more than just intermittent thaw, for example ~0.1–0.8 probability that the soil will thaw in any single year (Table ), to induce compositional change in the transition‐zone microbiome. Our transition‐zone microbiome results also contrast with a previous soil profile survey from high Arctic heath at Svalbard, Norway (Müller et al., ), where microbial taxa in the thaw transition zone differed in their relative abundance by >60% from the permafrost microbiome. This difference could be because their transition zone is narrower than ours, or they had finer resolution of the soil profile sampling at 3‐cm intervals rather than the 10‐cm interval we used. Homogenization of microbial taxa within a 10‐cm soil increment that spans the depths of the transition zone and permafrost soil layers may have hidden subtle changes in taxonomic abundance between these soil layers. However, our thaw surveys showed that depths with intermittent thaw ranged from a minimum of 15 cm (Imnavait WS tundra) up to 23 cm (Toolik MAT tundra) through the soil profiles; these depths exceed any single 10‐cm soil increment, and thus potential homogenization of the transition‐zone depths with the permafrost depths should be minimized. These results are the first to show that composition of the transition‐zone microbiome has not significantly shifted from the permafrost microbiome even though these soil depths have experienced intermittent thaw for on average 61% of the past 28 years at Toolik or 65%–82% of the past 15 years at Imnavait (Table ). Thaw duration In addition to the frequency of thaw, we determined the thaw duration (i.e. minimum number of thaw days in a summer) that each 10‐cm increment along the soil profile experienced in 2018 (Table ); the year 2018 corresponds to the field season when soil profiles were sampled for their microbiome composition. These are ‘minimum’ estimates because a depth may have thawed in between successive thaw survey dates or after the final survey (see Appendix ). The transition‐zone depths had less than half the number of thaw days than the average thaw days of the active layer. Specifically, the Toolik MAT tundra transition zone (40–60 cm) on average thawed only a quarter of the time (24.7%) that the active layer (0–40 cm) thawed (Table ). In Imnavait MAT tundra the transition zone (40–60 cm) had only 25.5% of mean active layer (0–40 cm) thaw days, and Imnavait WS tundra transition zone (50–70 cm) had 43.9% of mean active layer (0–50 cm) thaw days (Table ). This decline of thaw duration in the transition zone depths compared to the active‐layer depths, as well as the complete lack of thaw in the transition zone depths in July over all survey years, may account for the similar composition of the transition zone microbiome with the permafrost microbiome in hierarchical clustering analysis (Figure ). Spearman's non‐parametric rank correlations between thaw duration and the relative abundance of microbial taxa by depth along the soil profiles showed consistent, significant patterns between the three sampling locations associated with thaw survey data. 
In MAT tundra, Acidobacteriota, Alphaproteobacteria, and Verrucomicrobiota were significantly positively correlated with thaw duration at Toolik (Figure ) and Imnavait (Figure ), with Myxococcota and Planctomycetota also significantly positively correlated with thaw duration at Toolik (Figure ). This indicates that their relative abundance was greatest in soil depths experiencing the greatest duration of thaw (i.e. active‐layer depths). In WS tundra at Imnavait, Acidobacteriota, Myxococcota, and Alphaproteobacteria were also significantly positively correlated with thaw duration, but Verrucomicrobiota and Planctomycetota were not (Figure ). These correlations, or lack thereof, distinguishes the dominant WS tundra microbial taxa from the dominant MAT tundra microbial taxa in the active layer by hierarchical clustering analysis (Figure ). In contrast, Actinobacteriota, Bacteroidota, Caldisericota, and Firmicutes were significantly negatively correlated with thaw duration at all three sampling locations (Figure ). This indicates that their relative abundance was greatest in soil depths experiencing the shortest duration of thaw each year (i.e. permafrost depths). The significant positive and negative correlations between thaw duration and the relative abundance of microbial taxa at each soil depth are consistent with the significant clusters of microbial taxa in the active layer and the permafrost depths derived from hierarchical clustering analysis, respectively (Figure ). These correlation results are also consistent with significant depth‐dependent differences in the relative abundance of dominant bacterial taxa between the active layer and permafrost depths (Table ). For example, the significant positive correlation between thaw duration and the abundance of Alphaproteobacteria at all three sites (Figure ) was consistent with a greater abundance of Rhizobiales (Alphaproteobacteria) in the active layer depths of those same sites (ANOVA; p < 0.05 for MAT tundra sites; Table ). This suggests that the greater abundance of Alphaproteobacteria, specifically the Rhizobiales , in the active layer depths is in part a direct result of the greater duration of annual thaw at these corresponding depths along the soil profile. Likewise, the significant positive correlation between thaw duration and the abundance of Acidobacteriota (Figure ) likely accounts for the significantly greater abundance of Acidobacteriales and Solibacterales in the active layer depths of all three sites associated with thaw duration measurements (Table ). In a similar way, but with the opposite effect, the significant negative correlations between thaw duration and the abundance of Actinobacteriota, Bacteroidota, Caldisericota, and Firmicutes at all three sites (Figure ) is most likely due to the low or non‐existent thaw duration in the transition zone and permafrost soil depths (Table ). Prolonged freezing in permafrost soils significantly decreases dispersal and imposes substantial thermodynamic constraints that influence the composition of the permafrost microbiome (Bottos et al., ). As such, the permafrost microbiome converges toward a relic composition dominated by dormant and spore‐forming taxa such as Gaiellales and Micrococcales (Actinobacteriota), Bacteroidales (Bacteroidota), Caldisericales (Caldisericota), and Clostridiales (Firmicutes) (see Table ) that can survive radiation, starvation, and extreme desiccation (De Vos et al., ; Johnson et al., ). 
The loss of non‐endospore forming taxa is compounded with time causing further convergence in the composition of the permafrost microbiome across the region as the permafrost ages (see Liang et al., ; Mackelprang et al., ). Thus, the lack of substantial thaw duration and the assumed effects associated with prolonged freezing in the transition zone and permafrost soil depths accounts for the consistent negative correlations between thaw duration and the relative abundance of each of these dominant phyla (Actinobacteriota, Bacteroidota, Caldisericota, Firmicutes; Figure ) at sites associated with thaw duration measurements. Previous studies that focused on changes in soil physicochemical properties such as pH, conductivity, water content, and organic C to explain depth‐dependent variations in the soil microbiome concluded that the majority of the variation remained unexplained by these environmental factors (reviewed by Malard & Pearce, ). We also found weak correlations between soil physicochemical properties and microbial taxa (Table ). For the three sampling locations associated with multi‐decadal thaw surveys, we found a significant correlation between a physicochemical property and a certain taxon at only one or two locations instead of significant correlations at all three locations. For example, differences in soil pH by depth were significantly negatively correlated with active‐layer dominant Acidobacteriota and Alphaproteobacteria at both MAT tundra sites (Toolik and Imnavait) but not significantly correlated (positively or negatively) by soil depth at Imnavait WS tundra (Table ). Likewise, differences in soil pH by depth were significantly positively correlated with the permafrost‐dominant taxa Bacteroidota, Caldisericota, and Firmicutes at both MAT tundra sites, but not at Imnavait WS tundra (Table ). Previous studies showed that the relative abundance of Acidobacteriota had a strong positive correlation with soil pH and these taxa dominated the more acidic surface depths of the active layer before declining in abundance with depth toward the permafrost (Kim et al., ; Neufeld & Mohn, ; Wallenstein et al., ). In our results, we found a similar correlation between soil pH and Acidobacteriota with soil depth at all our MAT tundra sites. There were also significant correlations with other environmental factors and microbial taxa by soil depth at some sites (Table ); however, there were no general patterns of correlation across all sites. For example, those taxa that were significantly correlated with soil conductivity, water content, or organic C content with soil depth in Toolik MAT tundra were not the same taxa that were significantly correlated by soil depth in Imnavait MAT tundra (Table ). Inconsistent patterns occur between many taxa and the physicochemical properties measured at each sampling location (Table ), making it difficult to draw conclusions about how these environmental factors regulate soil microbial abundance in MAT or WS tundra across the region. This analysis indicates that the measured soil physicochemical properties explain a relatively small amount of the among and within site variance in microbiome composition. In contrast to soil physicochemical properties, the measured thaw frequency and especially thaw duration by depth (Figure ) showed consistent correlations with the composition of the active layer and permafrost microbiomes. The transition zone experienced less than half the number of thaw days than the average for the active layer (Table ). 
This considerable difference in thaw duration between the active layer and transition zone may explain why the transition zone microbiome was statistically indistinguishable from the permafrost microbiome (Figure ). However, as the thaw duration in the transition zone increases with future warming, we suggest that the abundance of active‐layer dominant taxa will increase including Alphaproteobacteria in MAT tundra ( Rhizobiales ), Desulfobacterota in WS tundra ( Geobacterales ), as well as Acidobacteriota ( Acidobacteriales and Solibacteriales ), Myxococcota ( Myxococcales ), Planctomycetota ( Tepidisphaerales ) and Verrucomicrobiota ( Chthoniobacterales and Pedosphaerales ) in both tundra types (Table ). We also predict that an increase in thaw duration will likely decrease the relative abundance of Actinobacteriota ( Gaiellales and Micrococcales ), Bacteroidota ( Bacteroidales ), Caldisericota ( Caldisericales ), and Firmicutes ( Clostridiales ) in the transition zone of both MAT and WS tundra (Table ). These predicted increases and decreases in taxa will shift the transition‐zone microbiome away from the current, relic permafrost microbiome and towards the active‐layer composition. For the soil microbiome analysis, 16S rRNA gene amplicons were resolved to amplicon sequence variants (ASVs) rather than operational taxonomic units (OTUs) based on recent benchmark studies comparing both sequence inference methods (Callahan et al., ; Chiarello et al., ). ASVs showed no statistical difference in taxonomic alpha diversity between sites, tundra type, or site × tundra type interactions using Chao1 and Shannon diversity metrics (ANOVA; p > 0.05 for all; Figure ) when all depths were included. However, there were significant depth‐dependent differences in taxonomic alpha diversity between soil layers when soil depths were pooled as active layer (0–40 cm MAT tundra; 0–50 cm WS tundra) or permafrost (40+ cm MAT tundra, 50+ cm WS tundra), but only when soil layers were compared across all sampling locations (ANOVA; p < 0.001). There were no significant differences in alpha diversity between active layer and permafrost soil layers within any single sampling location (ANOVA; p > 0.05 for each). Beta diversity results based on Bray–Curtis dissimilarity of ASV abundance indicated significant differences in taxonomic composition between sites, tundra type, soil layer, and their interactions (PERMANOVA; p < 0.001 for all; Figure ). Venn diagrams indicated that the greatest relative abundance of ASVs were shared between all three sites (52.8%; Figure ), with an additional 25.4% of ASV abundance shared between at least two sites (Figure ). Notably, these ASVs represented 78.2% of the relative abundance of all ASVs shared between sites but consisted of only 4550 ASVs out of 23,798 total ASVs (Figure ). Likewise, the dominant ASVs shared between tundra types (69.8%; Figure ) consisted of only 3328 ASVs out of 23,798 total ASVs (Figure ). The remaining extent of ASVs were unique to each site (19,248 ASVs; Figure ) or each tundra type (20,470 ASVs; Figure ). Sagwon had the greatest abundance of unique ASVs (13.4%) compared with Toolik (4%) and Imnavait (4.3%; Figure ), while WS tundra had more unique ASVs (19.3%) compared with MAT tundra (10.9%; Figure ). Collectively, the abundance of unique ASVs by site (21.7%) or by tundra type (30.2%) suggests the potential for regional endemism consistent with previous biogeographical microbiome surveys across the Arctic (Malard et al., ). 
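As a point of reference for the Chao1 and Shannon alpha‐diversity comparisons reported above, the sketch below shows how both metrics can be computed from a per‐sample ASV count table. It is illustrative only: the small ASV table is hypothetical, the functions implement the standard natural‐log Shannon index and the bias‐corrected Chao1 estimator in plain NumPy, and the study's actual pipeline and the subsequent ANOVA on these values are not reproduced here.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity (natural log) for one sample's ASV counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Bias-corrected Chao1 richness estimate for one sample."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())   # observed ASVs
    f1 = int((counts == 1).sum())     # singleton ASVs
    f2 = int((counts == 2).sum())     # doubleton ASVs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical ASV count table: rows are depth samples, columns are ASVs.
asv_table = np.array([
    [12, 0, 3, 1, 1, 0],
    [ 5, 2, 0, 1, 0, 1],
    [ 0, 9, 4, 2, 1, 1],
])

alpha = [(shannon(row), chao1(row)) for row in asv_table]
print(alpha)
```

In practice, one value of each metric per sample would then be compared across sites, tundra types and soil layers with ANOVA, as described above.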
Differences in ASV abundance by individual soil depths across all sites and tundra types were visualized through unconstrained non‐metric multidimensional scaling (NMDS) ordination based on Bray–Curtis dissimilarity (Figure ). At Toolik and Imnavait sites, it appears that the relative abundance of ASVs by soil depth are strongly influenced by tundra types shared between sites rather than between tundra types within each site (Figure ). Notably, we found that the soil depths corresponding to MAT tundra or WS tundra overlapped between Toolik and Imnavait sites in ordination space based on their shared abundance of ASVs (Figure ). Beta diversity measurements also indicated no significant difference in taxonomic abundance of ASVs shared between these sites based on tundra type (PERMANOVA; p > 0.05 for each), while there were significant differences between tundra types within each site (PERMANOVA; p < 0.05 for each; Figure ). These results are likely due to differences in the dominant plant species known to affect soil physicochemical properties that regulate microbiome composition (Judd et al., ; Judd & Kling, ; Taş et al., ; Zak & Kling, ). For example, moist acidic tussock tundra is dominated by sedges ( Eriophorum vaginatum ) and dwarf shrubs ( Betula nana , Ledum palustre ) on hillslopes with greater soil water drainage than wet sedge tundra, which is dominated entirely by sedges ( Carex aquatilis , C. chordorrhiza , C. rotunda ) in valley bottoms where soil water accumulates (Walker et al., ). The ecological differences in dominant plant species between MAT and WS tundra affect soil physicochemical properties such as soil pH, water content, and the accumulation of organic C (Table ) and likely account for the overlapping ordination of soil depths by tundra type between Toolik and Imnavait sampling locations (Figure ). Our results are also consistent with numerous studies showing regional variations in taxonomic abundance linked to tundra type (Chu et al., ; Judd et al., ; Tripathi et al., ; Wallenstein et al., ; Zak & Kling, ). In contrast, we found overlapping ordinations for soil depths corresponding to MAT and WS tundra at Sagwon based on shared ASV abundance within the site (Figure ). However, there was a significant difference in taxonomic beta diversity based on the abundance of ASVs shared between tundra types at Sagwon (PERMANOVA; p < 0.001; Figure ) that may account for the distinct clustering of MAT tundra soil depths within the more dispersed cluster of WS tundra depths (Figure ). The Sagwon landscape is ~2500 kyr BP in age compared to the youngest landscape at Toolik (~14 kyr BP) and the intermediate‐aged Imnavait (~250 kyr BP), and our sites represent a natural Pleistocene chronosequence across the sampling region (Walker et al., ). The greater proportion of ASVs unique to the considerably older Sagwon landscape (13.4%; Figure ) compared to the younger aged landscapes at Toolik and Imnavait (4%, 4.3%, respectively; Figure ) is also consistent with previous arctic studies showing substantial variations in taxonomic abundance by landscape age due to differences in soil physicochemical properties that develop over geologic time (Mackelprang et al., ; Saidi‐Mehrabad et al., ). 
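For readers unfamiliar with the ordination used above, the following minimal sketch shows how an unconstrained NMDS on Bray–Curtis dissimilarities is commonly computed with SciPy and scikit‐learn. The random relative‐abundance matrix merely stands in for the real ASV table, and the PERMANOVA tests reported in the text would be run separately on the same distance matrix (for example with scikit‐bio); none of this reproduces the study's own workflow.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical ASV relative-abundance matrix: 12 soil-depth samples x 50 ASVs.
rng = np.random.default_rng(0)
abundance = rng.dirichlet(np.ones(50), size=12)

# Pairwise Bray-Curtis dissimilarity between samples.
bray = squareform(pdist(abundance, metric="braycurtis"))

# Two-dimensional non-metric multidimensional scaling (NMDS) ordination.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=10, random_state=0)
coords = nmds.fit_transform(bray)
print(coords.shape)  # (12, 2): one ordination point per sample
```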
Here, we found that soil pH and conductivity were significantly greater at Sagwon compared to the other sites (ANOVA; p < 0.05; Table ), which may account for the higher proportion of unique ASVs at Sagwon (Figure ) and the overlapping soil depths by tundra types within Sagwon that ordinated separately from the same tundra types shared between Toolik and Imnavait (Figure ). Soil pH has previously been identified as a key factor influencing the abundance of microbial taxa over both small and large geographic scales in the Arctic (Chu et al., ; Ganzert et al., ; Siciliano et al., ), and pH can vary by landscape age (Hultman et al., ; Mackelprang et al., ; Saidi‐Mehrabad et al., ). Alternatively, the greater geographic distance between Sagwon and Toolik or Imnavait (~90 km; Figure ) may have influenced microbial distribution (e.g. via dispersal limitations) across the region such that the effects of tundra type on ASV abundance by soil depths at Sagwon was minimized in our NMDS ordination (Figure ). All these overlapping ordination patterns of soil depths based on ASV abundance in the unconstrained NMDS plot were consistent with the relative abundance of shared and unique ASVs shown in the Venn diagrams (Figure ) and with the significant differences measured in taxonomic beta diversity (Figure ). Furthermore, the surface depth (0–10 cm) from five of six sampling locations (Sagwon MAT tundra as the exception) ordinated separately from all other depths within their respective soil profiles and coalesced near each other in ordination space (grey ellipse; Figure ). This also included the 10–20 cm surface depth for MAT tundra at Toolik and Imnavait (Figure ), which together indicate a unique taxonomic composition within the surface depths that is consistent across sampling locations. This strong pattern of shared taxa in surface depths across the tundra region regardless of tundra type or landscape age has not been shown before and could imply that a common dispersal mechanism such as wind is homogenizing the microbiome composition at the soil surface, and that this mechanism is strong enough to overcome the effect of differences in dominant plant species by tundra type. ASV annotations, derived from the SILVA database (v. 138; Quast et al., ), indicated that the soil microbiome at all sampling locations with all depths combined was dominated by bacteria (>98% relative abundance). However, the relative abundance of dominant bacterial taxa changed with depth at all sampling locations (Figure ). Using hierarchical clustering analysis on depth‐dependent differences in bacterial abundance, we found that all soil profiles clustered into several distinct soil layers (Figure ). Figure delineates distinct soil layers within each soil profile with black horizontal lines between soil depths as determined from hierarchical clustering analysis (Figure ) to visualize the depth‐dependent differences in bacterial abundance along the soil profiles. For Toolik and Imnavait MAT tundra, we consider the distinct soil clusters from 0 to 40 cm depth to represent the active layer, while the active layer in Sagwon MAT tundra ranged from 0 to 30 cm depth (Figure ). There was also an additional cluster of soil depths within the active layer of Toolik and Imnavait MAT tundra where the abundance of bacterial taxa clustered by different soil types (Figure ). 
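The hierarchical clustering of soil depths referred to above can be outlined as follows. The example is a sketch under assumed inputs: the relative‐abundance matrix for a single soil profile is invented, average‐linkage clustering on Bray–Curtis distances is one reasonable choice rather than necessarily the study's exact settings, and the significance testing of the resulting clusters is not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical taxon relative abundances for one profile:
# rows = 10-cm depth increments, columns = dominant taxa.
profile = np.array([
    [0.40, 0.10, 0.05, 0.45],   # 0-10 cm
    [0.38, 0.12, 0.07, 0.43],   # 10-20 cm
    [0.30, 0.15, 0.15, 0.40],   # 20-30 cm
    [0.28, 0.17, 0.15, 0.40],   # 30-40 cm
    [0.10, 0.35, 0.35, 0.20],   # 40-50 cm
    [0.08, 0.37, 0.38, 0.17],   # 50-60 cm
])

# Cluster depths by the similarity of their taxonomic composition.
distances = pdist(profile, metric="braycurtis")
tree = linkage(distances, method="average")

# Cut the tree into two groups; contiguous depths sharing a label are
# read as a distinct soil layer (e.g. active layer vs. permafrost).
layers = fcluster(tree, t=2, criterion="maxclust")
print(layers)
```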
Specifically, the surface soil depths (0–20 cm) were composed of organic soil and clustered distinctly from the lower half of the active layer (20–40 cm) that was composed of mineral soil (Figure ). Likewise, the Sagwon MAT tundra active layer (0–30 cm) was composed of organic soil that clustered distinctly from the permafrost depths (40+ cm) that were composed of mineral soil (Figure ). Similar results were also found in previous tundra studies where bacterial abundance was strongly related to a shift in substrate availability between the organic and mineral soil horizons (Deng et al., ; Koyama et al., ). The change from organic to mineral soil type could also account for the distinct ordination patterns of surface depth MAT tundra samples in our NMDS plot (0–10 cm, 10–20 cm; Figure ). The distinct cluster of bacterial taxa after 40‐cm depth (30‐cm Sagwon) in MAT tundra (Figure ) was not associated with a change in soil type because all soil depths below 20 cm (30‐cm Sagwon) were composed of mineral soil (Figure ). Thus, the distinct cluster after 40‐cm depth (30‐cm Sagwon) likely indicates the average permafrost boundary (described below). In contrast, the WS tundra soil profiles were composed entirely of organic soil (Figure ); thus, the clustering patterns of shared bacterial taxa into distinct soil layers are not due to major changes in soil type with depth (Figure ). Rather, these clustering patterns are likely due to the substantial differences in taxonomic abundance at the surface‐most depth (0–10 cm in Toolik and Imnavait WS; Figure ) and additional differences at the permafrost boundary (50+ cm Toolik WS; 60+ cm Imnavait WS; 40+ cm Sagwon WS; Figure ). These significant taxonomic differences at the permafrost boundary are likely due to the limitations imposed on the microbiome from prolonged freezing (see also Kraft et al., ; Liang et al., ; Willerslev et al., ). Imnavait WS tundra was the only soil profile with several non‐significant clusters of bacterial taxa by distinct depths, with only the 40–60 cm depths forming a significantly distinct cluster based on the relative abundance of shared bacterial taxa within these soil depths (95% similarity; Figure ). The observed shift in composition between the active layer and permafrost microbiomes at all sites was largely due to variations in the abundance of Acidobacteriota, Actinobacteriota, and Bacteroidota (Figure ), generally consistent with previous studies in arctic soils (Deng et al., ; Frank‐Fahle et al., ; Kim et al., ; Müller et al., ; Singh et al., ; Tripathi et al., , ; Varsadiya et al., ; Wilhelm et al., ). For example, we found that Acidobacteriales , Solibacteriales , and Vicinamibacterales (Acidobacteriota) abundance was greater in the active‐layer microbiome compared to the permafrost microbiome, especially within MAT tundra, while Gaiellales , Micrococcales , RBG‐16‐55‐12, and WCHB1‐81 (Actinobacteriota), as well as Bacteroidales and Sphingobacteriales (Bacteroidota) abundance was greater in the permafrost microbiome of both tundra types (ANOVA; p < 0.05 where indicated; Table ). However, we did find that dominant Acidobacteriota taxa in the active‐layer soil depths differed between Toolik and Imnavait MAT tundra compared to Sagwon MAT tundra (Table ). Specifically, Acidobacteriota within the active‐layer microbiome of Toolik and Imnavait MAT tundra consisted of Acidobacteriales (7.8%–10.4%) and Solibacteriales (1.6%–1.7%), with less than 1.5% relative abundance of Vicinamibacterales (Table ). 
The Sagwon MAT tundra active‐layer microbiome contained no measurable abundance of Acidobacteriales or Solibacteriales but had ~6.5% of Vicinamibacterales (Table ). The active‐layer microbiome within Sagwon WS tundra was similar and contained ~3.1% relative abundance of Vicinamibacterales (Table ). The similarly high relative abundance of Vicinamibacterales within Sagwon MAT and WS tundra may also account, in part, for their shared ordination patterns in our NMDS plot that clustered distinctly from either tundra type at Toolik and Imnavait (Figure ). In addition to the Acidobacteriota, the relative abundance of Pseudomonadota (formerly Proteobacteria) such as Rhizobiales (Alphaproteobacteria) and Burkholderiales (Gammaproteobacteria), as well as Geobacterales (Desulfobacterota) and Verrucomicrobiota including Chthoniobacterales and Pedosphaerales were greater in the active‐layer microbiome across the region (Table ), consistent with studies conducted at numerous sites across the Arctic (reviewed by Malard & Pearce, ). The higher abundance of Pseudomonadota in the active layer could be related to their preference for higher concentrations of nutrients (Kim et al., ), especially given that the relative abundance of Alphaproteobacteria and Gammaproteobacteria taxa were shown to increase after fertilization compared with control plots at Toolik (Campbell et al., ; Koyama et al., ). Also, the greater abundance of bacterial taxa such as Geobacterales (Desulfobacterota) in the active‐layer microbiome of WS tundra (Table ) is likely because they thrive under saturated conditions common in the active layer of WS tundra (Emerson et al., ), as has been previously reported from the Toolik Lake region (Romanowicz et al., ). This suggests that the depth‐dependent variations in the soil microbiome could be related to the different resource needs of each bacterial taxon. In contrast, the permafrost microbiome was similar across sites due to an increase in the relative abundance of Actinobacteriota and Bacteroidota (Figure ), as mentioned earlier. In addition, the relative abundance of Caldisericota and Firmicutes also increased in the permafrost microbiome (Figure ) and clustered together consistently with the greater relative abundance of Actinobacteriota and Bacteroidota with depth (Figure ). We note that Caldisericota ( Caldisericales ) in the permafrost microbiome (Figure ; Table ), also reported in numerous arctic studies (Monteux et al., ; Taş et al., ; Tripathi et al., , ; Varsadiya et al., ), was recently proposed as Candidatus Cryosericota phylum (Martinez et al., ), although we retain the taxonomic annotation as Caldisericota throughout this study (see Appendix for full details). The high relative abundance of Actinobacteriota such as Gaiellales and Micrococcales in the permafrost microbiome (Table ) is likely due to their ability to form dormant and spore‐like structures (Wunderlin et al., ) that can survive radiation, starvation, and extreme desiccation (De Vos et al., ; Johnson et al., ). Likewise, Bacteroidales (Bacteroidota) and Clostridiales (Firmicutes) are known to form endospores under the stressful conditions associated with permafrost, and they persist at higher relative abundance than non‐endospore forming taxa common in the active‐layer microbiome that perish in the harsh conditions (Burkert et al., ). 
This loss of non‐endospore forming taxa is compounded with time causing further convergence in the composition of the permafrost microbiome across the region as the permafrost ages (see Liang et al., ; Mackelprang et al., ). Total cell counts by depth along each soil profile showed that the absolute abundance of bacterial cells was similar between the active layer and permafrost depths (~10⁶ to 10⁸ cells per gram of soil; Table ), and only the relative abundance of bacterial taxa changed with depth (Figure ). Likewise, bacterial cell viability assays (% live cells) showed similar numbers of live cells with increasing soil depth across all sampling locations (Table ). Thus, the permafrost microbiome maintains a similar abundance of bacterial cells with similar proportions of live cells compared to the active‐layer microbiome. However, the permafrost microbiome has developed into a relic composition consisting of only a subset of bacterial taxa originally present at the time of permafrost formation, such as has been shown previously (Burkert et al., ; Liang et al., ). Our results strongly suggest that the permafrost microbiome has converged toward a shared subset of bacterial taxa capable of withstanding the harsh permafrost environment, and these taxa are no longer regulated by the same environmental factors affecting the overlying active‐layer microbiomes across the region. The relative abundance of archaeal taxa was <3% at any given site or depth (Figure , Figure ) but still differed statistically between tundra types (ANOVA; p < 0.001). Archaeal taxa consisted primarily of methanogenic Euryarchaeota such as Methanobacteriales and Methanosarcinales , consistent with similar observations across the tundra region (Deng et al., ; Hultman et al., ; Lipson et al., ; Romanowicz et al., ; Tripathi et al., ). Methanobacteriales are hydrogenotrophic methanogens and Methanosarcinales are acetoclastic methanogens, both of which are commonly found in saturated organic peat soils (Conrad et al., ; Deng et al., ; Metje & Frenzel, ; Tveit et al., ). Here, these methanogenic archaeal taxa were found predominantly in the surface depths (0–50 cm) of WS tundra at Toolik and Imnavait, while archaeal taxa in general were negligible in MAT tundra across all sites (Figure ). The greater relative abundance of both groups of methanogenic archaea in the WS tundra microbiome is likely due to relatively flat topography and the lack of soil drainage imposed by the permafrost boundary, both of which facilitate persistent saturation of the active layer and subsequent anoxia, providing substrates to carry out multiple fermentative pathways of methanogenesis. Thaw depth measurements collected at three of our sampling sites annually in July and August since 1990 (Toolik MAT tundra) or 2003 (Imnavait MAT and WS tundra; see Appendix ) show the annual and seasonal extent of thawed versus frozen soil (Table ). The mean August thaw depth (±SD) over the survey duration was 40.5 cm (±5.3 cm) for Toolik MAT tundra, 43.9 cm (±6.2 cm) for Imnavait MAT tundra, and 56.3 cm (±5.7 cm) for Imnavait WS tundra. Thaw depth increased at a mean rate of 0.34 cm yr⁻¹ for Toolik MAT tundra, 0.85 cm yr⁻¹ for Imnavait MAT tundra, and 0.84 cm yr⁻¹ for Imnavait WS tundra. The thaw survey data over time show the frequency of summer thaw to any depth, and we converted thaw depth measurements (cm) into thaw probabilities (% of thaw occurrence in any 1 year) for each 10‐cm increment of the soil profile over the survey duration (Table ). 
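To illustrate the conversion of annual thaw‐depth measurements into per‐increment thaw probabilities described above, a minimal sketch is given below. The rule used here, counting an increment as thawed in a given year when the August thaw front reaches at least the bottom of that 10‐cm increment, is an assumption for the example rather than the study's documented convention, and the survey values are invented.

```python
import numpy as np

def thaw_probabilities(august_thaw_depths_cm, increments=range(0, 100, 10)):
    """Fraction of survey years in which each 10-cm increment thawed,
    treating an increment as thawed when the annual maximum (August)
    thaw depth reached at least its bottom boundary (an assumed rule)."""
    depths = np.asarray(august_thaw_depths_cm, dtype=float)
    probs = {}
    for top in increments:
        bottom = top + 10
        probs[f"{top}-{bottom} cm"] = float((depths >= bottom).mean())
    return probs

# Hypothetical annual August thaw depths (cm) over a multi-year survey.
survey = [38, 41, 44, 39, 47, 52, 43, 45, 50, 48]
print(thaw_probabilities(survey))
```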
Depths that thawed intermittently in August (i.e. not every year) were considered to be in the ‘transition zone’ between high probability of thaw in a year for the active layer and low probability of thaw in a year for the permafrost. Within MAT tundra, thaw probabilities for July and August in the soil profiles were similar between Toolik and Imnavait, with the active layer extending from 0 to 40 cm, the transition zone from 40 to 60 cm, and permafrost from 60+ cm depth (Table ). For Imnavait WS tundra, we found the active layer extended from 0 to 50 cm, the transition zone from 50 to 70 cm, and permafrost from 70+ cm depth (Table ). None of the transition‐zone depths at any sampling location experienced thaw at the July sampling point for the entire duration of thaw surveys (July thaw probability = 0%; Table ). By comparing soil depths within each soil layer as determined from thaw surveys, with results from the hierarchical clustering analysis of microbial taxa (Figure ), we demonstrate statistically that microbial composition in the transition‐zone depths always clusters with the composition of the permafrost depths for the three sampling locations having thaw survey data. This indicates that the microbiome composition of the transition zone was statistically indistinguishable from the composition in the permafrost. Results from the transition‐zone microbiome confirm our prediction that the permafrost microbiome has not been substantially altered in composition when exposed to intermittent freeze–thaw cycles. This differs from previous laboratory‐based experiments of the permafrost microbiome that show rapid shifts in the composition and associated biogeochemical functions of microbial taxa within a few days of simulated thaw (Coolen & Orsi, ; Hultman et al., ; Mackelprang et al., ; Waldrop et al., ). Our results are more consistent with previous field‐based warming experiments that show little or no change in permafrost microbiome composition following moderate heating of the active layer that mimics its natural extension into the permafrost (Biasi et al., ; Lamb et al., ; Rinnan et al., ). As such, this study demonstrates that in natural settings, it takes more than just intermittent thaw, for example ~0.1–0.8 probability that the soil will thaw in any single year (Table ), to induce compositional change in the transition‐zone microbiome. Our transition‐zone microbiome results also contrast with a previous soil profile survey from high Arctic heath at Svalbard, Norway (Müller et al., ), where microbial taxa in the thaw transition zone differed in their relative abundance by >60% from the permafrost microbiome. This difference could be because their transition zone is narrower than ours, or they had finer resolution of the soil profile sampling at 3‐cm intervals rather than the 10‐cm interval we used. Homogenization of microbial taxa within a 10‐cm soil increment that spans the depths of the transition zone and permafrost soil layers may have hidden subtle changes in taxonomic abundance between these soil layers. However, our thaw surveys showed that depths with intermittent thaw ranged from a minimum of 15 cm (Imnavait WS tundra) up to 23 cm (Toolik MAT tundra) through the soil profiles; these depths exceed any single 10‐cm soil increment, and thus potential homogenization of the transition‐zone depths with the permafrost depths should be minimized. 
These results are the first to show that composition of the transition‐zone microbiome has not significantly shifted from the permafrost microbiome even though these soil depths have experienced intermittent thaw for on average 61% of the past 28 years at Toolik or 65%–82% of the past 15 years at Imnavait (Table ). In addition to the frequency of thaw, we determined the thaw duration (i.e. minimum number of thaw days in a summer) that each 10‐cm increment along the soil profile experienced in 2018 (Table ); the year 2018 corresponds to the field season when soil profiles were sampled for their microbiome composition. These are ‘minimum’ estimates because a depth may have thawed in between successive thaw survey dates or after the final survey (see Appendix ). The transition‐zone depths had less than half the number of thaw days than the average thaw days of the active layer. Specifically, the Toolik MAT tundra transition zone (40–60 cm) on average thawed only a quarter of the time (24.7%) that the active layer (0–40 cm) thawed (Table ). In Imnavait MAT tundra the transition zone (40–60 cm) had only 25.5% of mean active layer (0–40 cm) thaw days, and Imnavait WS tundra transition zone (50–70 cm) had 43.9% of mean active layer (0–50 cm) thaw days (Table ). This decline of thaw duration in the transition zone depths compared to the active‐layer depths, as well as the complete lack of thaw in the transition zone depths in July over all survey years, may account for the similar composition of the transition zone microbiome with the permafrost microbiome in hierarchical clustering analysis (Figure ). Spearman's non‐parametric rank correlations between thaw duration and the relative abundance of microbial taxa by depth along the soil profiles showed consistent, significant patterns between the three sampling locations associated with thaw survey data. 
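A minimal sketch of the Spearman rank‐correlation test described above is given below using SciPy. The per‐depth thaw durations and relative abundances are invented solely to illustrate the expected sign of the relationships (positive for an active‐layer‐dominant phylum, negative for a permafrost‐dominant one); they are not values from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical values for one sampling location, by 10-cm depth increment:
# 2018 thaw duration (days) and relative abundance (%) of two phyla.
thaw_days = np.array([110, 95, 80, 55, 20, 0, 0, 0])
acidobacteriota = np.array([14.0, 12.5, 11.0, 8.0, 4.0, 2.5, 2.0, 1.8])
actinobacteriota = np.array([5.0, 6.0, 8.0, 12.0, 20.0, 26.0, 28.0, 29.0])

rho_pos, p_pos = spearmanr(thaw_days, acidobacteriota)
rho_neg, p_neg = spearmanr(thaw_days, actinobacteriota)
print(f"Acidobacteriota vs thaw duration: rho = {rho_pos:.2f}, p = {p_pos:.3f}")
print(f"Actinobacteriota vs thaw duration: rho = {rho_neg:.2f}, p = {p_neg:.3f}")
```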
Results from our study are the first to show that thaw duration is a strong environmental factor that operates over time and space to regulate the microbial composition of the active layer, transition zone, and permafrost in soils of arctic tundra. The duration of thaw that soils experience each summer correlates better than does soil physicochemistry with the dominant microbial taxa among regional sites, tundra types, and soil depth. Furthermore, long‐term thaw surveys indicate that while thaw depth is increasing over time, the transition‐zone microbiome is still very similar to the permafrost microbiome. This suggests that thaw frequency and duration in the transition zone are still too low to shift the transition‐zone microbiome composition away from that in the permafrost and towards that in the active layer. As climate warming increases and thaw frequency and especially thaw duration increases at depth, we predict that for any tundra soil the microbiome composition (and thus function) will shift from the relic permafrost taxa towards current active‐layer taxa. These shifts should follow the current thaw duration and microbial composition at each depth (e.g. Table , Figure ). Monitoring thaw duration and microbiome composition at depth may help predict the microbial response to future permafrost thaw. The author declares that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. Appendix S1: Supplementary Information.
Measuring benefit from non‐surgical interventions in otolaryngology for different conditions, using the revised 5‐factor Glasgow Benefit Inventory
55741050-a405-4f74-943b-6d372eab3200
10092363
Otolaryngology[mh]
INTRODUCTION Originally described in 1996, the Glasgow Benefit Inventory (GBI) is an 18‐item questionnaire for measuring patient benefit after otorhinolaryngological (ORL) interventions. Administered after intervention, it measures the change in health status, whether positive (benefit) or negative (harm). It was designed to be patient‐orientated, sensitive to change after intervention and suitable for comparing different interventions. Because it requires no measurement before the intervention, it is easy to use and adaptable to various clinical situations. Since 1996, the GBI has been used on a wide range of ORL surgical operations, with Hendry et al reviewing 117 reports up to January 2015. We recently described a shorter (15‐question) version of the GBI which we refer to as 5‐factor GBI ( GBI‐5F ). This has five factor scores which give more detailed information on the specific areas of patient benefit. SENTOS was a prospective cohort study of patients attending outpatient ORL clinics at six Scottish NHS hospitals between 2001 and 2005. At that time, all audiological referrals were made via ORL. The study administered two outcome measures: the Health Utilities Index mark 3 (HUI‐3) and the GBI. Only the HUI‐3 results have been reported in detail. GBI questionnaires were completed by 4543 SENTOS participants 3–6 months after intervention, giving a considerable dataset for analysis. This enables study of patient‐reported benefit from a wide range of interventions. To date, only one paper has reported a non‐surgical intervention (provision of hearing aids). The article's objective is to report the use of the GBI on a wider range of non‐surgical interventions. Our aim is, firstly, to demonstrate that non‐surgical interventions have measurable patient benefit and, secondly, that the five factors of the new GBI‐5F can give useful information on the pattern of patient benefit that is seen in different clinical situations. METHODS The dataset comprised GBI responses obtained from adult patients (16 years or older) attending an NHS Academic ORL outpatient appointment and completing the GBI for the SENTOS study. Details of this cohort have been published previously. Briefly, 9005 adult patients attending ORL outpatient clinics in one of six Scottish hospitals between 2001 and 2005 were sent the HUI‐3 and GBI questionnaires to complete sometime after the hospital attendance: 6 months later if they underwent surgery or were given hearing aids, 3 months later if they were managed medically or with no active intervention. The HUI‐3 results have already been reported in detail and will not be discussed further here. The participants completed the original 18‐question GBI, but 3 questions (Q9, Q10 and Q14) were removed to fit the GBI‐5F scheme. The five factors are Quality of life, Support, Social involvement, Self‐confidence and General health . Each of these, as well as the overall score, is calculated by scoring the responses to each question on a 5‐point scale from −2 to +2, adding up the question scores and then re‐scaling the result from −100 (maximum possible harm) to +100 (maximum possible benefit) and centred on 0 (no change). The total score and factor score each stand alone: they are not sub‐scale scores in the sense that the total score cannot be calculated by adding up the factor scores, for example. Data were analysed using SPSS version 26 (IBM Corporation, Armonk, New York, USA). 
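A minimal sketch of the scoring scheme just described is shown below: each item response, coded from −2 to +2, is summed and the total rescaled so that the score runs from −100 to +100 and is centred on 0. The function and the example responses are hypothetical, and the three‐item slice used for the factor score is arbitrary because the mapping of individual questions to the five factors is not detailed here; it matches the +16.67 per‐item change quoted later only on the assumption that each factor contains three questions.

```python
def gbi_score(responses):
    """Rescale GBI item responses (each coded -2..+2) to the -100..+100
    benefit scale: sum the item scores, divide by the maximum possible
    sum (2 x number of items), and multiply by 100."""
    if not responses:
        raise ValueError("no responses supplied")
    if any(r not in (-2, -1, 0, 1, 2) for r in responses):
        raise ValueError("responses must be coded from -2 to +2")
    return 100.0 * sum(responses) / (2 * len(responses))

# A hypothetical completed GBI-5F (15 items): mostly 'no change' with a few
# one- or two-point improvements.
answers = [0, 1, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

total = gbi_score(answers)          # overall score over all 15 items
factor = gbi_score(answers[:3])     # a factor score over its (assumed) 3 items
print(round(total, 1), round(factor, 1))   # 16.7 16.7
```

With 15 items, a one‐point change in a single answer moves the overall score by 100/(2 × 15) ≈ 3.33, consistent with the figure quoted later in the discussion.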
Statistical comparisons were made using Kruskal‐Wallis one‐way ANOVA and Mann–Whitney U ‐test where appropriate. This report referenced to the COSMIN guideline for patient‐reported outcome measures. RESULTS 3.1 Patient characteristics Of the 9005 participants in the SENTOS study, 1774 (19.7%) were coded as undergoing surgery. The remaining 7231 (80.3%) did not. In total, 40 of the questionnaires had been administered to children aged 14 and 15 years, and these were excluded along with five adults undergoing cancer radiotherapy. Of the remaining adults, 4543 of 8960 (51%) completed a GBI questionnaire 6 months after completing treatment. This comprised 1939 men (42.7%) and 2604 women (57.3%), with a median age of 55 years (mean 54 years, range 16–101 years). Patients with a single clear diagnosis and a single intervention were identified. Those with combined interventions (e.g., ear medication and a hearing aid) were excluded. Patients referred outside ORL/audiology to other departments, including physiotherapy ( n = 39) and speech therapy ( n = 109), were excluded. We identified a series of common diagnosis or intervention groups from the dataset for which there was no surgical treatment option and for which there were at least 50 patients for analysis. This gave us the following patient groups: sensorineural hearing loss, conductive hearing loss managed with hearing aids, tinnitus, benign paroxysmal positional vertigo, otitis externa and laryngo‐pharyngeal reflux, plus a large heterogenous group of patients managed by means of reassurance and advice without any active intervention. 3.2 No active intervention/reassurance SENTOS contains a large group of patients coded as receiving ‘reassurance’ or ‘advice on self‐management’ with no active medical or surgical intervention. There were 1373 such patients (30% of those with a completed GBI), 550 men (40.1%) and 823 women with a median age of 55 years (range 16–93, mean 53.77 years). Their primary presenting symptoms included hearing impairment (370 cases, 26.9%), dizziness (217, 15.8%), tinnitus (140, 10.2%), otalgia (110, 8%), hoarseness (94, 6.8%), lump in throat (75, 5.5%) and sore throat (69, 5%). The most common diagnoses then given were ‘no abnormality demonstrated’ (335, 24.4%), ‘bilateral sensorineural hearing loss’ (294, 21.4%), ‘somatoform disease including hyperventilation and globus hystericus/pharyngeus’ (78, 5.7%), “tinnitus” (74, 5.4%), ‘dizziness and light‐headedness’ (50, 3.6%) and vestibular neuronitis (45, 3.3%). Apart from a very small number of outliers reporting large benefits and harms, most patients report no change in any factor, with 80% scoring zero for Support , 65% for General health , 60% for Quality of life , 80% for Self‐confidence and 77% for Social involvement . For the total score, 42% score exactly 0 and 62% score between −3.3 and +3.3 (Table ). 3.3 Sensorineural hearing loss There were 774 patients coded as having a bilateral sensorineural hearing loss, of whom 480 received hearing aids and 294 received only reassurance and advice. Benefit is greater in those given hearing aids, with the difference between their scores and those having reassurance and advice being statistically significant for all factors except General health (Figure ). 3.4 Comparison of hearing aid benefit between conductive and sensorineural impairments To make this comparison, a large cohort of those with a presumptive conductive impairment was required. 
A total of 72 patients were identified with a middle ear condition (28 otosclerosis, 19 inactive mucosal chronic otitis media, 17 other middle ear disorders such as adhesive otitis media and 8 previous middle ear surgery) for whom the provision of an aid was the management. Comparison of benefit ( Figure ) showed that the Quality of life benefit is significantly greater in those with a conductive impairment ( n = 72) than those with a sensorineural impairment ( n = 480). There is no significant difference in the other four factors or the overall score. 3.5 Interventions for benign paroxysmal positional vertigo Of the 53 patients diagnosed with benign paroxysmal positional vertigo (BPPV), 18 patients were treated with reassurance and advice only, and 35 patients received an Epley or Semont manoeuvre with no other intervention. There is a significant difference in the total GBI‐5F score and the Quality of life factor score (Figure ), with both of these being higher in the group receiving an otolith repositioning manoeuvre. 3.6 Interventions for tinnitus Of the 102 adults with tinnitus, 28 were provided with a tinnitus masker or hearing aid and 74 given reassurance alone. There was a small improvement in Support in those given a hearing aid or masker compared with those just given reassurance (Mann–Whitney U ‐test, p = .034), but no other differences were identified (Figure ). 3.7 Interventions for otitis externa There were 123 patients diagnosed with otitis externa, of whom 89 were prescribed topical medications and 34 received only reassurance and advice. There is no difference in GBI‐5F total or factor scores between the two treatments, albeit both groups reported positive total scores and Quality of life factor scores ( Figure ) . 3.8 Interventions for laryngo‐pharyngeal reflux There were 195 patients with symptoms attributed to laryngo‐pharyngeal reflux of whom 176 were solely prescribed medication and 19 solely given reassurance. There is no significant difference between medication and reassurance for the total GBI‐5F score or any of the factor scores. 
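The group comparisons reported in these results can be illustrated with SciPy equivalents of the tests named in the methods (the study itself used SPSS). The GBI‐5F totals below are invented scores for hypothetical treatment groups and are included only to show how a Mann–Whitney U‐test, or a Kruskal–Wallis test for three or more groups, would be applied to such data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical GBI-5F total scores for two management groups
# (e.g. hearing aid vs. reassurance only); values are illustrative.
hearing_aid = np.array([23.3, 16.7, 30.0, 10.0, 36.7, 20.0, 6.7, 26.7])
reassurance = np.array([0.0, 3.3, -3.3, 0.0, 6.7, 0.0, 3.3, -6.7])

u_stat, p_two = mannwhitneyu(hearing_aid, reassurance, alternative="two-sided")

# Kruskal-Wallis one-way ANOVA extends the comparison to three or more groups.
masker = np.array([3.3, 6.7, 0.0, 10.0, 0.0, 3.3])
h_stat, p_three = kruskal(hearing_aid, reassurance, masker)

print(f"Mann-Whitney U p = {p_two:.4f}; Kruskal-Wallis p = {p_three:.4f}")
```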
DISCUSSION The GBI was developed to be applicable to interventions in ORL and audiology. To‐date, it has been primarily used and found valid for surgical interventions. The main objective of this article is to investigate its applicability to a wider range of ORL and audiological interventions, and specifically to do this with the recently reported version, GBI‐5F . For this, we are fortunate to have a prospective national audit of 9005 adults managed in ORL and audiology departments, describing the benefit from treatment as reported by patients. Of these patients, 4543 (51%) completed a GBI questionnaire 3–6 months later and were considered by the authors of the original study to be a representative subset of all the adults attending. Of course, they are only a subset of all the patients seen during the study period and we cannot know to what extent the patients we report on here are truly representative of all patients with these particular conditions and interventions. We cannot say that the GBI‐5F results we report here would be typical for all patients with these conditions, but we can say that at least some patients with these conditions will produce scores like the ones we report. As our main intention is to show that the five‐factor scores of the GBI‐5F can be used to demonstrate the pattern of areas of benefit after different non‐surgical interventions, the question of how representative the patients are is a secondary concern. It is for future studies to report on these conditions and interventions in more detail and with reference to clinical information such as age, sex, disease severity and presenting symptoms. In total, 80% of patients were managed without any surgery. Our data show for the first time that non‐surgical interventions can be shown to have large benefits using the GBI‐5F . The most striking example is that of otolith repositioning manoeuvres for BPPV, which produce benefit in the overall score and the Quality of life factor which are similar in magnitude to the benefits seen from the surgical treatment of conditions such as nasal polyps and tonsillitis. This would be in keeping with the dramatic and instant relief of disabling symptoms that such manoeuvres can produce. Hearing aid provision is another non‐surgical intervention that produces significant, measurable benefit for patients with hearing loss. Those with a conductive impairment have greater Quality of life benefit than those with a sensorineural impairment. While we cannot control for the severity of the hearing loss in each group of patients, laboratory studies suggest that hearing aids should be more beneficial for conductive hearing impairment. More surprising, perhaps, is the lack of improvement in Social involvement , which is a finding that requires further investigation. 
The benefit for patients with tinnitus who are given a hearing aid or a masking device is in one specific factor area ( Support ) compared with the more generalised benefit reported by patients with hearing loss (all factors except General health ). This serves to show how five factors of the GBI‐5F can shed light on the details of how they derive benefit from specific interventions. Of the 4543 adults with a GBI questionnaire submitted, 1373 (30%) received reassurance and advice on self‐management with no active therapy. It is important to report on these patients as they form such a large proportion of patients seen in ORL clinics. This may be because they have a condition which has settled symptomatically since referral, or because they have symptoms so minor that they do not merit active intervention. As the GBI measures a change due to an intervention, it is not surprising that the GBI‐5F total and factor scores were not significantly different from zero for this group. The small positive score in the Quality of life factor is perhaps due to the patient being reassured that there is no serious disease. It also illustrates that, in the majority of patients, the decision not to prescribe any active intervention did not lead to any harm for the patient, as any clinical deterioration over the subsequent 6 months would have produced negative scores. For otitis externa and laryngo‐pharyngeal reflux, where there is no surgical option, medical therapies in general show no greater benefit than reassurance. This does not necessarily indicate that they are ineffective, although that could well be the case for laryngo‐pharyngeal reflux given recent evidence on the ineffectiveness of proton pump inhibitors for throat symptoms. For both these conditions, many patients have already been commenced on medication by their general practitioner prior to specialist referral: telling them to continue with medication is unlikely to produce a large reported benefit. Additionally, Q11 of the GBI‐5F , part of the General Health factor, specifically asks about medication intake, hence any intervention increasing medication will automatically worsen the General Health score. We will report detailed comparisons between medication, surgery and reassurance for other conditions in a future paper, but there are some conditions where medication does lead to greater reported benefit than reassurance alone. For individual assessment of benefit, it is important to consider what are measurable score differences. A change of one point on the answer scale (from ‘no change’ to ‘a little better’, or from ‘a little better’ to ‘much better’) for one question will produce an improvement in the overall score of +3.33, and in the relevant factor score of +16.67. The GBI‐5F is therefore most effective when used as tool for audit or research to assess groups of patients. 4.1 Strengths and weaknesses The differences illustrated are from a large national audit completed in 2006. It is unlikely that substantially different results would be obtained on more recent data as there have been few major changes to management options for non‐malignant ORL and audiology conditions. Some might correctly argue that there have been some improvements, such as the technical advances in hearing aids. Such improvements are worth investigating and the GBI‐5F would be a reasonable outcome measure to do this. 
Because information was not available on the severity of the ORL conditions, the benefits must be seen as a reflection of real‐world outcomes where a range of severities are managed. To show the ‘true’ magnitude of the differences requires randomised controlled trials where the severity of the disease and associated disability can be controlled as there will always be a large placebo effect when surgery, or any technological intervention such as hearing aids, is used. Where medical therapy is being investigated then it would be advisable to have a condition‐specific or symptom‐specific questionnaire in addition to the generic GBI‐5F . 4.2 Conclusion The GBI‐5F is a uniquely useful tool, one which can identify differences in the magnitude of benefit from different interventions, across a wide variety of conditions, for non‐surgical as well as surgical interventions, and without the need for any pre‐intervention measurement. Therefore, it should continue to have broad application in routine audit of clinical practice and in research, especially in its revised 15‐question, 5‐factor format. George G. Browning initiated the study, supported the development of the different themes in interpretation and wrote substantial parts of the article. Haytham Kubba performed the data analyses and wrote them up, and contributed to other aspects of the article. William M. Whitmer contributed to the analysis and to drafting the article. WMW was supported by the Medical Research Council [grant number MR/X003620/1] and the Chief Scientist Office of the Scottish Government. The author declares that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. The peer review history for this article is available at https://publons.com/publon/10.1111/coa.13992 . 
This study was performed on pre‐existing data, originally obtained for the Scottish ENT Outcomes Study, which was approved by the Scottish Multi‐Centre Research Ethics Committee.
Accuracy of comparison decisions by forensic firearms examiners
INTRODUCTION Forensic firearms examination, like other pattern evidence analysis disciplines (e.g., latent fingerprints [LFP]), relies on expertise, training, and judgment to make comparisons between questioned, evidentiary specimens and known exemplars for source attribution decisions. The Federal Bureau of Investigation (FBI) Laboratory initiated research to strengthen the admissibility of pattern evidence examination decisions in 2006, starting with fingerprint comparisons, and published the first results in 2011 . In 2009, a committee convened by the National Research Council (NRC) offered recommendations for improvements to forensic science practice . Among these, Recommendation 3 emphasized the need for more studies to establish the scientific bases that demonstrate the validity, reliability, and accuracy of forensic methods. The President's Council of Advisors on Science and Technology (PCAST) published a 2016 report that reviewed the scientific validity of a number of feature comparison analysis methods, including LFP and firearms . PCAST reviewed a publicly available Department of Energy report in the area of firearm examinations. Concerning fingerprints, PCAST concluded that the design of and results from an LFP study were instrumental in establishing the validity of LFP comparisons. The experimental methodology used in both these studies, particularly their open set designs, was described by PCAST as being of high quality. Nevertheless, PCAST, like the NRC report before it, recommended additional research. For the discipline of firearms analysis, Finding 6 prescribed additional well‐designed black box studies to determine error rates, establish foundational validity, and support testimony. Previous studies by forensic firearms examiners and independent researchers have examined the accuracy of firearm examiner decisions [ , , , , ]. Since the publication of the PCAST report, a number of firearms examiners and independent researchers have conducted additional investigations dealing with various aspects of comparative examinations. These include the estimation of examiner error rates [ , , , , , , , , ], statistical evaluation methods in the identification of toolmarks , and efforts to produce either automated or computer‐based objective determinations [ , , , , , , , , , , , ]. Additionally, several compilations contain general discussions and document research efforts as applied to firearms and toolmark examinations [ , , , , , ]. This study reports results related to examiner accuracy, the ability of an examiner to correctly identify a known match or to eliminate a known nonmatch (error rate). It is part of a larger study that included intra‐examiner repeatability and inter‐examiner reproducibility of examination results and also examined effects related to firearm make, tool wear (related to manufacturing or firing order), and human factors (e.g., years of experience and perceived difficulty). Complementary papers will detail the latter aspects. The basic task of the study was the comparison of unknown cartridge case and bullet specimens by firearms examiners who volunteered to participate. Fired bullets and cartridge cases were obtained using firearms from three different manufacturers and a single brand of ammunition. The firearms and ammunition were selected for their propensity to produce challenging and ambiguous test specimens, creating difficult comparisons for examiners.
Firearms were chosen whose design precluded the creation of aperture drag (which is readily identifiable) and that were likely to be highly similar and to display subclass characteristics (having been collected after consecutive or sequential manufacture and incorporating a variable range of firing intervals between the known and questioned specimens in each set). The ammunition used had steel cartridge cases and steel‐jacketed bullets (steel, being harder than brass, is less likely to be marked) . Thus, the study was designed to be a rigorous trial of examiner ability; as a result, error rates derived from this study may provide an upper bound on the possible error in operational casework, as evidentiary specimens may generally be assumed to be less challenging than those used in this study. Although generally analogous to the previous Ulery et al. LFP and Baldwin et al. firearms studies in terms of experimental design and methodology, this study was considerably broader than Baldwin in that it involved both cartridge cases and bullets and took into account additional parameters that might affect examiner accuracies such as challenging comparisons, manufacturing conditions, presence of subclass characteristics, and firing order separation. An open set design, where there may not necessarily be a match for every questioned specimen, was implemented. An open set design avoids the underestimation of false positives inherent in a closed set but may increase the number of Inconclusive decisions. MATERIALS AND METHODS Full details of the planning, design, and logistics of the study, including the rationale for the choices of specific firearms and ammunition, are provided by Monson et al. . This study was designed as a declared double‐blind “black box” investigation, in which the examiners were aware of their participation in a study. Contact between the participating examiner subjects and the experimental team was precluded, both to preserve the anonymity of the participants and to prevent any interactions between participants and investigators that might result in bias [ , , , , ]. Duties related to communication with the participants and generation and scoring of the specimens provided for examination were strictly compartmentalized. No specimen‐specific information was shared between the compartmentalized communication group and the experimental/analysis group. The study was reviewed and approved by the cognizant Institutional Review Board (IRB) and all results were kept anonymous, pursuant to IRB requirements. Given the large number of organizations represented and the number of specimens to be compared over multiple rounds of submission, conducting a fully double‐blind study (where the participants were unaware that they were participating in a research study and neither they nor the study administrators knew the correct answers) would have presented nearly insurmountable logistical challenges, potentially compromising the anonymity of participants, and creating the risk of co‐mingling experimental samples with real casework evidentiary materials. Moreover, conducting a fully double‐blind study was precluded by the statutory and IRB requirements to obtain informed consent from the participating examiners .
Broad calls for volunteers were made through the Association of Firearm and Toolmark Examiners (AFTE) website; by announcements and presentations at national forensic meetings including AFTE, the American Society of Crime Laboratory Directors (ASCLD), and the National Institute of Standards and Technology (NIST); through e‐mail lists maintained by AFTE (AFTE membership was not required for participation); and through national/international listservs. Due to difficulties with mailing bullets and cartridge cases overseas, a decision was made to accept only examiners within the United States. Examiners associated with the FBI were excluded to eliminate possible conflicts of interest. Initially, 256 examiners expressed interest, but only those who were willing and able to commit to the substantial effort that was required persisted. This was a self‐selection process over which we had no control. A total of 173 qualified examiners working in 41 states were active participants in the accuracy study. Of the 157 participants who indicated whether their laboratory was accredited, 18 responded negatively. Median examiner experience was 9 years . The ammunition used was Wolf Polyformance 9 mm Luger (9 × 19 mm). These cartridges are polymer coated, having steel cartridge cases with brass primers and 115‐grain bullets with lead cores and copper‐coated steel jackets . Fired cartridge cases were collected from 10 Jimenez JA‐Nine and 27 Beretta M9A3‐FDE semiautomatic pistols. Fired bullets were collected from 11 Ruger SR‐9c and the same 27 Beretta semiautomatic pistols. The number of specimens collected for this study and for use in other aspects of the research program was 700 specimens per Beretta firearm and 850 specimens per non‐Beretta firearm, for both cartridge cases and bullets, from 28,250 test fires. The majority of the firearms had newly and consecutively or sequentially manufactured barrels and slides. Four used Beretta firearms, chosen at random from those retained from adjudicated cases and therefore of unknown history, served as ground truth nonmatch firearms. Each new firearm was test fired before specimen collection began (30 times for Jimenez and 60 times for the others). The break‐in firings were employed to stabilize internal wear within the firearms and achieve consistent and reproducible toolmarks . All firearms were cleaned with a dry linen patch after firing every 250 cartridges during the collection process. Test packets and comparison sets were assembled using the following parameters: An open set design was used, i.e., there was not necessarily a match for every questioned specimen. Only cartridge cases and bullets fired from the same make and model of firearm were in each comparison set. Each comparison set, consisting of one questioned item and two reference items, represented an independent comparison, unrelated to any other set in the test packet. The overall proportion of known (true) matches in the test packets averaged 33% but varied from 20% to 46% between bullets and cartridge cases within a test packet and across all test packets. The ratio of non‐Beretta‐to‐Beretta specimens (for cartridge cases and bullets) in a test packet was 1:2. Each test packet mailed to examiners consisted of 30 comparison specimen sets, with 15 cartridge case sets and 15 bullet sets. Each comparison set consisted of a single questioned specimen to be compared to two known specimens, the latter fired within the same sequence group of 50 and from the same firearm.
The cartridge case comparisons were 5 sets of Jimenez and 10 sets of Beretta specimens, and the bullet comparisons were 5 sets of Ruger and 10 sets of Beretta specimens. Bar code labeling, distribution, and tracking of specimens are described in Supplement . The firing order was not disclosed to participating subjects but was tracked to evaluate any effect of firearm wear on examiners' analysis results. Results related to firearm manufacture and wear will be discussed in another paper. Specimens were provided to the volunteers through a series of mailings. Via an instruction sheet , participants were specifically asked not to use their laboratory or agency quality assurance processes and not to discuss their conclusions with others. Following examination using a comparison microscope, examiners were asked to render a decision for each individual comparison set analyzed as Identification, Elimination, Inconclusive (A, B, or C), or Unsuitable using the AFTE range of conclusions shown in Table . They retained the prerogative not to declare exclusions based on individual characteristics if that was their laboratory's policy, which applied to 7% of the examiners. The 173 participating firearm examiners provided comparison results for a total of 668 test packets, resulting in 8640 comparisons of fired cartridge cases and bullets. If a decision error was noted, the comparison set was barcode read, and the information was compared to ground truth to verify the error. RESULTS A summary of the evaluations for each of the 4320 bullet and 4320 cartridge case comparisons used to determine accuracy, by reference to the ground truth status of each comparison set, is given in Table . The heading “ID” indicates an examiner made an Identification decision, and Inconcl‐A, B, and C correspond to the Inconclusive determinations defined by the AFTE range of conclusions (Table ). The final column labeled “Other” in Table includes unrecorded conclusions (6 bullet sets and 7 cartridge case sets), those recorded as Inconclusive without a level designation (A, B, or C; this occurred for 1 cartridge case set), or where multiple Inconclusive levels were recorded (9 bullet sets and 3 cartridge case sets). Throughout this study, an error is defined as an instance in which Elimination was declared for a true matching set, or Identification was declared for a true nonmatching set. The counts of such errors are highlighted in bold in Table . Counts recorded as unsuitable or in the other category are not included in accuracy calculations, as no comparison was performed. Summary conclusion percentages are computed by dividing each of the entries in Table by its corresponding row sum and are presented in Table . For example, the proportion of false positives (False‐Pos) equals the total number of incorrect Identification conclusions over the total number of conclusions reached for nonmatching bullet sets:

$$\text{False-Pos} = 100\% \times \frac{\text{ID}}{\text{ID} + \text{Inconcl-A} + \text{Inconcl-B} + \text{Inconcl-C} + \text{Elimination}} = 100\% \times \frac{20}{20 + 268 + 848 + 745 + 961} = 0.704\% \tag{1}$$

and the proportion of Elimination conclusions among matching bullet sets (or false negatives, False‐Neg) is

$$\text{False-Neg} = 100\% \times \frac{\text{Elimination}}{\text{ID} + \text{Inconcl-A} + \text{Inconcl-B} + \text{Inconcl-C} + \text{Elimination}} = 100\% \times \frac{41}{1076 + 127 + 125 + 36 + 41} = 2.92\% \tag{2}$$

after the removal of the comparisons represented in the Unsuitable and Other columns of Table . The numbers of examiners making each type of error are shown in Table .
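To make the arithmetic in Equations (1) and (2) easy to reproduce, here is a minimal Python sketch using the bullet counts that appear in those equations (the cartridge case counts are given in the corresponding table and are not repeated here). The dictionary layout and helper function are illustrative choices, not part of the study's own tooling.

```python
# A sketch of the summary-percentage arithmetic in Equations (1) and (2), using
# the bullet counts that appear in those equations.
categories = ["ID", "Inconcl-A", "Inconcl-B", "Inconcl-C", "Elimination"]

nonmatch_bullets = dict(zip(categories, [20, 268, 848, 745, 961]))   # ground-truth nonmatching sets
match_bullets    = dict(zip(categories, [1076, 127, 125, 36, 41]))   # ground-truth matching sets

def percentages(counts):
    """Row-normalised percentages (Unsuitable/Other comparisons already excluded)."""
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

false_pos = percentages(nonmatch_bullets)["ID"]           # Identification of a known nonmatch
false_neg = percentages(match_bullets)["Elimination"]     # Elimination of a known match
print(f"false-positive rate (bullets): {false_pos:.3f}%")  # ~0.704%
print(f"false-negative rate (bullets): {false_neg:.2f}%")  # ~2.92%
```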
Error prevalence showed no correlation with the type of training received or years of professional experience (data not shown). The false‐positive and false‐negative errors were made by a relatively small subset of the examiners, as was reported previously . No errors, either false positive or false negative, were made by 139 of 173 examiners (80%) when examining bullets; 137 examiners (79%) made no errors of either kind when examining cartridge cases. Errors were made by 34 of the 173 examiners when examining bullets (20%) and by 36 of 173 when examining cartridge cases (21%). Six participants made errors with both specimen types. Examiners showed marked variability in their frequency of making definitive conclusions. This is illustrated in Figure for comparisons of known matching and nonmatching bullets and cartridge cases. Figure shows the percentage of completed comparisons (which varies by the examiner) in which each AFTE decision category was invoked (ordinate) by each of the 173 examiners (abscissa). In charts of matching comparisons, the data were sorted by Identifications in descending order. Nonmatching comparisons were sorted by Eliminations in descending order. Correct definitive conclusions are in green (i.e., Identification of known matches and Elimination of known nonmatches), while errors are in red (incorrect Identification of known nonmatches and Elimination of known matches). In all charts, levels of Inconclusive are coded: Inc‐A in pale green, Inc‐B in yellow, and Inc‐C in amber. It is emphasized that, although the comparison sets were similar in that they were derived from the same ammunition and group of firearms, every comparison set was different, and the number of comparisons completed by each examiner varied. The number of comparison sets reported by different examiners varied from 2 to 17 (matching sets) and from 7 to 28 (nonmatching sets). As also observed among LFP examiners , differences in the observed rate of definitive conclusions are due to a combination of examiner skill, risk tolerance, and the number of, and challenges presented by, the particular comparison sets each examiner received. For known matches, the overall trend in Identification and Inconclusive decision frequency is similar for bullet and cartridge case sets, with the latter showing more examiners making Identifications. As in Table and Figure , comparisons of nonmatching bullet sets resulted in fewer Eliminations, and consequently more Inconclusive decisions, than seen in comparisons of nonmatching cartridge case sets. Figure shows a high rate of definitive conclusions by examiners, particularly for known matches. Many examiners correctly identified every known matching cartridge case set (26% of examiners) or bullet set (20% of examiners) that they compared (i.e., declared no Inconclusives). Figure also illustrates poor performance by a few examiners. The worst performers in each comparison set category declared: 4 of 18 (22%) and 4 of 22 (18%) nonmatching bullet sets to be Identifications; 3 of 11 (27%) and 3 of 20 (15%) of nonmatching cartridge case sets to be Identifications; 4 of 9 (44%) and 4 of 11 (36%) of matching bullet sets to be Eliminations; and 2 of 3 (67%) of matching cartridge case sets to be Eliminations. Examples of comparisons that resulted in false‐positive errors are provided in Figures (bullets) and Figure (cartridge cases). Examples of comparisons that resulted in false‐negative errors are provided in Figure (bullets) and Figure (cartridge cases).
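The per‐examiner percentages plotted in the figure can be thought of as a simple group‐by computation: for each examiner, the share of completed comparisons falling in each AFTE category, sorted by the rate of definitive conclusions. The sketch below illustrates that bookkeeping with a toy table; the rows and names are invented for illustration, and the actual analysis also charts ground‐truth matching and nonmatching sets separately, which is omitted here.

```python
# A toy illustration of the bookkeeping behind the per-examiner stacked
# percentages: share of each examiner's completed comparisons per AFTE category,
# sorted by Identification rate. Rows and names are invented placeholders.
import pandas as pd

decisions = pd.DataFrame({
    "examiner":   ["E01", "E01", "E01", "E02", "E02", "E03"],
    "conclusion": ["ID", "Inconcl-B", "ID", "ID", "Elimination", "Inconcl-C"],
})

pct = (decisions.groupby("examiner")["conclusion"]
                .value_counts(normalize=True)   # proportion of that examiner's comparisons
                .mul(100)
                .unstack(fill_value=0.0))       # one column per conclusion category

pct_sorted = pct.sort_values("ID", ascending=False)  # descending Identification rate, as in the charts
print(pct_sorted.round(1))
```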
In addition to exemplifying erroneous conclusions, Figures , , , illustrate the difficulty level of many of the comparisons in this study. An obvious concern is the possibility that error probabilities are different for individual examiners. If true, then regarding each comparison in the entire collection of examinations of matching bullet sets as having the same probability of being mistakenly labeled an Elimination (for example) is not an appropriate assumption. To examine this possibility, chi‐square tests for independence were performed on tables of counts with 173 rows (one for each examiner) and 5 columns for examination results. For matching sets, the proportions of Identification evaluations versus pooled Elimination and Inconclusive evaluations were compared, and for nonmatching sets, the proportions of Elimination evaluations versus pooled Identification and Inconclusive evaluations were compared. Pooling of counts was used for these statistical tests because errors are relatively rare and, if maintained as a separate category, would result in many zero counts, which are problematic in chi‐square tests, e.g., [ , pp. 156–157]. For both matching and nonmatching sets, and for both bullets and cartridge cases, the hypothesis of independence was rejected ( p < 0.001) and the effect size was large (Cohen's d > 1) [ , pp. 24–27], strongly suggesting that the probabilities associated with each conclusion are not the same for each examiner. Consequently, the most common methods of computing confidence intervals for proportions based on an assumption of equal probabilities for each evaluation category, e.g., the Clopper–Pearson intervals , are not appropriate. A more appropriate procedure assumes that each examiner has an individual error probability, that these probabilities are adequately represented across the population of examiners by a beta distribution (a flexible two‐parameter probability distribution on the unit interval), and that the number of errors made by each examiner follows a binomial distribution characterized by that examiner's individual probability. Estimates and confidence intervals for the false‐positive error rate were also calculated using a beta‐binomial model, as in the Baldwin study ; an example of its use in another application is given in . Usual confidence intervals, in contrast, are based on an assumption that there is only one relevant binomial distribution, and that all examiners operate with the same error probability, an assumption our analysis strongly contradicts. Based on the beta‐binomial model, maximum‐likelihood estimates and 95% confidence intervals for false‐positive and false‐negative error probabilities, integrated over all examiners, were calculated using the R statistics package, including the VGAM package . An expanded explanation is provided in Supplement . The results are summarized in Table . The maximum‐likelihood estimates and confidence intervals are estimates of the mean of the examiner‐specific error probabilities. Table gives the commonly reported indices of sensitivity and specificity computed from our data. Sensitivity is defined as the number of Identification evaluations reported divided by the number of total known matches based on ground truth. It is a measure of the study participants' ability to identify a match between two specimens when they are from the same source. Similarly, specificity is the number of Elimination evaluations reported divided by the number of total nonmatches based on the ground truth.
Note that sensitivity would be 1 minus the false‐negative error rate, and specificity would be 1 minus the false‐positive rate, if Inconclusive evaluations were not allowed (and not accommodating variation in individual examiner error rates). These indices are simple ratios of bulk counts, and in light of our discussion concerning unequal error probabilities among examiners, are intended primarily for comparison to other studies rather than as preferred estimates of meaningful underlying parameters. Sensitivity values in this study were higher than specificity values. By comparison, another study of cartridge cases examined using light microscopy reported 80.08% sensitivity and 12.50% specificity , while a fingerprint analysis study noted 68% sensitivity and 87% specificity . Lower specificity values for firearms comparisons indicate that it was more difficult for examiners to justify an Elimination decision for ground truth nonmatches than an Identification for ground truth matching specimens. This observation accords with research in cognitive science, which has demonstrated that the more similar two images are, the more difficult it is to say that they are different, particularly as it becomes harder to bring the images into alignment . Also contributing to lower specificity is the fact that 7% of examiners were subject to laboratory policy not to declare Elimination based on individual characteristics. In view of the difficulty of the comparisons encountered in this study, lower specificity, i.e., reluctance to declare comparisons “definitely different” (i.e., Elimination), is not unexpected. Anonymized results of all comparisons of bullet and cartridge case sets that were conducted by the participating examiners are provided in Supplement . DISCUSSION 4.1 Accuracy The error rates in this study are somewhat higher than, but generally consistent with, the overall error rates reported in recent firearms studies with an open set design (Table ) [ , , , , , , ]. Exact correspondence in error rates across different studies would not necessarily be expected due to differences in study design, in the firearms and ammunition chosen to produce test specimens, and in the way that error rates are calculated. For cartridge cases, the false‐positive error rates in this study are comparable to the overall error rates reported by Baldwin et al. . Estimated error rates are higher in this study than those reported in a recent study involving bullets fired from 30 consecutively machined Beretta barrels, which reported false‐positive rates of 0.08% (with 95% upper confidence limit of 0.4%) and false‐negative error rates of 0.16% . Experimental parameters of the present study were challenging by design, including the use of particular firearms that tend to mark more poorly, steel rather than brass cartridge cases, steel‐jacketed bullets, and conditions promoting the presence of subclass characteristics . The consecutively or sequentially manufactured barrels and slides used in this study suggest a source of subclass characteristics. Anecdotally, the Jimenez firearm is known to generate gross marks with high occurrences of subclass characteristics both for breech face marks and firing pin impressions , as compared to higher price‐point firearms such as the Beretta. The lack of a tilting barrel recoil mechanism in all the firearms used in this study increases the difficulty level of comparisons due to the absence of distinctive aperture shear marks.
The higher false‐negative rates recorded are possibly also due to greater difficulties when faced with the steel Wolf Polyformance cartridge cases rather than the softer brass used in other studies. Many examiners commented that they felt brass provides better marks for Identification than steel. Lacking access to the firearm that produced the known specimens, which is typically available in casework, also made comparisons more difficult. Several examiners commented that, without having the actual firearm in hand to test, they found it difficult to render an exclusion, particularly when there was no information given as to the firing sequence gap between the collection of the unknown and the collection of the known exemplars. This limitation is elaborated on in a proposed standard . Casework comparisons often offer the opportunity to produce test fires within a relatively close interval from the shooting incident under investigation. (However, studies have shown little effect on Identification performance in the absence of the firearm [ , , , ].) Errors tended to be concentrated within a relatively small number of examiners (Table , Figure ), as observed in other studies [ , , ]. Examination of the data using chi‐square tests for independence showed that the cited error estimates cannot be applied equally to all examiners. Most examiners will perform better than the point estimates in Table , while a few will perform more poorly. No errors were made by approximately 80% of examiners (Table ). Point estimates and confidence intervals were calculated under the assumption (supported by our analysis) that examiners have different error probabilities and that the collection of examiner‐specific probabilities can be represented by a beta distribution. The confidence intervals shown in Table should not be interpreted as bounding the error probabilities of any one examiner. Again, the error probabilities of individual examiners are assumed to be different, and the data available for any one examiner are limited. A valid alternative explanation of the confidence interval is that, if many examiners were randomly selected from the population and individually asked to make a single determination for a (different) comparison set, the intervals specified would bound, with stated confidence, the overall proportion of errors made in this process. It should also be noted that this method is not completely assumption free (even though the assumptions are less restrictive than those on which the Clopper–Pearson intervals are based). Specifically, it is assumed without formal evidence that the beta distribution is appropriate for modeling the population of examiner‐specific error probabilities. The flexibility of the beta‐distribution family (i.e., the variety of shapes the distribution can take, controlled by its parameters) ensures that the methodology can be appropriate for a wide variety of situations. Because the examiner‐specific error probabilities are not directly observable, and there is relatively limited information available on the accuracy of each examiner's determinations, it would be difficult to build a supportable case for a more appropriate distribution. Even if a different distribution were available, the beta distribution is certainly a more appropriate approximation than the single‐value distribution assumed by the Clopper–Pearson approach.
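The paper reports fitting this beta‐binomial model in R with the VGAM package; the sketch below illustrates the same maximum‐likelihood idea in Python with SciPy. The per‐examiner counts are invented placeholders rather than the study data, and the confidence‐interval machinery is omitted.

```python
# Illustrative re-implementation, not the authors' code: fit a beta-binomial
# model by maximum likelihood to per-examiner (errors, trials) counts and report
# the implied mean error probability. The counts below are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

errors = np.array([0, 0, 1, 0, 2, 0, 0, 1, 0, 0])          # hypothetical errors per examiner
trials = np.array([18, 22, 20, 15, 19, 21, 17, 20, 16, 18])  # hypothetical completed comparisons

def neg_log_lik(params):
    """Negative beta-binomial log-likelihood summed over examiners."""
    a, b = np.exp(params)              # optimise on the log scale so a, b stay positive
    return -np.sum(betabinom.logpmf(errors, trials, a, b))

fit = minimize(neg_log_lik, x0=[0.0, 3.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"estimated mean error probability: {a_hat / (a_hat + b_hat):.4f}")
```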
Given the above provisos, the results in Table should still be considered approximate since the model of a firearm and the positioning of known and questioned specimens in the firing sequence for a firearm also appear to affect error probabilities, and these considerations are not taken into account in this calculation; these effects will be discussed in another paper. Still, differences among examiners are likely the greatest source of nonindependence in the data, and the assumptions underlying the method used here are more appropriate than those upon which simpler methods are based. The participants in this study were directed to use the AFTE range of conclusions , which predominates in North America, to express their comparison decisions. Alternative scales, which describe conclusions in terms of strength of support, are under consideration . If such a scale is adopted by the community, the value of studies using the AFTE range will endure. The proposed scale is highly comparable to the AFTE range, being essentially a change in nomenclature. The term Elimination is replaced by Exclusion, while Identification remains. The middle three conclusions of the proposed scale closely approximate the definitions of the three AFTE levels of Inconclusive. 4.2 Inconclusive decisions Forensic firearms comparison must be regarded as at least a two‐level process. The first level is an evaluation of the class characteristics. If they are congruent, the second step involves comparing the quality and quantity of microscopic correspondence of individual characteristics. (Individual laboratory policy may permit elimination using a difference in microscopic marks at the second step.) As with any instrument (the examiner being the instrument), there are limits on its ability to interpret the quality/quantity of the data/information presented. For many reasons, fired bullets and cartridge cases do not always carry marks sufficient to support a definitive conclusion of Identification or Elimination [ , , ]. Sufficient agreement in quality and/or quantity of individual characteristics is dependent on toolmark reproduction and/or survivability. The following factors may influence toolmark reproduction (some apply only to casework specimens): Limited obturation—obturation is the enlargement of a cartridge case or a bullet base to seal the chamber during the expansion of gases. When there is incomplete/limited obturation, the reproduction of the toolmark is negatively affected. Factors such as ridged substrates and/or loose manufacturing tolerances can impact the reproduction of a toolmark. Intermediate substrate—whether intentional (e.g., primer lacquer) or accidental, an intermediate substrate such as debris (e.g., lubricant, dirt, and sooting) can inhibit toolmark reproduction. Interference—a secondary toolmark obstructs comparison of the primary toolmark, e.g., cartridge case mouth striations on a bullet merging with striations produced from the barrel. Longevity of toolmark (persistence)—through long‐term use of a tool, erosion of the original toolmark can occur. High velocities—when velocities are high, the increased pressure on the bearing surface of a bullet can reduce toolmark reproduction. Intentional alteration—numerous methods to obliterate an original toolmark through mechanical means exist (e.g., sanding and grinding) to conceal the originally manufactured toolmark; however, this generates a new toolmark that is different from the original.
Environmental exposure—depending on the environmental conditions and/or the metal substrate, the original toolmark is susceptible to alteration due to corrosion. Damage—due to the velocity of an impact or the active nature of a crime scene (e.g., evidence being trodden upon), toolmarks on bullets/cartridge cases can be damaged, obscured, or obliterated. Substrate—may not be suitable for toolmark reproduction (e.g., hard metallics). When there is inadequate reproduction of a toolmark, the quality and/or quantity of individual characteristics available for comparison may be insufficient to conclude an Identification or Elimination. The forensic community and independent researchers are in agreement that the appropriate recourse for an examiner is then a decision of Inconclusive [ , , , , , ]. By recording an Inconclusive decision, an examiner is providing a conclusion that the information/data observed do not meet the high standards for Identification or Elimination. Rather than guessing, they say they are unsure and return an Inconclusive decision. Biederman et al. make trenchant arguments for the utility of Inconclusive decisions and for why they should not be considered errors. First, they point out that calling an Inconclusive conclusion from a known match comparison an error is a contradiction in terms, given that an Inconclusive decision makes no reference to ground truth.
Another study involving the analysis of 12,279 palm prints by 226 examiners found a high level of variability in decisions of value, with 25% disagreement from consensus . The latter study also found the unnerving result with respect to casework that, in 45 instances, an Identification conclusion differed from the consensus decision of exclusion (reached by a large number of examiners), but the consensus was wrong in 36 of those instances. Seeking consensus was first suggested to support an accused's right to appeal or to assess whether particular latent fingerprint examiners within a laboratory may be overly cautious or aggressive—but “not necessarily wrong in absolute terms” [ , section 3.3.6.2]. (Technical review of laboratory reports prior to release, integral to many quality‐assurance protocols, is a related concept that usually involves two experts, while consensus involves a group.) Counting known matching and known nonmatching sets that were judged Inconclusive (at any subdivision) would reveal examiners who treat some comparisons that offer minimal, inadequate, or ambiguous discriminating information either more conservatively or aggressively than other examiners but might also imply they made errors. Several practical limitations beset the method of consensus, including operational overhead, determining membership and size of the august group, use of majority vs. unanimous opinion, and differences in outcome arising from differences in group membership, training, and level of conservatism. In reporting the results of the present black box study of examiner accuracy, which is intrinsically agnostic to process, we have taken the position that Inconclusive conclusions shall not be considered errors. Combining the false positive error rate with the rate of Inconclusive decisions would exemplify the same ubiquitous “systematic error” due to limitations in our ability to sense or measure any natural phenomenon. It would result in an inappropriate and misleading amalgam of an important metric (false‐positive rate) with one that is noninculpatory and inherently agnostic (Inconclusive rate) . Inconclusive decisions are not systematic errors; rather, they are an essential part of the firearms discipline, and they provide a check against bad Identifications. 4.3 Bias mitigation A reliable research study is designed to anticipate, recognize, and mitigate potential sources of bias . Some initial volunteers discontinued their participation in the project without reviewing any specimen packets once they realized that their daily workload was incompatible with the high level of effort required to participate. Research can rarely test an entire population, so it must address the representativeness of a population sample. We solicited volunteers among American firearms examiners representing every employment situation except self‐employment. Thus, it was not a random sample, but a sample of convenience. As volunteers, the participants were, of necessity, self‐selected and aware of study participation due to statutory requirements for the protection of human subjects in research (Materials and Methods section) and by being willing and able to devote extensive time and effort above and beyond the demands of casework. Selection bias is a potential concern that forms the basis for assertions that self‐selection/voluntary participation results in a nonrepresentative sample (the participants) from the sampling frame (all forensic examiners).
The implied consequences of nonrepresentativeness are that the participants self‐select into the study for reasons that might cause them to perform better than the examiner population at large, lowering the calculated error rate. There is no empirical basis for an assumption of superior performance by those who opted for participation. Our assumption in designing the experiment is that any potential sources of bias are compensating, and thus, the sample pool is sufficiently representative of the larger population. We specifically requested that voluntary participants be “qualified examiners,” on the assumption that most crime laboratories do not classify their qualified examiners into various arbitrary and subjective performance or proficiency levels. An examiner is either “qualified” or not; “proficient” or not, without further gradation. Another source of potential bias is the Hawthorne effect, which is a phenomenon in which individuals behave differently when they are being observed . Similar effects have been postulated related to selection bias, viz., that individuals who engage diligently for the duration of the study are somehow more skilled as examiners, while those who terminate their role in the research prior to completion are less skillful or less conscientious. The resulting imputation is that error rates are artificially suppressed. In the case of our research, compensating phenomena offset potential Hawthorne effects. Increased “diligence” in performing the comparisons for the experiment, which might result in “better” results, should be offset by the absence of typical casework protocols that would reduce errors further, such as secondary reviews and blind verifications. Furthermore, we could reasonably postulate diminished “diligence” since the experimental specimens do not represent “real” casework, with all the attendant consequences. As noted above, some enrolled examiners did not complete the full course of comparisons. The degree of participation in processing multiple submissions of sample sets among examiners making false‐positive Identifications of either bullets or cartridge cases was similar to that observed among examiners making no false Identifications. To test for substructure in the sample population due to unequal participation, results were stratified according to what was reported by two distinct groups of examiners: those who performed 345 bullet comparisons and those who performed 690 comparisons (Table ). Because the reported number of errors is small, being equal to 0 in one case, a two‐sided Fisher's exact test was used to test for nonrandom associations between results in the two subgroups. The exact probability is 0.309, indicating no significant difference between the accuracy of examiners who withdrew from the study and those who remained. Nor were there any indications of population substructure due to the use of the CMS method (which involves determining whether the number of consecutive matching striae meets a minimum criterion for correspondence that was empirically determined from best known nonmatches ) or whether the employing laboratory was accredited. Among 10 examiners who committed false‐positive errors with bullets (Tables and Figure ), one included CMS as part of the comparison process (a very small fraction of all examiners did so; the number is withheld to protect anonymity), and another is employed by a nonaccredited laboratory (of the 18 participating examiners in nonaccredited laboratories).
Similarly, for the 18 examiners making false‐positive Identifications of cartridge cases (Table and Figure ), one (the same one) is a practitioner of the CMS approach for bullets and another is employed by a nonaccredited laboratory.
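As a footnote to the substructure check described earlier in this subsection, the following sketch shows the mechanics of a two‐sided Fisher's exact test on a 2 × 2 table of error versus non‐error counts for two participation groups. The counts are placeholders invented for illustration, so the resulting p‐value will not reproduce the reported 0.309.

```python
# Mechanics of the two-sided Fisher's exact test used for the substructure check.
# The 2x2 counts below are hypothetical placeholders, not the study data.
from scipy.stats import fisher_exact

#          errors  non-errors
table = [[2, 343],   # hypothetical: examiners in the 345-comparison group
         [0, 690]]   # hypothetical: examiners in the 690-comparison group

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher's exact p = {p_value:.3f}")
```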
In the case of our research, compensating phenomena offset potential Hawthorne effects. Increased “diligence” in performing the comparisons for the experiment that might result in “better” results should be offset by the absence of typical casework protocols that would reduce errors further, such as secondary reviews and blind verifications. Furthermore, we could reasonably postulate diminished “diligence” since the experimental specimens do not represent “real” casework, with all the attendant consequences. As noted above, some enrolled examiners did not complete the full course of comparisons. Among the examiners making false‐positive Identifications of either bullets or cartridge cases, their degree of participation in processing multiple submissions of sample sets was similar to that observed among examiners making no false Identifications. To test for substructure in the sample population due to unequal participation, results were stratified according to what was reported by two distinct groups of examiners: those who performed 345 bullet comparisons and those who performed 690 comparisons (Table ). Because the reported number of errors is small, being equal to 0 in one case, a two‐sided Fisher's exact test was used to test for nonrandom associations between results in the two subgroups. The exact probability is 0.309, indicating no significant difference between the accuracy of examiners who withdrew from the study and those who remained. Nor were there any indications of population substructure due to the use of the CMS method (which involves determining whether the number of consecutive matching striae meets a minimum criterion for correspondence that was empirically determined from best known nonmatches) or whether the employing laboratory was accredited. Among the 10 examiners who committed false‐positive errors with bullets (Tables and Figure ), one included CMS as part of the comparison process (a very small fraction of all examiners did so; the number is withheld to protect anonymity), and another is employed by a nonaccredited laboratory (of the 18 participating examiners in nonaccredited laboratories). Similarly, for the 18 examiners making false‐positive Identifications of cartridge cases (Table and Figure ), one (the same one) is a practitioner of the CMS approach for bullets and another is employed by a nonaccredited laboratory.

CONCLUSIONS

This black box study demonstrated a high level of performance by 173 qualified firearms examiners who performed 8640 challenging comparisons. No false‐positive or false‐negative errors were made by the majority of examiners when examining bullets or cartridge cases (80% and 79% of examiners, respectively). Estimates for overall false‐positive and false‐negative error probabilities were calculated as 0.656% and 2.87% for bullets and 0.93% and 1.87% for cartridge cases, respectively. The 95% confidence intervals for false positives and false negatives are (0.305%, 1.42%) and (1.89%, 4.26%), respectively, for bullets. Similarly, for cartridge cases the 95% confidence intervals are (0.548%, 1.57%) and (1.16%, 2.99%) for false positives and false negatives, respectively.
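To make the statistical devices used in this discussion concrete, the following minimal sketch (Python with SciPy) shows how a two-sided Fisher's exact test on subgroup error counts, and a beta-binomial treatment of per-examiner error probabilities of the kind referred to in the Conclusions, might be set up. Every count in the sketch is an invented placeholder for illustration only; none of it is data from this study.

# Illustrative sketch only: every count below is a hypothetical placeholder,
# not data from this study.
import numpy as np
from scipy.stats import fisher_exact, betabinom
from scipy.optimize import minimize

# (i) Two-sided Fisher's exact test on a 2x2 table stratified by participation:
#     rows = subgroup (remained in study vs. withdrew), columns = (errors, correct results).
table = np.array([[3, 1377],   # hypothetical counts for examiners who completed 690 comparisons
                  [1,  689]])  # hypothetical counts for examiners who stopped at 345 comparisons
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher exact p = {p_value:.3f}")  # the study reports p = 0.309 for its actual tabulation

# (ii) Beta-binomial likelihood: examiner i performs n_i known-nonmatch comparisons
#      and makes k_i false positives; error probability varies from examiner to
#      examiner according to a Beta(a, b) distribution.
k = np.array([0, 0, 0, 1, 0, 2, 0, 0])          # hypothetical errors per examiner
n = np.array([20, 20, 20, 20, 40, 40, 40, 40])  # hypothetical comparisons per examiner

def neg_log_likelihood(log_params):
    a, b = np.exp(log_params)                   # log-parameterisation keeps a, b > 0
    return -betabinom.logpmf(k, n, a, b).sum()

fit = minimize(neg_log_likelihood, x0=np.log([1.0, 100.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"estimated overall error probability = {a_hat / (a_hat + b_hat):.4%}")

The fitted Beta parameters summarise examiner-to-examiner variation; confidence limits such as those reported below could then be obtained from the fitted model, for example by profile likelihood or a parametric bootstrap, although the study's own interval procedure is not reproduced here.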
This study presented a challenging test of examiner capabilities by using: a fully randomized open set design; firearms that do not produce aperture shear/firing pin drag; conditions promoting the production of subclass characteristics (via separation in manufacturing sequence and/or related to aspects of firearm design); steel ammunition (cartridge cases and jacketed bullets); variable separation in the firing order (up to 850 firings); no verification by a second examiner; and unavailability of firearms or barrel casts, either for further analysis or production of additional reference specimens. The majority of errors were made by a limited number of examiners. For example, 13 examiners account for almost half of all the errors (54 of 112). A subset of these 13 is the 6 most error‐prone examiners, who accounted for almost 30% of the total errors (33 of 112). Because error rates vary by examiner, 95% confidence limits on error probabilities were estimated using a beta‐binomial model that assumes a separate error probability for each examiner. The population of participating examiners was a sample of convenience, but there are no discernible indications of any characteristics that might set those examiners apart, including duration of study participation, laboratory accreditation, or use of CMS. Conclusions are somewhat tenuous because the fraction of participating examiners who made errors is small. The study sample was likely reasonably representative of the population of qualified firearms examiners employed by public forensic laboratories in the United States. Examiners demonstrated a high rate of definitive conclusions, particularly for known matches. Differences in the observed rate of definitive conclusions are attributable to the challenges presented by the particular comparison sets each examiner received, as well as to examiner skill and risk tolerance. For ground truth matches, the overall rate of Identification was similar for bullet and cartridge case sets, while the rate of Inconclusive decisions was somewhat higher for the latter. For ground truth nonmatches, relative to comparisons of cartridge case sets, bullet sets resulted in fewer Eliminations and consequently more Inconclusive decisions. The results of this study add to the ever‐increasing body of empirical data showing that firearms examiners conduct comparisons with a high level of accuracy. This and related studies address the “known or potential rate of error” of the Daubert court, Recommendation 3 of the NAS Report, and several recommendations of the PCAST report by measuring the accuracy and reliability of forensic analyses. Black box studies assess the overall reliability of a forensic discipline, not that of any particular examiner. The present study included a reasonably large and representative sample of practicing examiners, and many comparisons in a range of difficulties, conducted in a declared double‐blind, open‐set format. The results will offer additional resources to the courts as they weigh the admissibility and value of firearms testimony.

The authors have no conflicts of interest to declare.

Supporting information for this article is provided in Appendix S1, Appendix S2, and Appendix S3.
‘That's what makes me better’: Investigating children and adolescents' experiences of pain communication with healthcare professionals in paediatric rheumatology
INTRODUCTION

Paediatric rheumatology receives a wide range of referrals for which chronic musculoskeletal pain is the main presenting concern (Clinch & Eccleston, ; Davies & Copeman, ; Kimura & Walco, ; McGhee et al., ). Children and adolescents' experiences of chronic musculoskeletal pain can vary widely along multiple dimensions such as the location, intensity and quality of pain; moreover, pain may also have an associated impact on both physical function and emotional well‐being (Edmond & Keefe, ; Huguet et al., ; Khanom et al., ; Schanberg et al., ). Comprehensive and developmentally appropriate assessment and communication about pain features are essential for validating an individual's report and experiences of pain (Defenderfer et al., ; Lang et al., ), as well as for informing treatment approaches and achieving optimal outcomes (Hadjistavropoulos et al., ; Hirschfeld, ). In UK paediatric rheumatology settings, some healthcare professionals perceive the assessment and communication of chronic musculoskeletal pain in children and adolescents to be hindered by limited time and resources and by constraints in healthcare professionals' training in how to ask about and address paediatric pain (Lee et al., ; Lee et al., ). However, other research investigating healthcare professionals' perspectives (particularly nurses) has found that when pain communication does occur in clinical practice, healthcare professionals believe these conversations help to contextualize pain, educate and empower patients and support patient and family coping with pain (Jordan et al., ). Pain education in paediatrics (communicating about and teaching a child/adolescent about the underlying biopsychosocial mechanisms of pain, ultimately leading to modifications in their concept of pain) has been found to be a key component of effective multi‐disciplinary pain management (Harrison et al., ). Validating children's and adolescents' pain experiences in this way has been specifically associated with improved pain outcomes. Research exploring the perspective of children and adolescents on healthcare communication, particularly about chronic pain, is limited (Beresford & Sloper, ). Some literature has highlighted that children and adolescents believe that healthcare professionals do not understand their pain and that healthcare professionals rarely provide them with strategies on how to manage pain in specialized chronic pain programs (Dell'Api et al., ). Children and adolescents with juvenile idiopathic arthritis report that it can be challenging to engage directly in conversation with healthcare professionals, often relying on parents to relay important information to healthcare professionals (Lundberg et al., ). Further research exploring pain communication in paediatric rheumatology practice is needed, given the inconsistent and incomplete perspectives in the extant literature and the prominent role that pain has for children and adolescents being managed in this setting. Therefore, the aim of this study was to investigate children and adolescents' experiences of pain communication from their history of interactions with healthcare professionals in paediatric rheumatology in the United Kingdom.
In particular, we were interested in children's and adolescents' experiences of key stakeholders in pain communication (e.g., which healthcare professionals communicated about pain and the role of parents in these conversations), children's and adolescents' insights about the structure and content of these conversations as well as the appropriateness of pain communication styles from children and adolescents' perspectives. METHODS 2.1 Design This study was a qualitative semi‐structured telephone interview study with children and adolescents. The study has been structured in accordance with the consolidated criteria for reporting qualitative research (COREQ) (Tong et al., ) (Please see Appendix for a completed COREQ checklist). 2.2 Participants Participants were recruited from specialist paediatric rheumatology centres at three tertiary paediatric hospitals across the United Kingdom. Children and adolescents were considered for inclusion if they were aged between 5 and 19 years of age and were under the care of the paediatric rheumatology team and able to communicate in English. Exclusion criteria included children and adolescents who were within 3 months of discharge from the paediatric rheumatology service, being transferred to adult services and not being able to communicate in English. There were no exclusion criteria based on cognitive impairment. A purposive sample ( n = 118) of children and adolescents of different ages, different conditions and different durations of illness were approached for participation in the study to ensure a diverse sample of experiences and perspectives were captured. 2.3 Procedure Ethical approval to conduct this study was provided by the East Midlands Nottingham Research Ethics Committee (20/EM/0195). Eligible participants were identified from clinical databases by healthcare professionals within the paediatric rheumatology team and clinical research nurses. Written information about the purpose and procedures of the study was sent directly to potentially eligible children and adolescents (and their parents). Alternatively, healthcare professionals and clinical research nurses discussed the study with eligible children and adolescents during/immediately after their clinical consultations. These discussions were structured with standardized information which had been provided to healthcare professionals by the research team. Interested participants were asked to return a reply slip in a pre‐paid envelope to the research team. The lead researcher (RRL), a female postdoctoral researcher with experience in conducting qualitative research with families and healthcare professionals, contacted interested children, adolescents and parents to discuss the study further. The researcher had no contact with children and adolescents prior to this study. For children and adolescents who wished to participate in the study, a convenient date and time for a telephone interview were arranged. Informed consent from parents and assent from children and adolescents under 16 years of age was audio‐recorded before the interview took place. For participants aged 16 years or older, informed consent was provided by the adolescent themselves and audio‐recorded in the same way before the interview began. All interviews with participants were conducted by the lead researcher (RRL) between April and October 2021 (during the COVID‐19 pandemic). Social distancing guidelines were in place during the data collection period for this study, therefore telephone interviews were conducted. 
Telephone interviews are considered to be an acceptable and valuable mode of interviewing, arguably yielding as rich and reliable data as interviews conducted face‐to‐face (Sturges & Hanrahan, ). Telephone interviews have several advantages which were particularly useful in the context of interviewing children and adolescents. For example, telephone interviews are useful for engaging with hard‐to‐reach groups which were particularly pertinent to the current study in which the researcher required access to children/adolescents via their parents or guardians. Furthermore, telephone interviews were viewed as more feasible and time‐efficient by participants. For the current study, participants were not required to travel to interview locations and interviews could be more easily co‐ordinated around school and other commitments. It has been found that telephone interviews can be viewed as contributing to a stronger sense of anonymity by participants. This was clearly apparent in the current study where we found that children and adolescents talked freely and openly with the researcher during the interviews. At the beginning of the interviews, children and adolescents were asked to provide information about their date of birth, gender, diagnoses, age at diagnosis (of the condition for which they were referred to paediatric rheumatology for) and medications. Where children and adolescents were unable to provide this information, the interviewer liaised with parents to capture these details. Following this, children and adolescents were asked to provide information about their interactions with healthcare professionals within the paediatric rheumatology team, including how many healthcare professionals in total they saw from the team, which healthcare professional they saw the most frequently, and which healthcare professional they believed talked to them the most about pain during their consultations. The interview topic guide was based upon an earlier study by the research team which explored healthcare professional perspectives on pain assessment and communication in paediatric rheumatology (Lee et al., ). The interview schedule was initially drafted by the lead researcher (RRL) and refined through meetings with the study team. There was additional direct input from children and adolescents from patient advisory groups (specifically YOURRHEUM which is a young person's advisory group for those with rheumatic conditions, https://yourrheum.org/ ), charities and individual patient collaborators on the project (see Table for final interview schedule). This involvement led to changes in the specific wording and order of the interview questions asked but not in the main topic areas covered by the interview topic guide. At the end of the study, participants received a study debrief sheet. A letter of participation for participants was sent to hospital sites for storage within their clinical notes. All children and adolescents who took part in the study were provided with a £20 shopping voucher and a certificate of participation in the study. All semi‐structured interviews were audio‐recorded using an encrypted audio‐recorder and transcribed verbatim for analysis. 2.4 Data analysis A framework analysis approach was used to understand the similarities and divergences in experiences of participants (Ritchie & Lewis, ; Ritchie & Spencer, ). 
This analytical approach to data was selected after consideration to other approaches as framework analysis allows multi‐disciplinary teams of researchers to manage, interpret and reduce large data sets, whilst still retaining a holistic and comparable overview of themes across and within the entire data set (Gale, 2013). The theoretical underpinnings of this approach originated from social policy research but it has become increasingly advocated for use in medical and health research because of these advantages. Two authors (RRL and DM) were the main data analysts. DM was a PhD student at the time of the study being conducted (MSc, female) and had experience of conducting qualitative research. NVivo version 12 (QSR International, Warrington, UK) was used to facilitate qualitative data analyses. Consistent with recommended procedures for framework analysis, our analytical approach to the data involved: (1) familiarization, (2) coding, (3) identification of a thematic framework, (4) indexing, (5) charting and (6) interpretation (Ritchie & Lewis, ; Ritchie & Spencer, ). These procedures were performed in a non‐sequential order, with the researchers going back and forth between the steps throughout the analysis of data. In the familiarization stage (step 1), the two main data analysts read/re‐read interview transcripts and listened/re‐listened to audio‐recorded interviews. After familiarization, both analysts began coding the transcripts and created reflective notes about data and codes independently (step 2). In step 3, these codes were used to inform and build a written ‘working’ thematic framework which was developed based upon a priori aims/questions (deductively) and emerging patterns of experiences and perspectives (inductively) from the participant accounts which were being coded. At this point, the framework was described as a ‘working’ framework as further iterative coding, interpretation and re‐coding of data where appropriate was fed back into the framework so it became more exhaustive and robust over time. The ‘working’ framework was discussed amongst the research team until clarity and consensus were gained about the initial themes and codes identified. In step 4 of the analysis, the framework was then applied to the sorting of data, with relevant fragments of the data indexed in NVivo according to the themes outlined in the framework. During this indexing phase, there was even further refinement of the initial framework during which new themes and sub‐themes were identified by the two main data analysts and fed back in to the framework again. In step 5 of the analysis, indexed data were transformed into a chart, which consisted of columns (themes as identified in the framework) and rows (participant interviews) complete with analytical summaries added by the two analysts throughout columns and rows. Please see Appendix (Framework analysis charting process of participant quotes within and across themes) for the full indexing and charting process. Once the chart was complete across all themes and interviews, the two analysts looked across all summaries to interpret the data (step 6 of the framework analysis). Connections between themes and participant accounts were summarized into narratives. 
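Purely as a schematic aside (the study used NVivo rather than any programming tool), the charting step can be pictured as a matrix with one row per participant interview and one column per theme, each cell holding an analytical summary. The short pandas sketch below illustrates only that structure; the participant labels and cell summaries are invented placeholders, and the column labels borrow the four overarching themes reported later in the Results.

# Schematic illustration only: the study itself used NVivo for indexing and
# charting, and the summaries below are invented placeholders.
import pandas as pd

themes = ["Co-ordination of pain communication", "Barriers to pain communication",
          "Facilitators of pain communication", "Dissatisfaction with pain communication"]
participants = ["Participant 01", "Participant 02", "Participant 03"]

# One row per interview, one column per theme; cells hold analytical summaries.
chart = pd.DataFrame(index=participants, columns=themes, dtype="object")
chart.loc["Participant 01", "Barriers to pain communication"] = "Finds broad questions hard to answer"
chart.loc["Participant 02", "Co-ordination of pain communication"] = "Expects clinician to raise pain"

# Reading across a row gives one participant's account; reading down a column
# supports comparison of a theme across the whole data set.
print(chart)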
Throughout all of the steps involved in the framework analysis, the analysts (RRL and DM) each kept a reflexive journal independently to keep account of their philosophical standpoints, their ongoing thought processes about the data and any of their potential biases which they felt were influencing the interpretation of data. This is in line with recommended techniques to establish transparency and enhance trustworthiness in the identification of qualitative themes (Lincoln & Guba, ). Broader analytical discussions about the interpretations of data included consideration of these reflexive accounts amongst all members of the research team. The themes retained and presented were decided upon by consensus reached across all members of the research team, who were in agreement about the relevance and importance of the themes in light of the research question posed in the study. There were no retention criteria with regard to the endorsement of themes or sub‐themes by a specified number of participants. The research team decided, before the interviews and analysis were conducted, that data collection would stop when the data being collected were repeating what had already been expressed in participant accounts captured in the study. This is a concept called ‘data saturation’. Data saturation occurs when any further data collection is considered to be unnecessary as no new additional data are being identified which could further inform the themes and sub‐themes (Saunders et al., ). In the current study, the point of data saturation occurring was initially determined by the two researchers who were analysing the data (RRL and DM). The analysts' perceptions about data saturation being reached were discussed amongst all members of the research team, to ensure that all research team members were in agreement with this before data collection ended. Data collection ended after 26 interviews, when the whole research team was in consensus that data saturation had occurred.
RESULTS

3.1 Participant recruitment and sample description

The final study sample of 26 study participants had a median age of 14 years (Range = 6–18 years) and a median duration of illness of 3 years (Range = 1–11 years) (See Table ).
Twelve children/adolescents were recruited from one hospital, and seven were recruited from each of the other two hospitals involved in the study ( n = 14 combined). The hospitals have not been specifically named to protect the anonymity of the children/adolescents and healthcare professionals involved in the study. Diagnoses included juvenile idiopathic arthritis (JIA, n = 16), Chronic Idiopathic Pain Syndromes (CIPS) (including Chronic Regional Pain Syndrome [CRPS], n = 5) and Ehlers Danlos Syndrome (EDS)/hypermobility ( n = 5). Interview times ranged from 14 to 51 min (Mean = 18.73 min). Participants reported having consultations with between one and six different healthcare professionals in the paediatric rheumatology team. Participants reported the most frequent contact with rheumatologists (69.23%). Rheumatologists were the healthcare professionals with whom children and adolescents were most likely to discuss pain (53.85%) (See Table ).

3.2 Themes, subthemes and interpretation

Four overarching themes were identified: (1) Co‐ordination of pain communication, (2) Barriers to pain communication, (3) Facilitators of pain communication, (4) Dissatisfaction with pain communication.

1.1 Expectation of pain communication

Children and adolescents unanimously agreed that it was important for paediatric rheumatology healthcare professionals to ask them about their pain at some point during their consultations, as is evident in the quotes below. There was an expectation that pain should be addressed in consultations because this was part of the professional's job and how children and adolescents with pain ‘got better’; “It's important because that's their job, isn't it? Like, I think that's important, so you know what's happening”, Participant 2, 15 year old male, JIA. “Because that's the thing about pain, it's a question I'm obviously expecting. The consultant has been very, very helpful to me and I understand that speaking about my pain is the best way to get help.
So, I am as comfortable as I can be speaking about it because I understand it is really important to my treatment”, Participant 18, 16 year old female, JIA. “Because that's what makes me better”, Participant 19, 6 year old female, JIA. 1.2 Purpose of pain communication As described in the following quotes from the perspective of children and adolescents, the purpose of pain conversations was so that the healthcare professional was able to see how pain had changed, decide what the best treatment was and to investigate any progress with treatments since their prior consultation; “Because if they ask you how the medication's affecting you and how you are feeling, then they'll be able to work out whether it's a good thing that you're on the medication or a bad thing, they'll know like what other things you need”, Participant 25, 15 year old male, JIA. Participants also talked about how their pain reports were important for informing healthcare professionals on how to help other children and adolescents with similar pain presentations in the future; “So they can help me and so they can help other people, because if they get symptoms that I have for something they might think one day when someone else comes in, like, oh, she used to have this, could follow up on that.”, Participant 24, 13 year old female, EDS/Hypermobility. It was important to children and adolescents that they could tell healthcare professionals everything about their pain, as all pain information had an important purpose to communicate with them; “I tell them how bad my pain is and I try to tell them in‐depth what it's like and stuff. I'd say that all pain is important. I think it's very important because if they don't know what sort of pain I'm in they might suggest the wrong thing and the wrong treatment and that could make things worse”, Participant 13, 14 year old female, JIA. 1.3 Mixed roles and values in parents' pain reporting Children and adolescents had mixed perspectives on the role and value of their parent's involvement in pain communication with healthcare professionals, as demonstrated in the range of quotes provided below. Some children and adolescents reported that healthcare professionals directed questions about pain to their parents. Some children and adolescents viewed this as problematic, creating particular difficulties for those who did not want to reveal the full extent of their pain to their parents; “But they were asking all the questions to my mum and I was sat there thinking this is my appointment, it should be me speaking. My mum doesn't know the pain that I'm in. She can see the pain that I'm in but she can't feel the pain I'm in. And then I stopped letting my mum come to appointments with me because it started to turn out like I'd be sat there in the corner whilst my mum was having a consultation with the doctors. I show my mum what I want…I hide my pain very well. So the doctor could be sat there asking my mum how I've been and mum's been like, yeah, yeah, she's fine, she's fine, she's not complained much about it this week, but realistically I could have”, Participant 3, 18 year old female, EDS/Hypermobility. 
For other children and adolescents, parents were the main person they told about their pain, more so than healthcare professionals, which made parents valuable advocates in pain reporting, as described below; “Yeah, my mum always comes in….I think my mum talks about it more than I do because you'll get more out of my mum than you do out of me… sometimes they'll ask me, but if they want to get anything out properly, they'll ask my mum…I'll be like, I'm fine, or something like that, and then my mum will be like, you're not fine…If they want to know about how bad the pain has really been, they know to ask my mum and not me. Because I'll tell my mum but I won't tell them”, Participant 15, 16 year old female, CIPS. As the below quote suggests, children and adolescents highlighted how parents could describe an outside perspective about pain, which was useful for identifying where the child/adolescent had been struggling but not noticed these difficulties themselves; “Sometimes I don't notice like my walking and stuff, so she'll describe like if my walking's been bad or something like that…I think my mum talks more than me…my mum gets more worried about like if she notices something that I don't notice. So it's more like smaller things that she's picked up on that's starting to occur”, Participant 22, 16 year old male, CIPS. Parents were also key for reminding children and adolescents to report pain episodes they had forgotten about; “Yeah, if I've forgotten something like, cause I'm quite forgetful sometimes. But that…other than that she just leaves me to talk to them.”, Participant 10, 13 year old female, JIA. 1.4 Methods in pain communication Children and adolescents referred to being verbally asked to rate their pain from 1 to 10 or to rate their pain upon a body manikin tool which was used by a range of healthcare professionals. Participants talked about the difficulties of using these tools, as the pain they were asked to reflect upon may have changed in the past week, as evident in the following quotes; “When you're kind of in the waiting area to go for your appointment, they give you the sheets of paper to answer how you've been doing. They kind of give you like a scale to answer how much pain you've been in. But the only thing with those is it's how you've been feeling for the past month or past week, which I find quite hard because sometimes I could be feeling quite bad one day but then good the next”, Participant 11, 12 year old female, JIA. “Then she'll bring out this piece of paper with like a body on it, and then I'll tell her where my joints have been hurting and everything”, Participant 25, 15 year old male, JIA. Other participants mentioned ‘surveys’ or ‘quizzes’ such as the ‘smiley faces’. Participants talked about how pain ratings using these tools would be returned without being asked for further elaboration from the healthcare professional; “Like one to ten, things like that… I think they write it down and put it in (the clinical notes)  and that's it. I know the consultant used to have a little stick man and label where the pain was…They'd go into more depth, of how you're feeling and how it's affecting you and things like that or if it's affecting anything else in your life”, Participant 16, 16 year old female, JIA. 
1.5 Specific questions asked about pain

Remembering how pain had been in the past and breaking pain down into its components to tell healthcare professionals about was seen as difficult when broad questions about pain were asked, as demonstrated in the following reflection; “What type of pain are you feeling, is it niggly pains, is it dull, where have you been feeling those pains. So, it's always how have you been coping in college with your pain. I get a lot of stuff like that. Or how have you been, how has it affected your schoolwork. I get a lot of that. Sometimes I wish there was kind of a way of being able to break it down a bit easier to explain it…I know it has to be asked, how has your pain been, but sometimes that's such a broad question, my brain starts thinking of everything”, Participant 7, 18 year old female, CIPS. Participants described a range of specific questions they were asked by healthcare professionals about pain, including questions about potential pain causes or pain triggers, pain location, pain qualities, pain frequency, pain timing, pain interference with activities (particularly schoolwork), pain coping, pain changes and pain management strategies tried; “He'll ask what type of pain it is, and how often, and what time it comes and if there's a point where it gets worse during the day, and how I'm getting on with the medication that I'm currently on. Yeah…It'll just be like, oh, so like, it's just the same questions”, Participant 12, 15 year old female, CIPS.
“She just asks like what kicks it off, how do I solve it, stuff like that…She asks like is there any certain things that set it off or is there like any movements that you find difficult when it sets of”, Participant 22, 16 year old male, CIPS.

2.1 Appropriate timing of pain communication

Children and adolescents found that they were asked about pain straight away during consultations, when they would have preferred to take their time building up to questions about pain, as described in the below participant accounts; “So, sometimes it is just me dragging myself there and it's kind of I know I just want to be in bed right now. So, bringing up the things that's making me in pain, I don't want to sort of be hit with it straightaway. I kind of want to take my time with it”, Participant 7, 18 year old female, CIPS. “It's definitely the professional. It's like the first thing that's asked when I come and sit down”, Participant 18, 16 year old female, JIA. Participants explained that questions about pain were predominantly asked during physical examinations conducted by healthcare professionals. A repercussion of being asked about pain during physical examinations involving manipulation of joints was that children and adolescents could be in pain as a consequence of the examination; “It always starts with the professional. I was doing my exercises and she could see they were starting to be really hurting me, even though I have to do a certain amount so she could analyse me”, Participant 9, 18 year old female, EDS/Hypermobility. “I think, sometimes, they ask me to do stuff like move my foot around and does that hurt?”, Participant 14, 8 year old male, JIA. For children and adolescents under the care of several specialities, it could become tedious repeating the same information about pain between different specialities seen at similar time intervals, as highlighted below; “Sometimes it can be a bit tedious like if I'm seeing orthopaedics and then I'm going to rheumatology and I'm kind of having to repeat myself, but I know it's got to be done”, Participant 7, 18 year old female, CIPS. Participants were frustrated when they felt that healthcare professionals had not given them the time to say what they wanted to say about pain before leaving their consultation; “I do occasionally (write a list), but then sometimes I think it's like, not worth it anyway…sometimes if you do it, and they don't as, then it just feels even worse, you know what I mean? Because you had all this you wanted to say, and you never got to say it”, Participant 12, 15 year old female, CIPS. It was important that children and adolescents did not feel that the healthcare professional was not listening, as this could be perceived as a lack of interest; “They ask me first…It feels like they are trying to talk about something different and they are not like that interested”, Participant 17, 10 year old male, EDS/Hypermobility. It was also important that children and adolescents did not feel forced to provide information about pain if it was something they did not want to talk about that day; “I don't want to talk about this because when you talk about the pain, it's in the front of your mind. You have to think about it”, Participant 7, 18 year old female, CIPS.
2.2 Difficulties finding the terminology to express pain Children and adolescents sometimes did not know how to describe their pain and they were unsure about what terminology to use in asking or answering healthcare professionals' questions about pain, as discussed in the participant reflections below; “But then she just starts talking to my mum…about stuff I don't know and words I kind of know, but not much”, Participant 14, 8 year old male, JIA. “I didn't really know how to answer them because I didn't really know how to describe it, if you know what I mean… I don't really know how I felt…I mean, sort of, like, what sort of pain is it, I'm like, I don't know, it just hurts”, Participant 16, 16 year old female, JIA. For other children and adolescents, talking about pain had become normal to them and in these instances, they found it easier to have conversations about pain with healthcare professionals. As can be seen in the quotes below, participants explained that with age, they were able to understand how to talk about their pain better and developed more confidence with pain discussions; “I mean obviously because I was a child it was a lot harder to explain the pain, because I didn't really understand it… But those questionnaires they were really helpful…I mean obviously I've been doing it for quite a long time, so I'm used to it… I think it's because I used to see her a lot more and she seemed really, really supportive”, Participant 20, 17 year old female, JIA. “To me it's a normal everyday thing… if I had to mention it to people that weren't doctors or people that weren't my mates I'd be okay…it's easy to talk about”, Participant 24, 13 year old female, EDS/Hypermobility. 2.3 Feeling nervous, scared and/or overwhelmed Participants talked about how they sometimes felt ‘nervous’ and ‘scared’ to report pain to healthcare professionals; “Sometimes I do get a tiny bit nervous… but never full on I don't want to tell you, but sometimes…I've never been like that…I think it's more I'm just a bit scared or something like that, I think… Like, I know I'm not scared, but I just feel weird inside…Like butterflies in your tummy”, Participant 14, 8 year old male, JIA. These feelings appeared to arise from concerns about possible additional or new treatments including new medications, additional exercises to do, or additional referrals and investigations that might find ‘something else’ wrong with them, as demonstrated in the following quotations; “Because if I tell the physiotherapist, then she'll just make me do loads of exercises. I think just sometimes if I'm just not in the mood, I will just not be in the mood and I won't mention it, I'll just keep it to myself”, Participant 1, 17 year old male, JIA. “There's times when I've not wanted to bring up my pain…because I didn't want the stress of knowing that possibly I could have something else wrong with me. Do you get what I mean? The ways they ask me is obviously good, because they don't force anything out of me like, they don't get frustrated or thingy if I have a little emotional tic…Because when I first started to go I didn't want to accept that I had anything wrong with me. I didn't want to accept that I was poorly or that was me”, Participant 3, 18 year old female, Hypermobility. Talking about pain confirmed feelings of difference in relation to peers, as described in the quotes below; “Because it's not very nice, it's…it's not very nice. Sometimes because you don't want to talk about it because it makes you feel different. 
Because normally, people of my age, you don't really have to say if you've been feeling well, if you've been hurting or not. So it kind of just makes you feel quite different from everyone else”, Participant 11, 12 year old female, JIA. “I don't like speaking about the pain…I think it's just because I know I'll start tearing up because then it just makes me feel like I can't do what other people can do”, Participant 15, 16 year old female, CIPS. Pain conversations were also viewed as worrisome to some participants as they reminded them that their pain was going to affect them for the rest of their lives; “I was quite scared to be honest. It was like thrown in the deep end and I didn't really understand what it was…I didn't even realise children could get arthritis, so when they were asking me all these questions it was like oh it's quite scary because it's going to affect my life…I mean that was the one thing I was really worried about because I feel…I know when I'm older I'm probably going to have to have a joint replacement. I can already feel my joints grinding. So that always was on my mind”, Participant 20, 17 year old female, JIA. Children and adolescents reported sometimes generally feeling overwhelmed with talking about pain because they were tired of feeling and thinking about pain, as demonstrated in the below quote; “They are fine questions. They've changed depending on how better or worse the pain has got…Because I'm fed up of my pain and I just don't really want to talk about it”, Participant 17, 10 year old male, EDS/Hypermobility. 2.4 Pain uncertainty Participants talked about the difficulties of managing pain uncertainty. These difficulties could be exacerbated through communication about pain with healthcare professionals, particularly when negative test results were provided; “Well, I'd wish that I could tell them the pain, where it is, and they could snap their fingers and give me a diagnosis…Because with me it's all the uncertainty that gets me wound more than anything. I wish that I knew what was going on with my body, but unfortunately I don't”, Participant 3, 18 year old female, EDS/Hypermobility. “Not with sort of the departments I've been seeing recently. I felt like sometimes when I started going and I was with orthopaedics, it was sort of that…It was sort of that, well, your x‐rays are not really showing anything, your MRI is showing a bit. And then you say, but I'm in pain”, Participant 7, 18 year old female, CIPS. Participants were cognisant of the impact that this uncertainty may have had on the healthcare professionals who were trying to talk to them about pain, suggesting that healthcare professionals may have felt ‘helpless’ at times; “That's really all they could do to control the pain…Saying that you feel pain and they don't know what to do, it might make them feel a bit helpless”, Participant 20, 17 year old female, JIA. Participants tried to manage their own uncertainties by not thinking too much about what ‘might’ happen, as demonstrated in the following quote; “Well, really, when I was younger, I was quite confused, but now I'm just, like, that's my arthritis, I won't worry about it because I mean, I know it's not the end of the world now that I've got it…But I used to think bad stuff about it, but now I'm…just wait and see if this will happen because there's no point thinking in my mind, oh, this is going to happen when you don't know that it's going to happen”, Participant 14, 8 year old male, JIA.
2.5 Pain dismissal Participants felt as though pain was less of a focus in consultations when pain was low or was not believed to be related to their arthritis. They talked about feeling like pain was ‘brushed off’ when it was referred to by the healthcare professional as mechanical pain, even though to children and adolescents these pains felt the same and had the same impact, as seen below; “I feel like they're less focused on it…to me, it's the most important thing…I don't think is it the arthritis pain or is it this pain or is it that pain, I just think my leg's hurting…I think they refer to the other one as a mechanical pain and the arthritis one as arthritis pain. So the word mechanical… Puts metal in my mind… I think sometimes it could have been dismissed even though…I could be feeling a lot of pain and it could be affecting me in my day to day, but if there's no signs of inflammation or there's no swelling that they can see, then it sometimes got brushed off”, Participant 1, 17 year old male, JIA. Participants talked about pain being dismissed when investigations showed no cause for pain or when pain was occurring when the disease was in remission; “Maybe if I'm in remission or not…for the last few appointments. I find it unlikely given my condition. It's known for pain, that's what I'm there for really”, Participant 8, 13 year old female, JIA. Some participants felt that healthcare professionals did not listen to them about their pain and instead, the healthcare professional's response would be to close down conversations about pain in order to talk about something different; “It feels like they are trying to talk about something different”, Participant 17, 10 year old male, EDS/Hypermobility. Participants talked about how professionals could sometimes make it feel as though they were exaggerating their pain, even though children and adolescents recognized that healthcare professionals might not have intentionally meant to make them feel this way; “Because obviously I'm not making it up, but sometimes I feel like, they don't mean to do it, but sometimes the way some of them can talk to you, makes you feel like it's not as bad as you're making it out to be. Yeah…I mean sometimes it's upset me”, Participant 12, 15 year old female, CIPS. Some children and adolescents had been told to ‘push the pain away’ by not thinking about it, as described below; “One of the coping mechanisms we've been kind of using is pushing the pain away. Yes. On those bad days you're just like I don't even want to be here”, Participant 7, 18 year old female, CIPS.
3.1 Informal conversations It was important that healthcare professionals asked about things other than pain during the course of the consultation, such as hobbies, as this created a ‘friendly environment’; “We'll have a five or ten minute conversation about like what we've done on the weekend and everything. Feels dead comfortable that. She always asks about my music every time I come in… So it's a really friendly environment… And I know that the rheumatologist is here to help me and I've got a bond with them we've formed a relationship”, Participant 3, 18 year old female, EDS/Hypermobility. Being asked about other things distracted children and adolescents from questions focused on pain, which they found valuable, as described in the following quote; “She'd ask me how was the pain, but other than that, it was mostly how I am around the house, or how is…? So it wouldn't be focused on just the pain, she'd try and distract me from other things, from just talking about my hip and the pain…She'd ask me how do I think that the pain is triggered or stuff like that”, Participant 15, 16 year old female, CIPS. 3.2 Feeling reassured and cared for Participants spoke about the importance of feeling reassured when they reported pain and its impact to healthcare professionals, with reassurance particularly being provided that being affected by their pain was acceptable; “Sometimes I just want to be told it's okay to have a bad day. It's okay to not want to go out with your friends if you're not feeling too great in yourself.
Because I know that I'll try and push myself and I'll just make myself feel worse”, Participant 6, 9 year old female, CIPS. Being asked about pain gave children and adolescents an opportunity to offload about their experiences and made them feel that professionals cared about them which was comforting to them; “I think it's really important because I think it shows that they care. And it is nice to be asked how I've been coping or how much pain I'm in because I think it just gets it off my chest really. “Cause when I'm in pain I don't really say anything”, Participant 8, 13 year old female, JIA. For some children and adolescents, not feeling like a burden was important as they did not want to feel as though they were a hindrance to healthcare professionals by reporting pain; “I just don't like bothering people… I didn't really want to bother anyone with it. Yeah, like I thought it was unneeded because I didn't think I'd have anything wrong with it”, Participant 23, 12 year old male, JIA. 3.3 Familiarity Familiarity with healthcare professionals within the paediatric rheumatology team was important, as children and adolescents felt that this enabled the professionals to understand their condition better and it could help them to tailor their approaches to pain management, as evident in the quotes below; “I think as well the team's been able to understand more because they've been able to get to know me more…now it's sort of how do we deal with pain if we were in the middle of a drama lesson. Stuff like that. We've been able to tailor it more”, Participant 7, 18 year old female, CIPS. “I think it's because she knows me…so there's less crying”, Participant 15, 16 year old female, CIPS. For other children and adolescents, familiarity with healthcare professionals did not necessarily mean that conversations about pain became easier as it still meant that new questions regarding their diagnosis or prognosis could be identified which could be challenging to experience; “Because it's like every time I go there's a new question that arises. But the rheumatologist has grown up with me in these past four years. And when I see the rheumatologist I don't feel obliged or anything to talk”, Participant 3, 18 year old female, EDS/Hypermobility. Other participants found the familiar format of the questions asked during consultations to be redundant because being asked the same questions every time meant that nothing changed; “Yeah, they ask the same like five questions every time I go in. And, by the time I come out, nothing gets changed on what we're doing…it's a bit of a waste of time”, Participant 21, 12 year old male, EDS/Hypermobility. 3.4 Communicating and managing the emotional impact of pain Several participants talked about the importance of healthcare professionals asking not only about what pain physically limits them from being able to do, but also how pain impacts upon their mental health, as demonstrated in the following interview excerpt; “You hear it a lot with mental health services…Sort of when it comes to pain, I think the mental health side's forgotten a little bit. I just want to sort of reiterate the fact it affects both physical and mental health, and I think they need to be addressed both. Yes, I think that's just sort of been a growing up realisation. And as well in hospital, I think it was the consultant where I said, oh, I've been having a few bad, down days and she kind of just brought up is it pain bad, down days or is it mental bad, down days. 
And that's when it sort of… It was almost a lightbulb moment where I thought, oh, my gosh, I can have bad mental health days. That is normal”, Participant 6, 9 year old female, CIPS. Participants described a variety of ways in which they had learned to self‐manage the emotional impact of their own pain where healthcare professionals had been unable to offer them any advice or solutions. For example, children and adolescents explained how they would try ‘masking’ or ‘distracting’ themselves from pain with other types of pain or activities; “I just mask it with other pain…even though it wouldn't help me in the long run, it would get me out of it for that moment…there's that, so I just have to push myself”, Participant 1, 17 year old male, JIA. Accepting that something was wrong with them meant that they were able to deal with the emotional side of pain better. However, children and adolescents talked about hiding the true extent of pain and not dwelling on the fact that there was something wrong with them in their conversations with healthcare professionals; “I didn't want to accept that I was poorly or that was me. But now that I know what's going on a little bit more I can deal with the emotional side of it a lot better than I could…I'm one of them people that I don't want to dwell on the fact that there's things wrong with me”, Participant 3, 18 year old female, EDS/Hypermobility.
4.1 Challenges interpreting pain advice Participants explained how they did not always understand healthcare professionals' advice on how to manage pain, as evident in the following quotes. Healthcare professionals often gave mixed messages about pain which were difficult to put into practice: for example, if children and adolescents were doing too much they should do less, and if they were not doing enough, they should do more; “So if I go out too much, then it would be, like, take it easy. If I don't go out enough, then just do some physio or something”, Participant 2, 15 year old male, JIA. Participants considered there to be a fine line in knowing their triggers between these two extremes, as described in the following quote; “She says that to me to keep myself busy but not to overdo it. It's finding that fine line of when it's time to stop distracting myself… They tell me to distract myself, like try not to think about it as much, try not to google it, don't google it, whatever you do, don't google your symptoms”, Participant 3, 18 year old female, EDS/Hypermobility. Some participants talked about how nothing from the advice provided to them by healthcare professionals had made their pain better; “Nothing actually makes it better. It sort of gets worse by each hour, it gets gradually worse pain”, Participant 6, 9 year old female, CIPS. 4.2 Anger at healthcare professionals’ pain management explanations Many participants talked about the ‘boom and bust’ cycle that healthcare professionals had used in their pain management explanations to children and adolescents. Some participants felt angry with this explanation, particularly when they felt like they had not done enough activity to ‘bust’, but healthcare professionals advised them that they had; “She'll tell me that I'll know when to stop, but I won't stop because I think that I can do it … Like she's got this thing called bust and booms… I've pushed myself, and other times I haven't, but they think I have. So it just makes me feel that… I don't know if it's anger”, Participant 15, 16 year old female, CIPS. Sometimes healthcare professionals would talk about how other children and adolescents were managing their pain, which participants did not find helpful for explaining their own pain, as highlighted in the following interview excerpt; “Just the same over and over again. Just get a hot bath, get some water bottles, take some painkiller. Because she always kept on going on about other people.
It would always be like other people are struggling as well, and I'm just like yes, I know that…It's just like every session she'd just go on about other people. It would get really annoying”, Participant 24, 13 year old female, EDS/Hypermobility. Some participants felt like they were provided with no advice on how to manage pain, and if they suggested potential solutions themselves, the healthcare professional would disagree, which left children and adolescents feeling that their own perspectives were overlooked, as described below; “Any ideas I suggest is…is a swift no. Or once, they even said, we'll get back to you, and then in like six months' time they came back and said no…Because it's like, you go in, you talk about why it's so bad and then whenever you give any ideas, they just fob you off”, Participant 21, 12 year old male, EDS/Hypermobility.
DISCUSSION
The current study explored experiences of pain communication in paediatric rheumatology from the perspectives of children and adolescents with a broad range of long‐term musculoskeletal conditions. Participants provided insight into how pain communication was coordinated, into barriers to and facilitators of conversations about pain, and into dissatisfaction with elements of pain communication with healthcare professionals. Children and adolescents could remember many of the processes and outcomes involved in pain conversations, and they highly valued conversations about their pain with healthcare professionals. Many were comfortable directly engaging in pain discussions with healthcare professionals because they expected questions about pain to be asked, felt cared about when asked questions about pain and found talking about pain natural as it had become a normal part of everyday life. Challenges in pain communication identified by participants included the way conversations could augment feelings of being different from peers, and concerns about management plans changing as a result of conversations with healthcare professionals. There has been mixed evidence on whether pain communication routinely occurs in clinical paediatric rheumatology settings (Jordan et al., ; Lee et al., ; Lee et al., ). Importantly, the current study found compelling evidence from the perspectives of children and adolescents to suggest that effective pain conversations do take place in paediatric rheumatology settings. For example, multi‐dimensional pain assessment appears to be commonly done at least informally through asking about potential pain causes/triggers, location, qualities, frequency, timing, interference with activities and/or schoolwork, coping, changes and management strategies. There was also evidence in our study of formal pain assessments being used (such as pain rating scales), although children and adolescents emphasized the limits of these tools in describing their pain to others. This study extends findings established in previous research about the roles and values of parents in relaying information to healthcare professionals (Lundberg et al., ). Similar to our study, Lundberg et al. ( ) found mixed perspectives from children and adolescents about the value of parents in pain communication. In this research, some children and adolescents believed parents provided a useful external perspective and they spoke to them frequently about their pain. Other children and adolescents intentionally or unintentionally concealed aspects of their pain experience from parents, such that parents were unaware of the true extent of their child's pain. This presents a challenge for healthcare professionals. Effective conversations about pain, with parents present as stakeholders in these conversations, may first require healthcare professionals to explore and evaluate the child's preferences for the role of their parent in pain communication.
Preferences may vary based on age and developmental level (particularly the cognitive abilities) of the child/adolescent in reporting their own pain (Emerson & Bursch, ). For example, younger children may rely on their parents more than older children to report their pain on their behalf, because they may not have developed the understanding and/or vocabulary to be able to describe pain experiences to a healthcare professional (Chan & von Baeyer, ). Most theories of cognitive development posit that it takes up to around 11 years of age to develop the intellectual capacity to process complex information and then to understand and describe complex concepts or experiences such as pain (Caplan & Bursch, ). Furthermore, the degree to which healthcare interactions are co‐ordinated according to developmentally appropriate principles will further influence the degree of participation by younger patients (Rapley et al., ). Nurturing the child's/adolescent's skills in self‐reporting and self‐managing their own pain is important wherever possible. In the current study, children and adolescents described how pain dismissal was a barrier to effective pain communication. These findings contribute to the growing evidence base of pain dismissal occurring during childhood and adolescent healthcare consultations (Defenderfer et al., ; Edmond & Keefe, ; Igler et al., ; Lang et al., ). In this study, children and adolescents felt that pain was overlooked when low in intensity or when deemed by healthcare professionals to be ‘mechanical’ rather than due to the underlying rheumatological condition. This occurred despite children and adolescents explaining that these pains felt the same and had the same impact. Pain dismissal has implications for future pain communication as the literature suggests that children and adolescents feel a sense of hostility towards the individual who dismissed their pain, which ultimately damages the relationship as they become disengaged and less likely to adhere to recommended management plans (Defenderfer et al., ). Dismissal, and its subsequent impact on relationships with healthcare professionals, infiltrates and affects other aspects of care. In past literature, adolescents have reported significant negative reactions to pain dismissal and subsequent invalidation of their pain experiences by healthcare professionals, such as depression, anxiety, anger and feeling isolated (Wakefield et al., ). However, some children and adolescents in this study appeared to appreciate that healthcare professionals may not have intentionally invalidated their report of pain. Past research from healthcare professionals’ perspectives has found that they can sometimes feel helpless, frustrated and uncomfortable with handling the unexplained nature of pain within consultations (Lefèvre et al., ), in line with how children and adolescents interpreted elements of pain dismissal in the present study. Our findings on children and adolescents' acceptance of healthcare professionals' explanations for pain were similar to themes in Sørensen and Christiansen's ( ) study, which highlighted that children and adolescents feel anxious when given conflicting explanations and pain management advice. However, our study found children and adolescents' reactions were more closely related to anger, as opposed to uncertainty.
Sørensen and Christiansen's study also found that children and adolescents experienced despair when presented with negative investigation results that highlighted they were different from their peers, consistent with children and adolescents' reactions to inconclusive tests in the present study. When presented with the option of receiving psychological support for pain management later on in their care, children and adolescents in both this and Sørensen and Christiansen's study felt as though healthcare professionals were reluctant to introduce any further management strategies themselves. An important implication of this finding is that psychological support should be integrated into pain management as early as possible to emphasize the importance of psychological support and reduce later barriers to access. Findings should be interpreted in light of several study limitations. One potential limitation of the current study was that experiences about pain communication came from children and adolescents who were patients at specialist tertiary paediatric rheumatology centres across the United Kingdom, which had reputations for providing high‐quality healthcare. Thus, findings may not generalize to other settings where children and adolescents with chronic musculoskeletal conditions receive care (e.g., primary or secondary care settings). In addition, a wide age range was included in the sample but there were only small numbers of children and adolescents representing each age group, which makes it challenging to compare themes by age or to identify unique developmental preferences in pain communication. Future research should aim to investigate whether there are age‐ and/or development‐specific, or even diagnosis‐specific, pain communication preferences. Future research should also explore parental experiences and perspectives on pain communication during healthcare consultations, as well as the varying role of parents and the different values placed upon their position (by children/adolescents) within these often triadic healthcare communication encounters. These avenues remain unexplored despite parents being key stakeholders in communication during consultations.
4.1 Recommendations for the future
The findings of this study highlight a range of effective and ineffective pain communication approaches from the experiences of children and adolescents with chronic musculoskeletal conditions. We propose several recommendations for healthcare professionals communicating about chronic pain specifically (taking into account that there are general communication recommendations elsewhere [Kim & White, ]). Recommendations are grouped according to the themes identified;
4.1.1 Co‐ordination of pain communication
Ask about pain in every consultation, as children and adolescents generally expect this.
Allow the child/adolescent to settle into the consultation before beginning to ask questions specifically about pain.
Ask children/adolescents for more than one average pain rating to take into account temporal differences in pain and pain that can be provoked vs unprovoked. For example, healthcare professionals could also ask about pain at different times of the day/week and/or pain before, during and/or after particular activities.
Break down questions about pain into different components (e.g. ask about location, intensity, qualities and interference individually rather than asking broadly about pain).
Explore the child/adolescents' preferences for their parents to be included or not included within pain reporting.
4.1.2 Barriers to pain communication
Avoid forcing information (if appropriate) about pain in circumstances where children and adolescents seem reluctant to discuss pain in detail.
Avoid verbal or nonverbal cues suggesting frustration or dismissal of pain reports, and instead convey a willingness and commitment of time to discuss pain that the child/adolescent wants to report or expand upon.
4.1.3 Facilitators of pain communication
Balance discussion about pain with asking about non‐medical topics such as hobbies/interests of the child/adolescent, if time allows.
Ask children and adolescents about how pain is affecting them emotionally and cognitively, in addition to asking about how it physically limits them.
Make sure terminology is appropriate to the developmental level of the child/adolescent and make sure that the vocabulary used for pain explanations resonates with their own descriptions and explanations by asking what they understand by particular terms (e.g., the term ‘mechanical pain’ may be problematic).
4.1.4 Dissatisfaction with pain communication
Provide clear and tailored pain management advice (e.g., what is considered to be too much or too little activity for that child/adolescent).
Elicit children and adolescents' own perceptions of their limits and their own ideas about how they think they could best manage their pain.
These recommendations will improve pain communication between healthcare professionals, children and adolescents, specifically those with chronic musculoskeletal pain who are managed in paediatric rheumatology settings. However, future research efforts which focus on the translation and mobilization of our findings into real‐world practice are needed. There is a need to address the gap between our improved understanding of paediatric pain derived from research such as this, and current clinical practices (Chambers, ). Additional efforts are required to translate research findings and implement evidence‐based recommendations in clinical settings.
CONCLUSIONS This study presents a comprehensive overview of children and adolescents' experiences of pain communication in paediatric rheumatology in the United Kingdom, highlighting the importance and value of these processes to those who experience chronic musculoskeletal pain. Our findings highlight a range of effective and ineffective assessment and communication approaches, which have informed recommendations to improve healthcare professionals' communication about pain in line with children and adolescents' expectations and needs. All authors were responsible for the conception and study design. JMcD was involved in participant recruitment. RRL performed the data collection, analysis and manuscript writing. DM contributed to the analysis of the data. All authors discussed the results and critically revised the manuscript. This work was supported by a Foundation Fellowship award from Versus Arthritis (Grant 22433). Aspects of this work were also supported by funding from the Centre for Epidemiology Versus Arthritis (Grant 20380) and the NIHR Manchester Biomedical Research Centre. All authors declare no conflict of interest. Supporting information is available online (Appendix S1 and Appendix S2).
Isthmus morphology influences debridement efficacy of activated irrigation: A laboratory study involving biofilm mimicking hydrogel removal and high‐speed imaging
9d356fdb-d896-4126-aba6-e3997296de37
10092478
Debridement[mh]
Irrigation is an essential aspect of chemomechanical root canal preparation. The irrigating solutions serve to clean the parts of the canal system that have escaped the mechanical instrumentation (Peters et al., ), to remove the smear layer (Mader et al., ) and accumulated hard tissue debris produced by these instruments (Paqué et al., ), along with removal of bacteria, toxins, and organic debris from the root canal in order to prevent or heal periapical disease. The root canal anatomy of the mesial root of mandibular molar teeth can be extremely challenging, in part due to the high prevalence of isthmuses in these roots (Keles & Keskin, ; Tahmasbi et al., ). Isthmuses are narrow corridors or transverse anastomoses connecting two root canals (Weller et al., ). The presence of isthmuses was found to be as high as 85% for both mandibular first and second molars (Fan et al., , , ; Von Arx, ) and these isthmuses are found most frequently in the 3–6 mm section from the apex (de Pablo et al., ; Gu, Kim, et al., ; Gu, Wei, et al., ; Mannocci et al., ). Isthmus dimensions may vary: their median (Q1, Q3) major and minor diameters are 2.725 (2.181, 3.204) mm and 0.070 (0.050, 0.113) mm, respectively (Yin et al., ). Isthmuses represent a considerable challenge for root canal shaping, cleaning and obturation. Their ribbon shape, confined dimensions and lateral extension from the main canal make them inaccessible to mechanical preparation (Leoni et al., ). This results in frequent clogging with dentinal debris (Paqué et al., ), whose removal with conventional irrigation methods is challenging (Leoni et al., ; Neelakantan et al., ). The residual micro‐organisms and debris contained in the isthmus might lead to failure of orthograde treatment (Alves et al., ), failure of endodontic microsurgery (Kim et al., ) and leakage of the root filling (De–Deus et al., ). The conventional root canal irrigation method is by means of a syringe with a needle. However, the penetration of the irrigant in the apical third and beyond the main canal is limited (Versiani et al., ), resulting in suboptimal cleaning with needle irrigation (NI). Irrigation efficacy can be optimized using irrigant activation techniques. These facilitate irrigant flow and distribution within the complex three‐dimensional anatomy of the root canal system (Gu, Kim, et al., ; Gu, Wei, et al., ). Eddy (EDDY) is a sonically activated irrigation device (Eddy; VDW). It drives a smooth polymer tip that oscillates at 6000 Hz by means of a sonic scaler. The manufacturer declares that cleaning is enhanced by cavitation and acoustic streaming within the irrigant, but it has been reported that no cavitation occurs with sonically oscillating instruments because the movement of the tip is below the cavitation threshold (Macedo et al., ; Swimberghe et al., ). Laser‐activated irrigation (LAI) is another activation method, shown to be very efficient at removing debris (Arslan et al., ; De Moor et al., ; Swimberghe et al., ). Pulsed erbium lasers produce optical cavitation, causing expansion and implosion of a vapour bubble at the fibre tip (Song et al., ), which is responsible for very rapid fluid movement, secondary cavitation bubbles and photoacoustic effects (Blanken et al., ; Gregorcic et al., ). More recently, it has become possible to amplify this phenomenon by generating a second pulse very shortly after the first one, thereby accelerating the collapse of the first bubble and increasing the photoacoustic effects (SWEEPS; Fotona) (Lukač & Jezeršek, ).
Previous studies have addressed the cleaning of isthmuses by various irrigant activation devices (Iandolo et al., ; Rödig et al., ; Rodrigues et al., ; Swimberghe et al., ). The majority of these studies investigated the removal of hard tissue debris from the isthmus in mesial roots of extracted mandibular molars using micro‐CT. A recent systematic review with network meta‐analysis addressing hard tissue debris reduction from the mesial roots of mandibular molars found that none of the activation methods rendered the canal anatomy completely free of hard tissue debris, but laser‐activated irrigation groups fared better than activation protocols based on intracanal placement of oscillating tips or needles (Natansabapathy et al., ). Swimberghe et al. ( ), investigating the removal of a biofilm‐mimicking hydrogel containing dentine debris from the isthmus in an acrylic isthmus model, came to similar conclusions. Malentacca et al. ( ) investigated the efficacy of different activated irrigation techniques in removal of pulp tissue from the isthmus in a transparent extracted tooth. They found ultrasonically driven techniques to be superior to negative apical pressure. Alsubait et al. ( ) investigated isthmus cleanliness in horizontal sections of the roots of mandibular molars after different adjunctive irrigation steps. They found that passive ultrasonic irrigation, mechanical activation with the XP‐endo Finisher and manual dynamic irrigation with gutta‐percha equally improved canal cleanliness. Although such experiments provide valuable data, they only allow a partial understanding of the mechanisms involved during activation of the irrigant, as human teeth lack standardization and studies using artificial isthmus models use single, simplified anatomies that do not consider the large variability in isthmus morphology found clinically. In order to better understand the influence of various morphological isthmus parameters on irrigation, it is necessary to provide a standardized model with biomimetic morphology. Therefore, the aim of this study was to investigate the influence of isthmus morphology (length and width) on the removal of an artificial biofilm by LAI compared with EDDY and NI as control in new, realistic 3D‐printed isthmus models, and to explain the mechanisms of action of LAI and EDDY by means of high‐speed imaging in these models. The null hypothesis was that there is no influence of isthmus morphology on the removal of artificial biofilm during activated irrigation. The manuscript of this laboratory study has been written according to the Preferred Reporting Items for Laboratory studies in Endodontology (PRILE) 2021 guidelines (Figure ). 3D‐printed isthmus models A micro‐CT STL dataset of an intact mandibular first molar with a resolution of 20 μm served as the basis for development of the model. After loading the data in CAD software (Catia; Dassault Systèmes), only the two mesial root canals were kept and used as a pattern/template to design a fully editable 3D virtual model of the root canal system. The mesial canals were modified in order to obtain an apical canal size of 0.30 mm, a 0.06 taper and a 23° curvature (Schneider, ). A ribbon‐shaped isthmus was designed, connecting the two canals and providing a Vertucci type VI configuration to the root canal system. The floor and the roof of the isthmus were positioned at 3 and 6 mm from the apical foramina, respectively.
The entire isthmus was given a convex shape in both the buccolingual and mesiodistal directions, in accordance with isthmus morphology observed in vivo (Keles & Keskin, ). An access cavity (width: 3 or 7 mm, length: 9 mm, height: 5 mm) was designed, mimicking the anatomical reality. The canals were 15 mm long (Figure ). Four different root canal systems were designed according to the length (L, buccolingual distance) and width (W, mesiodistal distance) of the isthmus: long‐wide (L: 4 mm; W: 0.4 mm), short‐wide (L: 2 mm; W: 0.4 mm), long‐narrow (L: 4 mm; W: 0.15 mm) and short‐narrow (L: 2 mm; W: 0.15 mm) (Figure ). The 3D models were created in a virtual block. A removable part was designed at the level of the isthmus to allow access to the isthmus. Screw holes were accommodated to allow firm attachment of the removable part. Data were transferred to stereolithographic software (Preform; FormLabs) and equipment (3D Form 2, FormLabs) in .STL format for 3D‐printing (resolution: 25 μm, resin: Clear V4). The 3D‐printed models were washed in isopropyl alcohol for 20 min to remove noncured resin (Form Wash, FormLabs). Finally, the models were postcured at 60°C for 30 min (Form Cure, FormLabs) and the supporting pillars were removed. Artificial biofilm The artificial biofilm was a hydrogel based on the work of Macedo et al. ( ) and modified by Swimberghe et al. ( ) by incorporating dentine debris. Three grams of gelatin (Merck) and 0.03 g hyaluronan (sodium hyaluronate 95%; Fisher) were dissolved in 22.5 ml deionized water at 40°C under stirring. Hollow glass spheres (0.1 g; diameter: 10 μm; density: 1100 kg/m³; Sigma Aldrich) and a red dye were added to the mixture. Dentine debris was obtained by grinding bovine dentine and sieving the powder with a mesh 100 sieve to obtain particles smaller than 150 μm. Debris was added to the hydrogel (30/70 w/w) to simulate the hard tissue debris accumulation in the isthmus that occurs during mechanical preparation. The artificial biofilm was gently positioned in the isthmus under an operating microscope. The volume of artificial biofilm was standardized and corresponded to the volume of the isthmuses. The 3D‐printed models were closed and tightened with two screws and two nuts, the apical foramina were sealed with wax to obtain a closed system, and the models were filled with water. Experimental groups After assembly, the models were randomly assigned to one of three irrigant activation groups. This was done for the four isthmus designs, and each irrigation condition was repeated 20 times, in accordance with the sample size calculation in Choi et al. ( ), yielding a total of 240 tests. The irrigant used was water. In the NI group (control), canals were irrigated by means of a 3‐ml manual syringe equipped with a 30G notched needle (Vista Appli‐vac 30G; Vista Dental Products).
In the LAI samples, an Er:YAG laser (LightWalker; Fotona), equipped with an H14 handpiece (Fotona) and a flat fibre tip (SWEEPS 400/9), was used to activate the irrigant in auto‐SWEEPS mode (40 mJ, 15 Hz, 0.6 W, air and water turned off). The fibre tip was positioned 2 mm above each root canal entrance, and activation was performed 2 × 30 s with irrigant being continuously replenished during activation. The canals were flushed with 3 ml (3 ml/20 s) irrigant in between and after each activation cycle using a 30G notched needle (Vista Appli‐vac 30G). Irrigation and activation were performed in both the mesiobuccal and mesiolingual canals. During irrigation, the operator was blinded to the isthmuses by means of a silicone cover covering the outer walls of the models. Determination of hydrogel removal Standardized high‐resolution (3216 × 2136 pixels) images of the model were taken before and after irrigant activation using a custom‐made setup allowing the exact repositioning of each 3D‐printed isthmus model (Nikon D300 with a 120‐mm macro lens (Medical Nikkor [1/1], f = 4)). The hydrogel‐covered isthmus area was determined in each image using image analysis software (Fiji, https://imagej.net/Fiji ). To this end, the isthmus was outlined and a segmentation procedure (Simple Interactive Object Extraction in Fiji) was then carried out in this area to select the hydrogel. The percentage of remaining hydrogel was calculated as the number of selected pixels after activation relative to the number of pixels before activation, and the percentage of biofilm removal was deduced from this. Values were stored in a database (IBM SPSS Statistics version 27; SPSS Inc.). High‐speed imaging and analysis High‐speed imaging was used to analyse irrigant flow in the isthmus and to detect the presence of transient or stable cavitation with each activation method. This was done for both open and closed isthmus models. In order to produce a closed isthmus, one of the canals was blocked by means of silicone. The isthmus models were positioned in front of a Fastcam SA‐X2 camera (Photron) equipped with a micro lens zoom system. A high‐intensity LED light source (LA‐HDF7010; Hayashi) illuminated the model from behind. Recordings were made both in empty isthmus models and in models with the isthmus closed at one side. In order to visualize irrigant flow, glass microspheres (0.1 g; diameter: 10 μm; density: 1100 kg/m³; Sigma Aldrich) were added to the irrigant, and irrigant activation cycles were recorded at a frame rate of 5000 fps. In order to study transient and stable cavitation, activation cycles were recorded at a frame rate of 30 000 fps, with no glass particles added. In this way, high‐speed videos were obtained for the two activation methods in the four isthmus morphologies. Photron Fastcam Viewer V4 software (Photron) was used to measure the maximum speed of the particles (three measurements per video) and to describe the flow pattern and cavitation phenomena in the isthmus. Statistical analysis Hydrogel removal data were analysed using SPSS Statistics software (IBM) (α = 5%). As the data were not normally distributed (according to the Kolmogorov–Smirnov test), the Kruskal–Wallis test followed by Dunn's multiple comparison post hoc test with Bonferroni correction was performed to test for differences between isthmus configurations and between activation methods.
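To make the quantification step concrete, the following is a minimal sketch, not the authors' actual pipeline (which used Fiji and SPSS), of how the removal percentage described above can be computed from two exported binary masks. The file names, the non-zero-pixel mask convention and the use of the imageio and NumPy libraries are assumptions for illustration only.

```python
# Illustrative sketch: percentage of remaining hydrogel and of hydrogel removal
# from two binary masks exported after the Fiji segmentation step
# (non-zero pixel = hydrogel, 0 = background). File names are hypothetical.
import numpy as np
import imageio.v3 as iio

def hydrogel_removal(mask_before_path: str, mask_after_path: str) -> float:
    """Return the percentage of hydrogel removed within the isthmus ROI."""
    before = iio.imread(mask_before_path) > 0   # boolean mask, pre-activation
    after = iio.imread(mask_after_path) > 0     # boolean mask, post-activation

    pixels_before = int(np.count_nonzero(before))
    pixels_after = int(np.count_nonzero(after))
    if pixels_before == 0:
        raise ValueError("Empty pre-activation mask: nothing to remove.")

    remaining_pct = 100.0 * pixels_after / pixels_before
    return 100.0 - remaining_pct                # percentage of biofilm removal

if __name__ == "__main__":
    print(f"Removal: {hydrogel_removal('mask_before.png', 'mask_after.png'):.1f}%")
```

Group comparisons of such removal percentages could then be run with, for example, scipy.stats.kruskal, with Dunn's post hoc test available in third-party packages such as scikit-posthocs; the study itself performed these tests in SPSS.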
Hydrogel removal Hydrogel removal by the three irrigation protocols in the four isthmus types is summarized in Table and graphically presented in Figure . LAI, EDDY and NI showed significantly greater biofilm removal in short‐wide isthmuses than in narrow isthmuses ( p < .001). No significant difference in biofilm removal was found between long and short isthmuses of the same width, regardless of irrigation protocol ( p > .05), except for LAI, which showed greater biofilm removal in short‐wide than in long‐wide isthmuses ( p < .001). LAI and EDDY resulted in significantly greater biofilm removal than NI in every isthmus configuration ( p < .001), but no significant difference was found between LAI and EDDY ( p > .05). Representative postoperative images of each isthmus type are shown in Figure . High‐speed imaging EDDY produced a continuous and steady flow, while the flows produced by LAI were pulsed: short and fast flows were followed by periods of relatively little flow (Videos and ). Irrigant flow patterns in the isthmus are shown in Figure . In open isthmuses, the general pattern with both activation devices was fluid displacement from the activated to the other root canal. Activation by EDDY produced revolving currents (eddies) in the isthmus; their location and number differed with the isthmus type (Figure ). Maximum fluid speeds ranged between 0.3 ± 0.0 m/s (in the long‐narrow type) and 2.1 ± 0.4 m/s (in the short‐wide type). Fluid speeds with LAI ranged between 0.5 ± 0.1 m/s (in the long‐wide type) and 3.8 ± 0.9 m/s (in the short‐narrow type). The short‐wide isthmuses displayed eddies for both activation methods and showed the most marked ones (Figure ) (videos provided in Videos , ). In closed isthmuses, fluid displacement from one canal to the other did not take place. EDDY, as in the open isthmuses, produced steady revolving currents, with fluid speeds now ranging between 0.1 ± 0.0 m/s and 0.6 ± 0.1 m/s. LAI showed very rapid horizontal back‐and‐forth movements of the irrigant with every pulse (fluid speeds from 0.5 ± 0.1 m/s to 3.9 ± 0.1 m/s). LAI produced higher fluid speeds in closed short isthmuses than in long isthmuses, and higher speeds than EDDY. The high‐speed recordings at 30 000 fps revealed the events taking place in the canals and the isthmus during LAI and EDDY activation. These demonstrated stable cavitation of pre‐existing bubbles in each isthmus with EDDY, oscillating at 5988 Hz (Video ). With LAI, there was very limited liquid movement in the canal system during the period in between two pulses (pulse pairs, in fact). When the laser pulse produced a primary cavitation bubble above the canal entrance, the appearance/growth of bubbles on the order of 0.5 mm over the entire length of both canals, and 0.25 mm in the isthmus, was seen. These bubbles appeared rapidly and imploded soon after implosion of the primary bubble.
This caused a vertical liquid movement in the canals, and a horizontal movement in the isthmus. The second pulse of the pair, arriving at a variable delay time after the first pulse, evoked the same events, hence causing a second sequence of very rapid irrigant movement (video provided in Video ).
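As a side note on how such speed values are derived from the recordings, the short sketch below converts a tracked particle displacement between frames into a velocity. The pixel size and displacement used here are invented example numbers, not measurements from this study, and the calculation is only a simplified stand-in for the Photron Fastcam Viewer measurements.

```python
# Illustrative only: particle velocity (m/s) from a displacement tracked
# between high-speed frames. All numerical values below are hypothetical.

def particle_speed(displacement_px: float, pixel_size_mm: float, fps: float,
                   frame_gap: int = 1) -> float:
    """Velocity in m/s from a displacement measured over `frame_gap` frames."""
    displacement_m = displacement_px * pixel_size_mm / 1000.0
    time_s = frame_gap / fps
    return displacement_m / time_s

# Example: a particle moving 40 px between consecutive frames at 5000 fps,
# with an assumed pixel size of 0.01 mm, travels at 2.0 m/s.
print(particle_speed(displacement_px=40, pixel_size_mm=0.01, fps=5000))
```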
The present data demonstrate that the three irrigation methods showed a better cleaning of short‐wide isthmuses than narrow isthmuses. No differences in hydrogel removal were found between long and short isthmuses of the same width except for LAI in the wide isthmuses. The null hypothesis, stating that there is no influence of isthmus morphology on the removal of artificial biofilm, has to be rejected. High‐speed imaging disclosed steady irrigant eddies in the isthmus during EDDY activation and pulsed horizontal oscillation during LAI. This study is the first to investigate the impact of isthmus anatomy on the efficacy of irrigant activation. In addition, newly developed 3D‐printed models were used, displaying a very realistic 3D root canal anatomy, reproducing complex shapes and irregularities found in human teeth. The use of these models helps to overcome the problems due to high morphological variability of natural teeth (Caron et al., ) and provides a very high degree of standardization by fabricating strictly identical root canal systems with different isthmus shapes but with otherwise identical dimensions (Macedo et al., ). While similar isthmus models have been used in previous studies (De Meyer et al., ; Meire et al., ; Swimberghe et al., ; Swimberghe, Buyse, et al., ) providing valuable information on the cleaning efficiency of different irrigant‐activation devices in straight‐, curved canals and isthmuses, the design of these models is basic, lacking any canal or isthmus curvature. In this respect, the current model takes the root canal and isthmus morphology much closer to the anatomical reality. The removable part facilitates direct application of the hydrogel in the isthmus, and the transparency of the models enables accurate determination of hydrogel removal (Swimberghe, Buyse, et al., ). The position of the isthmuses was standardized at 3 mm from the apical foramina (Gu, Kim, et al., ; Gu, Wei, et al., ; Villas‐Boas et al., ), and their dimensions were determined to correspond to the average maximum and minimum length and width values found in the literature (Keles & Keskin, ). The curvature of the root canals was set at 23° to match with generally reported curvatures in mandibular molars as well (Gu et al., ). The isthmus in this study was irrigated from both sides to simulate clinical reality. The biofilm‐mimicking hydrogel used to fill the isthmuses has been used in previous studies (Swimberghe et al., ; Swimberghe, Buyse, et al., ) and is based on the hydrogel described by Macedo et al. ( ), known as a highly standardized and controllable natural biofilm substitute. Bovine dentine debris was added, in order to approach the clinical situation where hard tissue debris generated by mechanical preparation is forced into the isthmus (Paqué et al., ). The present experimental setup entailed an extreme standardization, allowing the most subtle and differentiating comparison between the various test conditions. In addition to anatomical standardization of the models, the hydrogels were prepared under the same thermal conditions in order to control their viscoelastic behaviour and guarantee their stability (Macedo et al., ). Hydrogel placement was standardized as well. The irrigant‐activation protocols were standardized with regard to timing, tip positioning, flow rates, adherence to the manufacturer's instructions and blinding of the operator. The acquisition of the images was also standardized in terms of sample positioning, light and background. 
The image analysis was the same for each group by using the same saved ROIs and segmentators in FIJI to select the hydrogel. This minimized operator bias during the image analysis procedure. Overall, short‐wide isthmus types were best cleaned, regardless of the irrigation method. This result is not illogical, as wide isthmuses offer more hydrogel contact surface and more space for irrigant flow compared with small isthmuses, and the short intercanal distance poses less challenges than the long one. NI had very little effect on the hydrogel in the narrow isthmus types. The absence of significant differences in cleaning between long and short isthmuses with a fixed width suggests that the width of the isthmus is more critical than its length. EDDY and LAI cleaned the isthmuses markedly better than NI. This corroborates earlier findings by De Moor et al., , Conde et al., , Haupt et al., , Güven et al., and Swimberghe, Buyse, et al., , and supports the use of an adjunctive irrigant activation step after canal preparation. While the mean/median hydrogel removal by LAI was higher than that by EDDY in every condition, this difference was not significant. When applying LAI, the fibre tip is positioned above the root canal entrance. This is in contrast with EDDY, requiring intracanal tip positioning to exert any effect in the isthmus. LAI has been shown very efficient in removing debris in straight canals (de Groot et al., ; De Moor et al., ), isthmuses (Swimberghe et al., ) and curved canals (Swimberghe, Buyse, et al., ). The irrigant dynamics and flow generated during LAI seem not affected by the root canal curvature (Peeters et al., ). In contrast, introduction of an EDDY tip in a curved canal results in tip deflexion and wall contact, generating tensions that reduce its movement and oscillation, leading to a weaker streaming of the irrigant (Swimberghe, Buyse, et al., ). The absence of significant differences between EDDY and LAI may also be due to high Q1‐Q3 ranges in both groups. To date, no data are available regarding the physical mechanisms involved during LAI and EDDY activation in isthmuses. Therefore, high‐speed imaging was used to elaborate on this. A typical shadowgraphy setup was followed that is relevant to visualize bubbles and solid particles that have a different refractive index than water (Gregorcic et al., ). This permits visualization of cavitation, but gives no information about the streaming of the irrigant. Therefore, glass particles were added in a second run of high‐speed imaging to disclose the fluid movements within the isthmus. In open isthmuses, EDDY and LAI provoked irrigant displacement from the activated canal towards the empty root canal. Closed isthmuses were also investigated to mimic the clinical situation of an isthmus clogged with organic and inorganic debris at the start of activation. No such streaming was observed in closed isthmuses. The recordings disclosed steady circular irrigant eddies in the isthmus during EDDY activation. This was the case in every isthmus morphology. In this respect, EDDY does credit to its name. Fluid speeds generated by EDDY were higher in wide isthmuses compared with narrow isthmuses. The fluid pattern observed with LAI was markedly different: LAI generated 2 distinct and repetitive flow patterns: pulsed horizontal flow in open isthmuses, and horizontal back‐and‐forth flow in closed isthmuses (see additional material). Su et al. 
( ), using particle image velocimetry, demonstrated that LAI activates a so‐called “breath mode” during irrigation, represented by a back‐and‐forth vertical liquid movement along the main root canal in a closed system. In the situation of a closed isthmus, these movements become horizontal as the isthmus becomes a lateral extension of the main canal. This “breath mode” was not observed in open isthmuses. Instead, a circular flow (from the activated canal, through the isthmus, in the nonactivated canal, over the pulp chamber, back in the activated canal) was observed. This is likely explained by the fact that the first (apically directed) fluid movement of the breath mode does not encounter any opposing pressure and is evacuated through the isthmus in the connected canal. In closed isthmuses, LAI had higher maximum particle speed than EDDY, and the speed was the highest in the short isthmuses. This is in line with the biofilm removal patterns observed in this work, showing that short isthmuses were also the best cleaned by LAI. The mean values measured remain in the range of those measured in previous works at 0.43 m/s and 1.3 m/s in main root canals (Koch et al., ). Interestingly, with EDDY, fluid speeds seemed higher in wide isthmuses compared with small isthmuses, while with LAI, the opposite pattern was observed. When matching the fluid speed with the biofilm removal, higher speeds resulted in higher hydrogel removal for EDDY, but not in the case of LAI. Other factors than fluid speed might thus be involved in debridement and should be further investigated. For example, acoustic effect, typically generated by LAI may account for this. The present methodology however is unable to disclose these. The present methodology also has its drawbacks. Despite its recognition as a biofilm substitute for research purposes, the hydrogel offers a simplified representation of the natural biofilm and it has a weaker adhesion to the synthetic model's walls than a natural biofilm in a tooth (Boutsioukis et al., ). In this respect, the use of a mature multispecies endodontic biofilm is more realistic, but also comes with challenges to adequately quantify biofilm removal (Swimberghe, Crabbé, et al., ). Another potential limitation of this work is the necessity to use water instead of NaOCl as irrigant because the NaOCl decolorizes and provokes a surface dissolution of the hydrogel. In this respect, the present results represent mainly the physical action/effect of the various activated irrigation regimen, not any chemical effect. There are other potential consequences of the use of water instead of NaOCl as the irrigant. de Groot et al. ( ) observed larger bubbles and more intense cavitation when laser‐activating NaOCl compared with water. Similarly, Cai et al. ( ) demonstrated stronger cavitation effects and fluid dynamics with 1% or 2.5% NaOCl than with saline during PIPS activation. This suggests that the debriding action of LAI potentially is higher with the use of NaOCl. When using activation systems based on oscillating tips, the use of NaOCl can also affect the debriding action due to the formation of small gas bubbles. These bubbles may hinder debridement, as described for ultrasonically activated irrigation by Macedo et al. ( ). Whether this also affects EDDY activation, remains unclear. In addition, fluid speed measurements were not as accurate as those obtained by more advanced techniques such as particle image velocimetry (Su et al., ). 
Within the limitations of this laboratory study, it is possible to conclude that isthmus morphology influences the debridement efficacy of activated irrigation. Short‐wide isthmuses were the easiest to clean, while narrow isthmuses were the most challenging to clean. Isthmus width seems to be a more critical anatomical parameter than isthmus length. LAI and EDDY resulted in greater hydrogel removal than NI. EDDY produced eddies and stable cavitation in the isthmus, and LAI showed transient cavitation and pulsed horizontal flow. Conception: LR, MM; Design: LR, JD, LR; Funding: LR; Materials: LR, JD, MM; Data collection: LR; Analysis: LR, MM; Literature review: LR, MM; Writers: LR, JD, MM. This study was supported by the French government through the Programme Investissement d'Avenir (I‐SITE ULNE/ANR‐16‐IDEX‐0004 ULNE) managed by the Agence Nationale de la Recherche. The authors declare no conflict of interest. Supporting information is available online: Figure S1 and Videos S1–S18.
Dental floss ties for rubber dam isolation: A proposed classification and a new technique
f763aa5e-44a9-4c90-a68a-21ade5a67122
10092548
Dental[mh]
Generally, a dental floss tie consists of one or two loops, a knot, and two free arms. The term overhand knot has been used for surgical purposes to provide a secure stopper when intending the suture to be permanent. The author (O.A) proposes a new classification of dental floss ties as described below: 1. Simple ties: subdivided into: I. Traditional tie (surgeon's tie): A knot is created after placing the dental floss around the neck of the tooth gingival to the height of the contour. This tie is similar to an interrupted surgical suture in shape. A suitable length (20‐30 cm) is cut and doubled up into a U shape, then placed around the neck of the tooth and a double overhand knot is created clockwise. The knot is then tightened securely around the tooth followed by making a single overhand knot counterclockwise. , The main advantage of this tie is the quick and easy application; however, it may become loose soon after application. Figure demonstrates the step‐by‐step procedure of making this traditional tie and its clinical application. II. Single‐loop self‐ligating tie: This knot is prepared outside the patient's mouth. A piece of dental floss of suitable length (20‐30 cm) is cut and doubled up into a U shape. A loop is then created by passing the curved end of the dental floss over the two free ends as shown in Figure . The curved end of the dental floss is then inserted inside the loop and pulled out completely to create a loose tie. The loop is placed around the cervical area of the tooth and then tightened toward the neck of the tooth by pulling its free ends apart until it is tight and secure. III. Double‐loop self‐ligating tie: This tie is similar to the single‐loop self‐ligating tie but with two loops. A piece of dental floss of suitable length (20‐30 cm) is doubled up into a U shape, and a loop is created by passing the curved end of the dental floss over the two free ends. However, unlike the single‐loop self‐ligating tie, the curved end is inserted inside the loop and only partially pulled out. The curved end of the dental floss is then opened up, brought over around the entire knot until it encircles the free ends. Finally, the free ends are pulled to tighten the double loops downward. Figure illustrates the step‐by‐step procedure for preparing this tie. Generally, the double‐loop self‐ligating tie is easier to tighten as it enables the dentist to pull the free ends either apart or together to tighten the knot, allowing them to tighten it with one hand if needed. Conversely, the single‐loop self‐ligating tie only allows the dentist to tighten the knot by pulling the two ends apart using both hands. The double‐loop tie is also tighter and more stable around the neck of the tooth. 2. Compound tie: This tie, which was proposed and implemented by Dr. Osama A. Alkhatib, consists of a single‐loop or double‐loop self‐ligating tie attached to one, two, or multiple overhand knots followed by another overhand knot. This tie is designed to isolate prepared teeth for crowns, bridges, or cavities with deep margins, especially palatal and lingual cavities. It provides complete access to prepared teeth and appropriate isolation by the RD. Figure shows the step‐by‐step procedure of making a compound tie (1) that consists of a single‐loop self‐ligating tie attached to an overhand knot, followed by another overhand knot, while Figure demonstrates a compound tie (2) consisting of a double‐loop self‐ligating tie attached to overhand knot, followed by another overhand knot. 
In some cases, it might not be feasible to use clamps, necessitating the use of floss ligatures to secure the dam (Figures ). There are numerous advantages of using dental floss tie techniques compared to clamps. A dental floss tie provides complete access to the prepared tooth, while clamps may impede good access (Figure ). In addition, it can be safely used to isolate teeth with orthodontic brackets, as clamps may damage the brackets or debond them during insertion and removal (Figure ). Also, with the use of the dental floss tie techniques, indirect restorations can be tried in easily, which may be more difficult when dental clamps are in place. Moreover, if the clamp does not fit the tooth correctly, or is not seated fully, it can dislodge and be aspirated or swallowed. In some clinical scenarios, dental floss ties are more suitable around teeth than clamps, as the latter may require additional fixation using impression compound or flowable composite. Dental floss ties are also less traumatic to the gingival tissue than clamps, particularly active clamps, which can cause gingival trauma and eventually irreversible gingival recession. The author suggests that the compound tie can easily be used if a dentist wants to isolate teeth from second premolar to second premolar without the need for anchor clamps and local anesthesia, as premolar teeth may have sufficient undercut to retain the floss ligature (Figure ). When molar teeth need to be isolated, the floss tie is not recommended, as the pressure caused by the RD sheet may force the sheet to slide over the buccal knot. This is due to the fact that molars are located posteriorly and have unfavorable undercuts, leading to extra tension on the RD sheet compared to anterior teeth and premolars. Different knot types have been investigated in the literature for surgical application in terms of loop and knot security, , , but no study yet has classified different floss ligatures/ties for dental use. The compound tie technique may have an advantage over single‐ or double‐loop self‐ligating tie techniques as it can provide better isolation and more consistent gingival tissue retraction from the palatal and labial tooth surfaces simultaneously. The knots described in the compound tie prevent the RD sheet from sliding over the dental floss tie from all surfaces, as there are at least two knots: one on the palatal surface and one on the labial surface. When there is only one knot on the labial surface of the tooth, using the traditional technique for example, the RD sheet may slide over the dental floss tie from the palatal or lingual surface if stretched too far. Moreover, a compound dental floss tie has labial arms and lingual or palatal arms. This enables the dentist to pull both arms simultaneously, which in turn pulls the entire loop in the same direction, achieving good gingival retraction from all surfaces of the tooth (Figure ). On the other hand, simple ties have only labial arms; therefore, when the dentist pulls the labial arms, good retraction may be achieved from the labial surface only. In some cases, the loop might slide in the opposite direction of the pulling, especially with teeth prepared for indirect crowns, or remaining roots that require building up with posts, which may not have enough retention areas around the neck of the tooth to retain the loop in place. To achieve adequate RD isolation, two opposing dental floss knots may be required.
This will help stabilize the RD sheet around the neck of the tooth from all surfaces and provide good gingival retraction (e.g., to expose the margins of a prepared crown). In such clinical scenarios, a dentist may opt to use two simple dental floss ties, the first one having a knot on the labial surface of the tooth and the second one with a knot on the palatal surface of the tooth. However, using a compound dental floss tie has an advantage over using two simple dental floss ties. When using a compound dental floss tie, every knot will be at the same level of the loop, keeping the thickness of the dental floss tie minimal, which may enable retraction of the gingival tissues and exposure of the margins with minimal pressure on the gingiva. A new classification and technique of dental floss ties are proposed in order to simplify isolation when using an RD, provide a good seal, enhance visibility, and offer good access to prepared teeth, especially for indirect restorations. With this simple system, the difficulties associated with limited access or clamp dislodgement may be considerably reduced. Future research could further evaluate the clinical effectiveness of isolation with the proposed floss ties. The authors do not have any conflicts of interest in regard to the current study.
Picturing natural microbiomes: Matrix‐assisted laser desorption/ionization mass spectrometry imaging for unravelling the architecture of environmental microbial communities
b733e2bf-07bb-4f6c-b643-fb19b5eb5aaf
10092596
Microbiology[mh]
The author has no conflict of interest to declare.
American College of Rheumatology/
46c3101c-9410-4ab7-b1b0-e26f7d667ac2
10092655
Internal Medicine[mh]
Disease activity in rheumatoid arthritis (RA) was initially defined by a number of core set variables, agreed upon by the American College of Rheumatology (ACR) and EULAR in the 1990s ( , ). These variables comprised tender joint count (TJC) and swollen joint count (SJC), patient assessment of global disease activity (PtGA) and of pain, evaluator/physician global assessment (EGA), a measure of function such as the Health Assessment Questionnaire (HAQ), and an acute‐phase reactant such as C‐reactive protein (CRP) level. At the time of defining the core set variables, remission was more aspirational than a realistic goal ( ). Today, however, remission can be obtained in a sizable portion of patients and is seen as a major therapeutic target ( , , ). A clinical definition of remission for RA should reflect no or only minimal disease activity, and patients attaining this state should have a low risk of both structural progression and functional impairment ( ). ACR and EULAR endorsed provisional remission criteria over 10 years ago ( ). Their publication served the purpose of providing a common definition for this prime treatment target ( ). Two types of remission definitions were agreed upon by the ACR/EULAR committee after extensive data analyses and consensus‐based deliberations. The Boolean definition required that, to attain remission, each of 4 core set variables (TJC, SJC, PtGA, CRP) must have a value of ≤1. (PtGA is scored on a 0–10‐point or 0–10‐cm scale, CRP in mg/dl.) The index‐based definition used the remission cutoff point of the simplified disease activity index (SDAI) ( ). The committee also endorsed remission criteria that did not include CRP level, namely a Boolean definition that comprised SJC, TJC, and PtGA and an index definition based on the remission threshold of the clinical disease activity index (CDAI) ( ). Since their publication, arguments have been made claiming that remission definitions may, on the one hand, be too stringent, with the risk of overtreatment if used as treatment targets, or, on the other hand, too lenient, proposing addition of imaging confirmation of remission. A particular matter of debate was the requirement of achieving a PtGA score of ≤1; the stringent threshold for the PtGA has been criticized, because some patients do not achieve it despite the absence of tender and swollen joints and an elevated CRP level ( ). Moreover, the agreement between the Boolean and index definitions was only moderate, primarily due to the PtGA threshold ( ). However, the PtGA is the core set measure most sensitive to change in RA trials ( , , , ), best differentiating between patients receiving active treatment and those receiving placebo. Thus, PtGA is an important measure of disease activity. Consequently, the PtGA was included in the ACR core set, composite activity scores, and remission definitions. However, PtGA may also be influenced by other factors related to RA. For example, patients with pain from irreversible joint damage may have elevations in PtGA even if their RA is in clinical remission ( , ). To circumvent the strictness of the 1.0 rule for PtGA and to increase the agreement with SDAI‐defined remission, a higher PtGA threshold has been proposed ( , ). Furthermore, since the index‐based criteria can be used instead of Boolean criteria, both criteria should identify the same patients as having disease in remission. 
However, remission rates based on SDAI are higher than those using the Boolean criteria, because summing several components permits 1 component, such as the PtGA, to be slightly elevated if compensated by a lower score in others ( ). A study evaluating alternative Boolean definitions of remission, with PtGA thresholds ranging 1.0–2.5, found that using a threshold of 2 cm (Boolean2.0) led to a higher agreement with the index‐based definition without jeopardizing the strong association between remission and subsequent good functional and radiographic outcomes, a key criterion in the development of the provisional definition of remission ( ). The purpose of the present study is to externally validate the performance characteristics of this revision of the Boolean criteria ( ) and provide external validation of the provisionally endorsed SDAI and CDAI remission definitions. This provides the evidence base for ACR and EULAR to fully endorse the remission criteria, changing their status from the current “provisional” to a “definite” status. Patients RA patient data were retrieved from 4 clinical trials testing the efficacy of biologic disease‐modifying antirheumatic drugs (bDMARDs) against placebo or placebo with methotrexate (MTX), with an available observation period between 1 and 2 years. The GO‐AFTER trial tested golimumab as an active compound, the FUNCTION and LITHE trials tested tocilizumab, and the SERENE trial tested rituximab. GO‐AFTER evaluated patients who were insufficient responders to TNF inhibitors (TNFi), LITHE and SERENE included patients with an insufficient response to MTX, and FUNCTION included MTX‐naive patients with early RA. Results and detailed patient characteristics of the individual trials have been previously reported ( , , , ). These trials included RA patients with varying disease durations and treatment histories. In all 4 trials, the PtGA was evaluated using a 100‐mm visual analog scale (VAS). Definitions of remission and their modifications The Boolean definition includes SJC, TJC, PtGA (cm), and CRP levels (mg/dl); for a patient to meet remission criteria, all component scores must be ≤1 (in the case of a 100‐mm VAS, this translates to a score of ≤10). A version without CRP was also approved by the ACR/EULAR committee (3‐variable Boolean [3vBoolean]). The SDAI‐based definition of remission sums the scores for the components used in the Boolean definition in addition to EGA, and patients meet criteria if the score is ≤3.3. The CDAI‐based remission definition consists of the same components, excluding CRP level, and remission is fulfilled at a score of ≤2.8 ( ). Similar to a previous study ( ), we increased the threshold of the PtGA criterion by steps of 0.5 cm from 1 cm up to 2.5 cm, and labeled these as Boolean1.0, Boolean1.5, Boolean2.0, and Boolean2.5. The Boolean definition that does not include the PtGA criterion was labeled as BooleanX; in this definition, only CRP, TJC, and SJC needed a score of ≤1 to attain remission, regardless of PtGA value ( ). Statistical analysis We performed descriptive analyses and tested the revised Boolean2.0 criteria against the provisional Boolean1.0 criteria for convergent and predictive validity. Finally, we investigated the impact of the exclusion of the PtGA from the definition of remission (BooleanX). Analyses were performed on 6‐month and 12‐month data using SPSS Statistics 25 and Stata version 15. An experienced patient research partner (MW) was involved throughout the study. 
Statistical analysis

We performed descriptive analyses and tested the revised Boolean2.0 criteria against the provisional Boolean1.0 criteria for convergent and predictive validity. Finally, we investigated the impact of excluding the PtGA from the definition of remission (BooleanX). Analyses were performed on 6-month and 12-month data using SPSS Statistics 25 and Stata version 15. An experienced patient research partner (MW) was involved throughout the study. He took part in all meetings, reviewed data at different time points, and provided written as well as oral feedback. His contribution focused on a critical review of the PtGA as part of the RA definition of remission.

Descriptive analysis

We analyzed how the rates of remission at 6 and 12 months after treatment initiation in the trials were affected by the different modifications described above. For the Boolean modifications, we also studied which components prevented achievement of full remission by identifying participants who fulfilled 3 of the 4 required criteria but not all 4 of them ( ).

Convergent validity

We tested the agreement of the different Boolean criteria with the index-based remission definitions. We cross-tabulated remission fulfillment for the Boolean remission versions against the SDAI and CDAI definitions and analyzed their agreement using McNemar's test and kappa statistics. In addition, the well-established concordance between SDAI- and CDAI-defined remission was tested to confirm the interchangeability of these definitions. We examined the optimal PtGA threshold for achieving concordance with SDAI-defined remission by carrying out classification and regression tree (CART) analyses (R rpart package; https://cran.r-project.org/web/packages/rpart/index.html ): after assuming that the CRP, TJC, and SJC all fulfilled their remission thresholds (BooleanX), we asked which PtGA threshold would best predict SDAI-defined remission.
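As an illustration of these convergent-validity analyses, the sketch below cross-tabulates a Boolean variant against SDAI-defined remission, computes Cohen's kappa and McNemar's test, and uses a depth-1 classification tree in place of the R rpart threshold search. The data are synthetic, the column names are hypothetical, and the sketch is not the authors' code.

import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score
from sklearn.tree import DecisionTreeClassifier
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "tjc": rng.integers(0, 3, n), "sjc": rng.integers(0, 3, n),
    "crp": rng.random(n) * 2, "ega": rng.random(n) * 3, "ptga": rng.random(n) * 5,
})

# Remission classifications for each synthetic participant
sdai_rem = df[["tjc", "sjc", "crp", "ega", "ptga"]].sum(axis=1) <= 3.3
boolean20 = (df.tjc <= 1) & (df.sjc <= 1) & (df.crp <= 1) & (df.ptga <= 2.0)

xtab = pd.crosstab(boolean20, sdai_rem)                      # 2x2 agreement table
print(xtab)
print("% agreement:", round(100 * (boolean20 == sdai_rem).mean(), 1))
print("kappa:", round(cohen_kappa_score(boolean20, sdai_rem), 2))
print("McNemar p:", round(mcnemar(xtab.values, exact=True).pvalue, 3))

# CART-style search: among participants already fulfilling TJC, SJC and CRP <= 1
# (BooleanX), which single PtGA cutpoint best separates SDAI remission from
# non-remission? The root-node threshold of a depth-1 tree plays this role.
booleanx = (df.tjc <= 1) & (df.sjc <= 1) & (df.crp <= 1)
tree = DecisionTreeClassifier(max_depth=1).fit(df.loc[booleanx, ["ptga"]], sdai_rem[booleanx])
print("PtGA split point:", round(tree.tree_.threshold[0], 2), "cm")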
Predictive validity

As a next step, we explored the impact of fulfilling the modified Boolean- and index-based remission definitions at 6 months after treatment initiation on outcomes at 1 year. Differences in mean radiographic progression (based on the change in modified total Sharp/van der Heijde score [mTSS] between baseline and 1 year) and in the proportions of patients without progression (change in score ≤0) and with good function at 1 year (HAQ ≤0.5) were assessed. Attaining an HAQ score of ≤0.5 without radiographic progression at 1 year of treatment was defined as a good combined outcome, similar to the procedure used to develop the provisional ACR/EULAR remission definition ( ). These analyses were repeated separately for participants with early and established RA. Positive and negative likelihood ratios (LRs) were calculated separately for each remission definition to assess predictive validity for good functional and structural outcomes.

Impact of PtGA score and PtGA exclusion from the remission definition

In addition to the comparison of Boolean2.0 with Boolean1.0, we analyzed the effect of excluding the PtGA from the remission criteria (BooleanX) in the context of each of the above analyses.
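The outcome definitions used in the predictive-validity analyses above can be written down in a few lines. The column names below are hypothetical and the values are invented purely for illustration.

import pandas as pd

outcomes = pd.DataFrame({
    "haq_12m": [0.0, 0.625, 0.375],          # HAQ at 1 year
    "mtss_0m": [10.0, 4.5, 22.0],            # mTSS at baseline
    "mtss_12m": [10.0, 5.5, 22.0],           # mTSS at 1 year
})
good_function = outcomes.haq_12m <= 0.5                         # good function: HAQ <= 0.5
no_progression = (outcomes.mtss_12m - outcomes.mtss_0m) <= 0    # no progression: change in mTSS <= 0
good_combined = good_function & no_progression                  # "good combined outcome"
print(good_combined.tolist())                                   # [True, False, True]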
Patients, remission rates, and components limiting achievement of remission

Data from 2,048 clinical trial participants, 1,101 with early RA (mean ± SD disease duration 0.8 ± 0.5 years) and 947 with established RA (mean ± SD disease duration 7.1 ± 5.4 years), were included.
As expected, using Boolean2.0 yielded higher remission rates than Boolean1.0 at 6 months: 20.6% (n = 227) compared to 14.8% (n = 163) in early RA, and 6.0% (n = 57) versus 4.2% (n = 40) in established RA (Figure ). These correspond to relative increases in remission rates of 39% and 42% in early and established RA, respectively. This trend was consistent at 1 year, although remission rates were generally higher (Supplementary Figure , on the Arthritis & Rheumatology website at https://onlinelibrary.wiley.com/doi/10.1002/art.42347 ). Omitting the PtGA criterion using the BooleanX definition further increased remission rates over Boolean2.0, both in early RA (from 227 patients [20.6%] to 297 patients [27%]) and in established RA (from 57 patients [6%] to 95 patients [10%] at 6 months), corresponding to relative increases of 31% and 66%, respectively.

Within the total study population, 311 participants (15.2%) were "near misses" for Boolean remission, meaning that they fulfilled 3 of the 4 criteria. In 60% of these participants, this was due to not meeting the criterion of PtGA ≤1 cm. Using Boolean2.0, this proportion was reduced to 47% of all near misses. Consequently, among all participants, 14% were classified as having Boolean2.0-defined remission, 5% missed achieving remission only because of the PtGA criterion, and 3% missed achieving remission only because of the SJC criterion (Supplementary Figure , https://onlinelibrary.wiley.com/doi/10.1002/art.42347 ).
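A minimal sketch of the near-miss tabulation just described is given below (hypothetical column names; Boolean1.0 thresholds, with 1.0 replaced by 2.0 for the revised PtGA cutoff). It is not the authors' code.

import pandas as pd

def near_miss_component(row, ptga_cutoff=1.0):
    """Return the single failed criterion for a participant who fulfills exactly
    3 of the 4 Boolean criteria, otherwise None."""
    met = {
        "TJC": row.tjc <= 1, "SJC": row.sjc <= 1,
        "CRP": row.crp <= 1.0, "PtGA": row.ptga <= ptga_cutoff,
    }
    failed = [name for name, ok in met.items() if not ok]
    return failed[0] if len(failed) == 1 else None

df = pd.DataFrame({"tjc": [0, 0, 2], "sjc": [0, 1, 0], "crp": [0.4, 0.2, 0.5], "ptga": [1.6, 0.5, 0.8]})
print(df.apply(near_miss_component, axis=1).value_counts(dropna=True))
# The first row misses remission only because of the PtGA criterion, the third
# only because of the TJC criterion; the second row is in full Boolean remission.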
Convergent validity

Increasing the PtGA cutoff from 1.0 to 2.0 cm for participants with early RA yielded higher concordance between the Boolean- and SDAI-defined remission criteria. This led to more participants simultaneously fulfilling the SDAI and the respective Boolean remission definition (an increase from 71% to 92% of participants when using Boolean1.0 versus Boolean2.0) (Table ). Rates of concordantly classified participants with respect to remission increased from 93.4% to 95.9% at 6 months. A similar increase in concordantly classified participants was observed for the agreement between the corresponding Boolean and CDAI definitions (Supplementary Table , https://onlinelibrary.wiley.com/doi/10.1002/art.42347 ). In patients with established RA, the percentage classified as having disease in remission by both the Boolean and the SDAI or CDAI definitions likewise increased, from 74% to 94% for SDAI and from 70% to 83% for CDAI when using Boolean1.0 versus Boolean2.0, and from 78% to 96% when using 3vBoolean1.0 versus 3vBoolean2.0 to assess agreement with CDAI. The proportion of participants concordantly classified with respect to remission remained similar in established RA. Kappa analyses showed higher agreement of SDAI-defined remission with Boolean2.0-defined than with Boolean1.0-defined remission at 6 months (Figure ). The 12-month data showed similar results and are depicted in Supplementary Figure ( https://onlinelibrary.wiley.com/doi/10.1002/art.42347 ). Kappa estimates and 95% confidence intervals (95% CIs) for agreement with SDAI- and CDAI-defined remission at 6 months increased when using Boolean2.0 compared to Boolean1.0 (0.86 [95% CI 0.83-0.89] versus 0.77 [95% CI 0.74-0.82] for SDAI, and 0.81 [95% CI 0.77-0.84] versus 0.76 [95% CI 0.72-0.81] for CDAI; kappa curves for CDAI are shown in Supplementary Figure , https://onlinelibrary.wiley.com/doi/10.1002/art.42347 ). A further increase in the PtGA threshold beyond 2 cm led to a decrease in concordance. Reduced concordance was particularly evident when the PtGA was omitted (BooleanX), both in terms of percentage agreement and kappa estimates (Table and Figure ). Additionally, CART analyses confirmed the percent agreement and kappa results: among participants with SJC, TJC, and CRP values of ≤1, PtGA values of ≤2.3 cm at 6 months and ≤1.8 cm at 12 months were associated with the highest likelihood of concurrent SDAI-defined remission. The same analyses stratified by early or established RA yielded PtGA threshold values of ≤2.3 cm in early RA and ≤1.4 cm in established RA at 6 months (≤1.5 cm in early RA and ≤1.9 cm in established RA at 12 months). Generally, all agreement estimates pointed to 2.0 cm as the optimal threshold.

Predictive validity

We studied the rates of participants achieving a good functional outcome (HAQ ≤0.5) and no radiographic progression (ΔmTSS ≤0) at 1 year among participants classified by the different Boolean definitions at 6 months. Similar results were found for Boolean2.0 and the index-based definitions when predicting a good functional outcome. HAQ scores at 12 months were as follows: mean ± SD 0.24 ± 0.40 for Boolean1.0, 0.31 ± 0.45 for Boolean2.0, 0.41 ± 0.53 for BooleanX, 0.27 ± 0.42 for SDAI, and 0.26 ± 0.42 for CDAI. Fewer participants scored an HAQ of ≤0.5 when the PtGA was omitted (70% for BooleanX versus 78% for Boolean2.0) (Table ). Increasing the PtGA threshold for Boolean-based remission was associated with a linear increase in HAQ scores. While the positive LR dropped from 6.1 to 4.4 when using Boolean2.0 instead of Boolean1.0, this was similar to the positive LR for predicting a good functional outcome with SDAI- and CDAI-based remission, which ranged from 4.3 to 4.9. Table outlines the similarity of LRs for predicting a lack of radiographic progression during the first year when the different remission definitions were fulfilled at 6 months of treatment. Radiographic outcomes were similar regardless of the PtGA threshold or whether the PtGA was included in the Boolean criteria, and scores were similar between the definitions (mean ± SD ΔmTSS 0.29 ± 2.08 for Boolean1.0, 0.25 ± 1.81 for Boolean2.0, 0.21 ± 1.9 for BooleanX, 0.27 ± 1.86 for SDAI, and 0.27 ± 1.9 for CDAI). This observation is consistent with previous findings that the PtGA is not associated with radiographic progression ( , ). Using the different Boolean definitions as well as the index-based definitions led to similar proportions of participants with disease in remission who had radiographic progression (defined as ΔmTSS >0) during the first year (29.6% for Boolean1.0, 28.5% for Boolean2.0, 28.6% for BooleanX, 28.2% for SDAI, and 28.6% for CDAI). The proportion of participants achieving both good radiographic and functional outcomes was similar for all remission definitions, ranging from 57% to 60% (58.6% for Boolean1.0, 57.3% for Boolean2.0, 59.2% for SDAI, and 60.4% for CDAI), except for BooleanX (50.8%). Again, the index-based remission definitions performed similarly to the Boolean1.0 and Boolean2.0 definitions with respect to predictive ability (positive LR between 3.8 and 4.3). This pattern was also seen when analyzing data on early and established RA separately (Table ). However, good functional outcomes with BooleanX were even less frequent in established RA than in early RA (HAQ ≤0.5 in 57% versus 74%). Of note, no differences in radiographic progression between the fulfilled remission definitions were observed in patients with established RA.
Overall, more than two-thirds of patients with established RA showed radiographic progression throughout the first year.
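For transparency about how predictive-validity statistics of this kind can be derived, a minimal sketch follows. It treats 6-month remission as a "test" for the good combined 1-year outcome and computes positive and negative likelihood ratios from sensitivity and specificity; whether this exactly mirrors the published computation is an assumption, and the toy arrays below are invented.

import numpy as np

def likelihood_ratios(remission_6m, good_outcome_12m):
    """LR+ and LR- for remission at 6 months predicting a good 1-year outcome."""
    remission_6m = np.asarray(remission_6m, bool)
    good_outcome_12m = np.asarray(good_outcome_12m, bool)
    sens = remission_6m[good_outcome_12m].mean()          # P(remission | good outcome)
    spec = (~remission_6m[~good_outcome_12m]).mean()      # P(no remission | poor outcome)
    return sens / (1 - spec), (1 - sens) / spec           # LR+, LR-

lr_pos, lr_neg = likelihood_ratios([1, 1, 0, 0, 1, 0, 1, 0], [1, 1, 1, 0, 1, 0, 0, 1])
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")          # LR+ = 1.8, LR- = 0.60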
This study provides external validation of the previously proposed modification of the Boolean ACR/EULAR remission criteria, namely a threshold of 2 cm rather than 1 cm for the PtGA criterion, and of the provisionally endorsed index-based remission definitions. The study was performed using independent clinical trial data sets not included in any of the previous studies (e.g., the data sets used to generate the provisional definition of remission and the recent analyses on raising the PtGA threshold [7,12] from which the revised threshold was derived). Our study assessed different aspects of validity of the revised definition of remission. The patient population was heterogeneous in terms of disease duration and previous DMARD treatments, and our results are therefore applicable to a broad spectrum of patients with RA ( , , , ).

The remission validation outlined here builds on work done 10 years ago, when the selection of components was undertaken by a large ACR/EULAR consortium ( ). Owing to criticism of the stringently low threshold of the PtGA component within the Boolean remission definition ( , , ) and concerns that the 2 approaches to remission (Boolean-based versus index-based) were not concordant, alternative thresholds for the PtGA were explored using multiple clinical trial data sets ( ). Our analyses support the notion of a slight increase in the PtGA threshold, since it provides better agreement with the SDAI remission definition and higher rates of Boolean-defined remission without jeopardizing the prediction of good long-term functional and radiographic outcomes. Our results replicate previous findings that a Boolean definition using 2 cm as the PtGA threshold (Boolean2.0) yields better agreement with both index-based remission definitions than Boolean1.0 ( ). Furthermore, patients who attain the Boolean2.0, CDAI, and SDAI remission thresholds at 6 months have a higher likelihood of good functional and radiographic outcomes after 12 months of treatment than those attaining Boolean-based remission without the PtGA (BooleanX). We have also shown the agreement between the 3-variable Boolean definition and the CDAI definition, both of which can be applied during a clinic visit without knowledge of current acute-phase reactant levels. The PtGA threshold within the remission criteria does not influence the prediction of radiographic nonprogression, as all tested definitions yielded the same positive LRs for nonprogression of ~1.7 and the same proportions of patients not progressing (~79%).
This is consistent with findings from a recent meta-analysis including data from 11 clinical trials, which showed that people fulfilling the SJC, TJC, and CRP criteria but not the PtGA criterion have better radiographic outcomes than those not in any Boolean remission category ( ). We note, however, that successful management of RA is not defined only by the prevention of joint damage; ideally, attaining remission should also prevent residual symptoms that matter to patients, such as pain, fatigue, and anxiety.

Criticism of the PtGA has not been limited to its stringent threshold in the remission definition. The Outcome Measures in Rheumatology (OMERACT) working group focusing on "Remission in RA: Patient Perspective" questioned whether the PtGA is the best instrument to reflect the perspective of patients in the current Boolean remission definition. They explored the effect of replacing the PtGA with 3 patient-assessed domains identified by patients as most important: pain, fatigue, and independence. Their search for a better incorporation of the patient perspective has not yet resulted in a promising set of validated patient-reported outcome measures that could replace the PtGA. In their most recent working group report, they concluded that there is currently insufficient evidence to propose a change to the existing ACR/EULAR remission criteria ( ). That report also discussed the concept of a "dual-target" approach, which tries to decouple the assessment of disease activity from disease impact in defining remission ( , ). At this stage, no data are available on the effectiveness and feasibility of such a dual-target approach.

Concerns have been expressed that the ACR/EULAR remission criteria allow few patients to achieve disease remission. Within our validation work, we additionally provide data on the shift in remission frequencies and on the distribution of patients who miss Boolean-defined remission because they fulfill only 3 of the 4 criteria. By using a threshold of 2 cm rather than 1 cm in the revised Boolean definition, 40% more participants in our data sets achieved disease remission (14% instead of 10%). Importantly, when applying the Boolean2.0 definition, the SJC threshold of 1 appears to be nearly as prominent as the PtGA criterion in preventing participants from attaining full remission (3% missed remission only because of the SJC and 5% only because of the PtGA while fulfilling the other 3 criteria). The revised PtGA threshold of 2 cm had already been proposed as 1 item in the set of 7 criteria defining minimal disease activity in RA by OMERACT in 2005 ( ). Notably, the definition of remission should remain strict, ensuring beneficial long-term outcomes for patients with RA while at the same time preventing unnecessary treatment escalation. Furthermore, changes in the overall approach to treating RA before patients enter clinical trials, or trends over time, appear to have led to much higher provisional ACR/EULAR remission rates in more recent clinical trials than in earlier ones, with recent rates reaching ~30% in early disease, 20% in patients with an insufficient response to MTX, and 15-20% in patients with an insufficient response to bDMARDs ( , , , , ). A preferable approach to enable more patients to achieve remission is to foster a collaborative relationship between patients and clinicians, to initiate treatment early, and to use a treat-to-target approach ( ), rather than to omit potentially problematic items such as the PtGA ( , ).
Studies have shown that a treat-to-target approach is not yet fully implemented in clinical practice; in one-third of instances in which treatment was not escalated, this was influenced by factors unrelated to RA, and in another third it reflected the patient's preference to continue the current treatment ( , ). In any case, all measurements and their interpretation need to be complemented by discussion between the patient and the rheumatology clinician, in order to reflect on and decide the appropriate next steps in a shared decision ( , ).

Remission has become a key target for the management of patients with RA ( ). The ACR/EULAR 2011 initiative on remission criteria was undertaken to harmonize the definition of the term "remission" and thus to facilitate the fair assessment and comparison of remission rates in clinical trials and in clinical practice (e.g., across different health care settings or providers). It will be helpful to further study the performance of the revised criteria in trials of other antirheumatic drugs, such as JAK inhibitors, and in other countries and ethnic groups, since RA severity and the interpretation of the PtGA may vary across ethnicities. In summary, we validated the performance of the Boolean2.0 definition and of the provisionally endorsed index-based remission definitions. With the validation of the 2-cm threshold for the PtGA, we propose that these revised ACR/EULAR remission criteria be adopted both for future clinical trials and as a target in clinical practice.

All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be published. Dr. Studenic had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study conception and design: Studenic, Aletaha, Lacaille, Smolen, Felson. Acquisition of data: Studenic, Aletaha. Analysis and interpretation of data: Studenic, Aletaha, de Wit, Stamm, Alasti, Lacaille.

Supplementary Figure 1: Rates of remission by the modified Boolean classifications, using a PGA cutoff of 1.0 ("Boolean"), 1.5, 2.0, or 2.5, or omitting the PGA completely (BooleanX), as well as for the SDAI, CDAI, and DAS28 definitions. Rates at 12 months as a percentage of the total, shown separately for early RA (left) and established RA (right).

Supplementary Figure 2: Overview of patients with near misses of Boolean remission at 6 months (upper part) and 12 months (lower part). Left: number (percentages in pie charts) of patients not fulfilling 1 of the 4 Boolean criteria. Middle: number (percentages in pie charts) of patients not fulfilling 1 of the 4 Boolean2.0 criteria (using the 2-cm PGA cutoff). Right: rates of people in remission or near remission based on either the Boolean or the Boolean2.0 definition.

Supplementary Table 1: Agreement rates (% concordantly classified) between the different modified Boolean remission definitions and the index-based remission definitions, provided for 6 and 12 months. Upper table: agreement with SDAI remission; middle table: agreement with CDAI remission; lower table: agreement of the 3vBoolean definition (including the SJC28, TJC28, and PGA as criteria) with CDAI remission.
Supplementary Figure 3: Kappa with confidence intervals between the modified Boolean remission categories and SDAI remission, shown separately for early RA (red line), established RA (green line), and all RA (blue line) at 12 months, with an additional table providing kappa estimates and confidence intervals in brackets.

Supplementary Figure 4: Kappa with confidence intervals between the modified Boolean remission categories and CDAI remission, shown separately for early RA (red line), established RA (green line), and all RA (blue line) at 6 and 12 months.