added | created | id | source | text | version
---|---|---|---|---|---|
2022-11-10T17:23:09.649Z | 2022-11-01T00:00:00.000Z | 253429949 | s2orc/train | Possible Utilization of Distillery Waste in the Carbonization Process
This paper characterizes the carbonization process in terms of the utilization of distillery waste in a laboratory-scale reactor. Due to the increase in market prices of wood and environmental protection laws, biomass waste, including distillery waste, is a potential source for biochar production. An experimental investigation of the carbonization process was carried out for different mixtures of distillery waste and oak sawdust. The obtained results showed that, according to the European Standard, biochar from distillery waste could be used for the production of charcoal briquettes for barbecue applications. In addition, biochar from the carbonization of samples with 66, 50, and 33% distillery waste meets the standards defined by the International Biochar Initiative for heavy metal (HM) content. The analysis of the dynamics of the heating rate showed that adding wood to distillery waste significantly shortens the carbonization process, but reduces the amount of bio-oil produced and its calorific value.
Introduction
Pyrolysis, in contrast to combustion, is an alternative method for biomass and waste disposal, which leads to the conversion of solid waste into solid, liquid, and gas fractions [1,2].
Carbonization, defined as a pyrolysis process focused on the solid fraction, is also widely described in the literature [3]. Depending on the application of the produced biochar, carbonization is carried out with specific parameters, including heating rate and final temperature.
Nevertheless, due to the increase in wood prices and environmental protection requirements, an important task is to find possibilities for utilizing different wastes in the carbonization process. Wang et al. [4] and Xu et al. [5] present the characterization of biochar from walnut shells. The influence of the pyrolysis parameters on biochar characteristics has been presented by Almeida et al. [6], who analyzed sugarcane biomass, including bagasse, straw, and treated biomass. In turn, Sahoo et al. [7] showed that biochar from pigeon pea stalk and bamboo meets the standards of biochar production. According to the literature, spent coffee grounds are also an interesting waste in terms of the carbonization process. Andre et al. [8] analyzed biochar from spent coffee grounds for the construction of an energy storage device. Tangmankongworakoon [9] indicated that biochar from coffee residue might be used as a suitable material for soil amendment.
This work presents the characterization of distillery waste utilization in the carbonization process for barbecue charcoal applications, with reference to the European Standard (EN 1860-2) and the International Biochar Initiative Standard.
According to the literature, from 8 to 20 L of stillage on average is produced for every liter of alcohol produced [10]. Moreover, the overall production of alcohol has reached over 200 billion liters [11]. For this reason, waste management is one of the major problems due to soil pollution and damage to soil and water environments. Mahaly et al. [12] showed the possibility of distillery sludge waste utilization by vermicomposting. Naveen and Premalatha [13] present the characteristics of post-methanated distillery effluent using TGA analysis. Dhote et al. [11] analyzed a mixture of distillery waste and coal as a low-cost fuel. The combustion characteristics of distillery waste and coal in a tube furnace, with emission analysis, have also been presented by Manwatkar et al. [14]. Mohana et al. [15] showed the possibility of anaerobic and aerobic utilization of distillery spent wash. However, there is a lack of papers that characterize the pyrolysis of distillery waste. Dhote et al. [16] characterized the pyrolysis and gasification of distillery sludge and coal in a 3:2 ratio. This paper presents the carbonization of distillery corn waste pellets with distillery waste shares of 100%, 66%, 50%, and 33% by mass. As a well-known material that improves the carbonization process, oak sawdust was used as an additive. The final temperature was set at 450 °C due to cooperation with a barbecue charcoal company. Because of the technical and economic aspects of the carbonization process, in industrial practice temperatures of 420-450 °C are applied for about 2000 kg of wood in a crucible. The aspect of carbonization at similar temperatures has also been presented by Lima et al. [17], who indicate that 400-500 °C is the optimal range for the carbonization of Amazonian wood for charcoal production.
The experimental results include the physical and chemical properties of the char, as well as the characteristics of the heating rate of the fixed bed. A valuable part of the work is the characterization of the carbonization products. This aspect, according to the European Biochar Certificate [18], is important because part of the energy (at least 70%) released during the combustion of pyrolysis gases must be used as a heating source or for drying biomass. In addition, the combustion characteristics of the produced char were determined to compare the activity and stability of the chars from different blends.
Materials
The characteristics of the distillery waste (corn base distillers) and the wood sample (oak sawdust) are presented in Table 1. The elemental analysis was carried out using a CHNS/O Flash 2000 Analyzer (Thermo Fisher Scientific, USA) and a wavelength-dispersive X-ray fluorescence spectrometer (Bruker Scientific Instruments, Germany). The heating value was determined using a calorimeter (EkotechLAB, Poland). The volatiles, fixed carbon, and ash content were analyzed according to the EN 1860-2 Standard.
The wood sample and dried distillery waste were shredded and then mixed in the assumed mass proportions. For the pelletization process, an EkoPal 3 kW pellet press, equipped with a flat rotary die and compacting rollers, was used. The pellets consist of cylinder-shaped particles with a diameter of 6 mm and a length of up to 25 mm. The variation in the size of the pellet particles is related to imperfections of the pellet mill and the lack of a binder.
Carbonization System and Process
The carbonization of distillery waste was carried out in a laboratory-scale batch reactor (Figure 2). The laboratory reactor is equipped with two thermocouples. The first thermocouple (T1) was placed at a distance of 20 mm from the chamber wall and 50 mm from the bottom. The second thermocouple (T2) was located in the core of the reactor. The reactor was also equipped with an electrical heater with an energy meter to characterize the energy balance of the process.

In each of the experiments, samples with a mass of 2000 g (DW100%), 1800 g (DW66%), 1500 g (DW50%), and 1300 g (DW33%) were loaded from the top of the reactor. First, the temperature in the heating chamber was set to 450 °C, and then the batch reactor was placed inside the heating chamber. The final temperature for carbonization was set at 450 °C in the core of the fixed bed. To maintain the carbonization process at 450 °C, the heater was turned on automatically by a PID controller when the temperature in the chamber decreased below 450 °C due to heat consumption. When the temperature in the chamber reached 450 °C again, the heater was turned off.
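As a simple illustration of the temperature control described above, the sketch below reproduces the on/off heater logic in Python. It is an assumption-based illustration, not the authors' control software.

```python
# Minimal sketch (illustrative assumption, not the authors' controller code): the heater
# switching behaviour described above, where the heater is re-enabled whenever the chamber
# temperature falls below the 450 degC setpoint and switched off once it is reached again.

SETPOINT_C = 450.0

def heater_should_be_on(chamber_temp_c: float) -> bool:
    """Turn the heater on when the chamber temperature drops below the setpoint."""
    return chamber_temp_c < SETPOINT_C

# Example: log the heater state for a few hypothetical chamber temperature readings.
for temp in (452.0, 448.5, 450.0):
    print(temp, "->", "heater ON" if heater_should_be_on(temp) else "heater OFF")
```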
Liquid and Gaseous Products Collection
The experimental investigation included a tar condensation system and a gas sampling system to determine the mass balance and characterize the liquid products. For the analysis of the gas composition, the gas was first directed to the tar sampling system. The sampling of the tar from the carbonization process was carried out using a steel cylinder with isopropanol (capacity 1 L) kept at 0 °C and three isopropanol washers kept at −20 °C. After each experiment, the contents of the cylinder and all washers were combined. Based on Karl Fischer titration (the water content of the liquid phase) and the initial weight of the isopropanol, the amount of tars was determined. The calorific value of the tars was determined using a calorimeter (EkotechLAB, Poland) after drying the sample at a temperature of 105 °C for 60 min.
The obtained gases were collected in Tedlar bags (1 L) and analyzed every 10 L, measured with a gas meter, to determine the average gas composition. The analysis of gas composition was carried out using a gas chromatograph with a thermal conductivity detector (SRI Instruments 310). The calorific value of the gas was calculated in accordance with Wang et al. [20] as HHV_gas = Σ_i (HHV_i × Y_i), where HHV_i [MJ/Nm³] is the heating value of gas component i and Y_i is its volume fraction.
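To illustrate the weighted-sum calculation above, the following Python sketch computes the heating value of a gas mixture from component volume fractions. The component heating values and the example composition are approximate, illustrative figures rather than values from this study.

```python
# Minimal sketch (illustrative, not from the paper): weighted-sum heating value of a
# gas mixture. Component heating values are approximate literature figures, and the
# volume fractions below are placeholders, not measured values from this study.

HHV_COMPONENTS = {"H2": 12.7, "CO": 12.6, "CH4": 39.8}  # approximate HHV, MJ/Nm^3

def mixture_hhv(volume_fractions: dict) -> float:
    """Return HHV_gas = sum_i HHV_i * Y_i over the combustible, detectable components."""
    return sum(HHV_COMPONENTS[name] * y
               for name, y in volume_fractions.items()
               if name in HHV_COMPONENTS)

# Example: a hypothetical dry-gas composition (CO2 contributes no heating value).
example = {"CO2": 0.45, "CO": 0.28, "CH4": 0.10, "H2": 0.08}
print(f"HHV of mixture: {mixture_hhv(example):.2f} MJ/Nm^3")
```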
Thermogravimetric Analysis
The characteristics of the thermal degradation and combustion behavior of the char were analyzed with a TA Instruments SDT Q600 thermogravimetric analyzer. The sample was heated from 30 to 800 °C at a rate of 10 °C/min. The mass of each sample was 6 mg, and the flow rate of air was set at 50 mL/min. According to the literature, the temperature at which the rate of combustion reached 1 wt.%/min was defined as the ignition temperature (Ti), whereas the burnout temperature (Tb) was defined as the temperature at which the combustion rate decreased to 1 wt.%/min [21,22]. Moreover, to analyze the combustion process, the S index was defined according to the following equation [22,23]:

S = [(dw/dt)max × (dw/dt)mean] / (Ti² × Tb),

where (dw/dt)max is the maximum and (dw/dt)mean is the average combustion rate. Moreover, the characteristic temperatures and times can be correlated to define the ignition (Di) and burnout (Df) indexes [24]:

Di = (dw/dt)max / (ti × tp),    Df = (dw/dt)max / (Δt1/2 × tp × tb),

where tp is the time corresponding to (dw/dt)max [min], ti is the ignition time, tb is the burnout time, and Δt1/2 is the time interval over which dw/dt exceeds half of (dw/dt)max.
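The indexes defined above can be computed directly from a TGA mass-loss curve. The following Python sketch is an illustrative implementation of these definitions, not code from the paper; the choice of averaging window for (dw/dt)mean (here, the ignition-to-burnout interval) is an assumption.

```python
# Minimal sketch (illustrative only): ignition/burnout temperatures and the S and Di
# combustion indexes from a TGA run in air. Inputs are equally ordered arrays.
import numpy as np

def combustion_indexes(time_min, temp_C, mass_pct):
    """time_min [min], temp_C [deg C], mass_pct [% of initial mass]."""
    rate = -np.gradient(mass_pct, time_min)          # combustion rate, wt.%/min (positive)
    i_ign = np.argmax(rate >= 1.0)                   # first point where rate reaches 1 wt.%/min
    i_max = np.argmax(rate)                          # peak combustion rate
    i_burn = i_max + np.argmax(rate[i_max:] <= 1.0)  # first point after the peak below 1 wt.%/min
    Ti, Tb = temp_C[i_ign], temp_C[i_burn]
    ti, tp = time_min[i_ign], time_min[i_max]
    r_max = rate[i_max]
    r_mean = rate[i_ign:i_burn + 1].mean()           # average rate over the combustion interval
    S = r_max * r_mean / (Ti**2 * Tb)                # %^2 / (min^2 * degC^3)
    Di = r_max / (ti * tp)                           # %/min^3
    return Ti, Tb, S, Di
```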
Heating Rate of Fixed Bed during Carbonization
Figure 3 presents the characteristics of the heating rate of the pellets with different contents of distillery waste. The experimental results showed that pellets with a content of 100% distillery waste reached 450 °C in 540 min during carbonization. This sample was also characterized by a long plateau associated with the evaporation of moisture. A decrease in the distillery waste content in the pellets to 33% caused an increase in the heating rate and shortened the process to 75 min. A decrease in the content of distillery waste also decreased the energy input (recorded with an energy meter) needed to maintain a constant temperature in the chamber, from 5.3 kWh for DW100% to 1.3 kWh for DW33%.

This aspect is also caused by different bulk densities (Figure 4). For the pellets with a content of 100% distillery waste, the bulk density was 480 kg/m³, and the heating rate of the bed during carbonization reached 1.6 °C/min. A decrease in the distillery waste content in the pellets to 33% caused a decrease in bulk density to 300 kg/m³ and an increase in the heating rate to 5.7 °C/min. The change in the bulk density of the pellets due to the share of distillery waste also influenced the bulk density of the obtained biochar, which was 480 kg/m³, 464 kg/m³, 325 kg/m³, and 300 kg/m³ for DW100%, DW66%, DW50%, and DW33%, respectively.
The Effects of Pellets Composition on Char Characteristics
The experimental data showed that the char yield increased slightly with the decrease in distillery waste content in the pellets (Table 2). The carbonization of the different samples also indicated that the energy density of the produced biochar reached similar values for all of the samples, about 30 MJ/kg. This may be related to a similar content of fixed carbon. The experimental investigation showed that the change in the proportion of distillery waste in the mixture did not affect the fixed carbon content. According to the European Standard [23], biochar for barbecue applications requires a fixed carbon content of over 75% for barbecue charcoal and over 65% for charcoal briquettes, with ash contents below 8% and 18%, respectively. This indicates that biochar from distillery waste can be used for the production of charcoal briquettes, while biochar from a mixture of distillery waste and wood meets the standards for the production of both charcoal and charcoal briquettes for barbecue applications. Table 2 also presents the elemental analysis of the obtained biochar. The results show that the hydrogen and oxygen contents (for DW66% and DW33%) significantly decreased, which is caused by the breaking of the weaker bonds within the structure of the char [25,26]. The decrease in oxygen and hydrogen content is also associated with the emission of gaseous and liquid products during the carbonization process [27,28]. According to the literature, the decrease in the hydrogen content is mainly caused by the formation of volatile products from cellulose, mainly anhydrosugars and furanic compounds [29,30], and by the thermal decomposition of hemicellulose, which leads to the formation of anhydrosugars, ketones, and aldehydes [31]. A decrease in hydrogen content is also associated with lignin decomposition and the emission of phenolic derivatives.
The change in the oxygen content is mainly caused by the emission of carbon monoxide and carbon dioxide [27]. The elemental analysis also indicates that the H/C atomic ratios reached 0.05-0.07, which meets the requirement specified by the International Biochar Initiative for a maximum H/C ratio of 0.7 [18]. It should be noted that the low values of the H/C and O/C atomic ratios make the obtained biochar more aromatic and carbonaceous and lead to a less hydrophilic char surface [32,33].

Characteristics of Pyrolysis Gases

Table 3 shows the characteristics of the carbonization products. This aspect is important due to the possibility of using the gases for energy purposes. The conducted analyses showed that a decrease in the distillery waste content in the pellets, with an increase in the wood content, from DW100% to DW33% leads to an increase in the calorific value of the pyrolysis gas mixture from 3.34 MJ/kg for DW100% to 5.45 MJ/kg for DW33%. The analysis of the average gas composition showed that CO2 is the main component, which is caused mainly by the cracking of carboxyl and carbonyl groups [34]. The carbon monoxide content is also related to the cracking of carboxyl and carbonyl groups [35] and decreases from 31% for DW33% to 24% for DW100%. The interpretation of the gas release and composition is difficult for the pellets with 66% and 50% distillery waste due to the wide temperature range of the thermal decomposition of the distillery waste. The obtained results of the gas composition, due to the limitations of the gas chromatograph, were limited to the main components at the level of 91-95%. The remaining content may be CxHy or other non-detectable compounds.
The characteristics of the obtained bio-oils, considering only the tar content, indicate that the heating value decreased from 35 MJ/kg for distillery waste pellets (DW100%) to 30 MJ/kg for pellets with a 33% proportion of distillery waste. In comparison, Fassinou et al. [36] report that the higher heating value (HHV) of vegetable oils (corn, soya, crambe, sunflower, coconut) reaches 38-39 MJ/kg, whereas Santos et al. [35] showed that bio-oils from the pyrolysis of sugarcane bagasse and oat hulls at 450 °C reached 31 MJ/kg and 33 MJ/kg, respectively. Table 3 also presents the overall energy balance of the liquid and gaseous carbonization products and shows that the calorific value of the produced liquid and gaseous fraction decreased from 17.51 MJ/kg for DW100% to 13.85 MJ/kg for DW33%. Despite the increase in the calorific value of the gases caused by the increase in the wood content in the mixture, the amount of produced bio-oil decreased, with a simultaneous decrease in its calorific value. Wang et al. [30] indicate that the pyrolysis of proteins isolated from microalgae at 470 °C leads to the production of 38% oils, 25% char, and 37% gas products. Moreover, the obtained oils consisted mainly of aliphatic hydrocarbons, amines, amides, N-heterocyclic compounds, esters, ketones, aldehydes, and nitriles, whereas the main compounds from polysaccharide pyrolysis are furans, N-containing compounds, alcohols, carboxylic acids and esters, and sugars.
Influence of Pellets Composition on Elemental Analysis of Char
The obtained results showed that, in each sample, the biochar produced contains mainly potassium, calcium, and phosphorus (Table 4). Moreover, a decrease in the distillery waste content in the pellets from 100% (DW100%) to 33% caused an increase in the Fe content from 3.8 to 17.8%. The analysis of heavy metals indicates that the samples obtained from the carbonization of pellets with 66, 50, and 33% distillery waste meet the standards defined by the International Biochar Initiative for HM content in biochar [18].

An increase in the distillery waste content in the mixture from 33% to 100% led to an increase in the ignition temperature (Ti) from 305 to 343 °C. Moreover, an increase in distillery corn waste content caused a decrease in the average burning rate from 10.5%/min for DW33% to 4.2%/min for DW100% (Table 5). Wang et al. [37] indicate that this aspect is mainly caused by the combustion of fixed carbon and volatiles; a decrease in the fixed carbon and volatile content may lead to a higher average combustion rate. The experimental investigation and TGA analysis showed that the biochars from the distillery waste are characterized by a very wide combustion temperature range in relation to biochar from corn cobs, cotton stalk, bamboo sawdust, or palm fiber [23,37,38]. Table 5 presents the characteristic parameters and indexes that characterize the combustion process. The obtained results of the characteristic temperatures and combustion rates showed that the calculated S index, which defines the combustion reactivity, decreased from 25.6%²/(min²·°C³) for DW33% to 4.5%²/(min²·°C³) for DW100%. This means that an increase in distillery waste content leads to a decrease in the char combustion activity. The increase in distillery waste content from DW33% to DW100% also caused a decrease in the ignition index (Di) from 0.26 to 0.02%/min³. This means that fewer volatiles are degassed from the fuel.
Taking into account both indexes, adding wood to the mixture with distillery waste makes combustion more active, efficient, and stable [24,39].
The obtained results of the combustion process indicate that there is not much difference between the addition of wood at the level of 50% (DW50%) or 67% (DW33%). Adding only about 34% of wood (DW66%) affects the D index but does not significantly affect the S index or the combustion activity.
Summary
The results presented in this work indicate the possibility of utilizing distillery waste using the carbonization process. The experimental investigations were carried out for different mixtures of distillery waste and wood. The results of the fixed carbon and ash content indicate that, according to the European Standard, biochar from distillery waste can be used for the production of charcoal briquettes, while biochar from a mixture of distillery waste and wood may be used for the production of charcoal and charcoal briquettes for barbecue applications. The obtained results showed that the thermal decomposition of distillery waste occurs, compared with wood, over a wide temperature range, with a slightly lower average intensity of mass loss. The analysis of the dynamics of the heating rate showed that adding wood to the distillery waste significantly shortens the carbonization process but reduces the amount of bio-oil produced and its calorific value. Taking into account the combustion indexes, adding wood to the mixture with distillery waste makes combustion more active, efficient, and stable. | v2
2019-04-28T13:09:17.479Z | 2012-11-20T00:00:00.000Z | 137368409 | s2orc/train | INVESTIGATION OF THE PHYSICAL-MECHANICAL PROPERTIES OF TIMBER USING ULTRASOUND EXAMINATION
This research uses a non-destructive method – ultrasound – to examine timber, combining the results of measurement with the properties of strength and stiffness. The purpose of this work is to explore the possibilities of grading wood structures in situ using ultrasound measurements together with the moisture content and density of the timber. The timber used in these experiments was taken from existing buildings of different ages. The potential of replacing direct measurements with indirect measurements by ultrasound was also investigated. The physical-mechanical properties of wood were determined in laboratory conditions according to standard practices, and the non-destructive measurements were made with a commercial test device using 54 kHz, 50 mm diameter compressional-wave ultrasound transducers. Direct measurements were performed in the longitudinal and radial material directions. Indirect measurements were performed with transducers positioned on the same lateral surface of the sample. Weak correlations were found for the individual measurements: longitudinal measurements characterise bending strength with R2 = 0.18 and modulus of elasticity with R2 = 0.37. In multiple regression analysis, stronger correlations were found; prediction equations for bending strength and modulus of elasticity were obtained with R2 = 0.40 and R2 = 0.81, respectively.
Introduction
When renovating buildings it is necessary to assess the physical condition and strength of timber structures. The most accurate way of determining the mechanical properties is by destructive methods, the most relevant results being usually obtained from compression, tension and bending tests. However, there are situations where the wood structures need to be evaluated in situ, where the timber cannot be removed or sampled destructively and where visual assessment is constrained, since the timber structural member may have one or several sides covered and/or its position or geometry does not enable an inspection. At this point non-destructive methods are a possible alternative. There are several non-destructive methods that can be used in the assessment and determination of properties of timber, such as mechanical loading, electrical resistance measurement and acoustic, thermal and electromagnetic wave propagation (Niemz 2009).
Sound and ultrasound have been used rather widely. In general terms, sound consists of an elastic wave that propagates through a material, and its behaviour differs between materials and between different conditions within the same type of material. Consequently, a correlation can be established between the speed of sound and certain properties of a material, such as stiffness (Lempriere 2002; Kettunen 2006).

In the evaluation of timber properties, ultrasound is a widely used method in sawmills, where the longitudinal measuring method has been used to sort lumber into various strength classes. Obstacles occur if there is a need to assess wood structures in situ, because in most cases both ends of a beam or joist, for example, are covered and the measurement cannot be made. This is because measurement can then only be carried out by placing the transducers parallel to each other on one side, or across the member facing each other. The latter way of measuring is not always possible, because of the inaccessibility of both sides, and it also provides only the local properties of the wood. The main advantage of using ultrasound is that the piece being measured will not be damaged in any way and it can continue to be used, as no deformations occur. Tests can be made on the same member repeatedly without any substantial variation in results (Bucur 2006).
Ultrasound wave propagation is directly related to the elastic properties of the material through which it propagates. If wood is damaged, its stiffness is likely to decrease. Sound wave speed is a function of the square root of material stiffness. Lower speeds or longer propagation times are generally indicative of poorer conditions in a sample. It is assumed that ultrasound pulse velocity can be used as an index of wood quality, as it can detect defects like cracks, knots, decay, and deviations in grain orientation. In spite of the non-homogeneous nature and anisotropy of wood, it is possible to correlate the efficiency of the propagation of a sound wave with physical and mechanical properties (Drdácký, Kloiber 2006).
The density and modulus of elasticity also affect the acoustic properties of wood. Bucur and Chivers (1991) observed that in conditions of increased density the speed of sound decreased. Thus, the propagation of sound varies in timber from different species of tree. In other investigations the results have shown that velocity increases for large density values (Haines et al. 1996). By contrast, Mishiro (1996) found that velocity was not affected by density. As wood is an anisotropic material, the speed of sound also varies in different directions due to cell structure (Kettunen 2006). Transverse waves are scattered by each cell wall, while the longitudinal orientation of wood cells and their slenderness ratio facilitate ultrasound propagation. The greater number of impacts of waves on wood cells in the transverse direction slows them down (Kotlinova et al. 2008). Moreover, ultrasound propagation paths in directions differing from the main orthotropy axes (longitudinal L, radial R and tangential T) are significantly shifted from a straight line between the transducers, the actual trajectory being dependent on the local ring and grain angles (Sanabria et al. 2011). Ultrasound velocity is influenced by width of annual rings only in the radial direction, due to the macroscopic structure of wood, the relative proportions of early and latewood and the cell orientation in the growth ring (Drdácký, Kloiber 2006). Ultrasound velocity decreases with increasing moisture content (Drdácký, Kloiber 2006). The velocity also decreases dramatically with moisture content up to the fibre saturation point, and thereafter the variation is very small (Bucur 2006). It is also notable that moisture content above the fibre saturation point does not have any significant effect on ultrasound velocity when measured in the longitudinal direction (Sakai et al. 1990;Oliveira et al. 2005).
Overall, the direct longitudinal measurement is fairly reliable in the assessment of wood strength. However, according to Machado et al. (2009), it should be noted that an indirect measurement can provide quite good results. Furthermore, the results seem to indicate that as the distance (10 to 40 cm) between the transducers becomes greater, the influence of deeper wood layers on the velocity of wave propagation increases. Most of the signal energy is transmitted to the back wall of the test piece, while only a small part of the energy is transmitted along the edge surface. For wood measurement, the most favourable frequency range is between 20 kHz and 500 kHz because of the high attenuation of ultrasonic waves in wood at higher frequencies (ASTM 494-89 1989; Tanasoiu et al. 2002). The length of the piece has more influence on the velocity than the cross-sectional area (Arriaga et al. 2006). The longitudinal measurement is strongly and continuously affected by the width (b) over thickness (h) ratio of the specimen. The largest velocities are obtained when the ratio lies between one and two, the specimen is a rod, and b and h are greater than the wavelength (Bucur 2006).

The present study aimed to test the relationships between ultrasound velocity, modulus of elasticity, and bending strength while also taking into account variations in moisture content and density. Ultrasound velocity, wood density, and moisture content were used as independent variables, and bending strength and modulus of elasticity were the dependent variables. The aim was to see if the latter characteristics could be predicted from the former.
Methods
The experiments were carried out in a laboratory at 21-23 °C and 20-40% relative humidity. 25 logs and beams of Picea abies taken from buildings with various uses and of different ages were used in the research. 92 pieces with dimensions of 50×50×1005-1100 mm 3 were sawn from the collected material. The chosen dimensions were based on the standard EN 408:2005 (2005), which specifies that the length of the piece for the bending test should be at least 19 times the height of the cross-section.
First of all, four different types of measurement were conducted with a TICO Ultrasound Instrument fitted with 50 mm, 54 kHz compressional wave transducers (Fig. 1):
1) Five times with a spacing (transmitter-receiver distance) of 200 mm on the tangential surface at early wood positions using the indirect method (variant A);
2) Three times with a spacing of 600 mm on the tangential surface at early wood positions using the indirect method (variant B);
3) Once between the end surfaces in the longitudinal direction (variant C);
4) Five times in the radial direction, at randomly selected positions, using the direct method (variant D).
The device works by sending an ultrasound wave into the sample from the transmitter probe, which is picked up by a receiver probe; the time of flight t is recorded in microseconds, and the device also calculates the velocity v = d/t when the distance d is entered. The distances were chosen according to the suggestions of the manual of the testing device for the maximum and minimal parameters (Tico User Manual 2008).
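As a small illustration of how the device readings translate into velocities, the following Python sketch applies v = d/t to a set of hypothetical transit-time readings and averages the repeats of one variant; the numbers are placeholders, not measured values from the study.

```python
# Minimal sketch (illustrative, not the authors' code): converting time-of-flight readings
# to velocities and averaging the repeated readings of one measurement variant.

def velocity_m_per_s(distance_mm: float, time_us: float) -> float:
    """v = d / t, with d in mm and t in microseconds, giving m/s."""
    return (distance_mm / 1000.0) / (time_us * 1e-6)

variant_a_times_us = [120.5, 118.9, 123.1, 119.7, 121.4]   # five repeats, 200 mm spacing (made up)
velocities = [velocity_m_per_s(200.0, t) for t in variant_a_times_us]
print(f"variant A mean velocity: {sum(velocities) / len(velocities):.0f} m/s")
```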
Before every test (Fig. 2), the measured length of the sample piece was entered into the device and the corresponding velocity reading in m/s was recorded. For better contact between the wood surface and the probes, glycerine (propane-1,2,3-triol) was applied thinly onto the probes. This ensured effective transfer of the ultrasound wave between the surface of the sample and the probes. The transducers were pressed against the surfaces of the members with equal force.
Fig. 2. Conducting tests with a TICO Ultrasound Instrument device and 54 kHz transducers
The bending strength of the test pieces was determined using an Instron 3369 device, based on standard EN 408:2005 (2005). According to this standard the sample was laid across two supports set 1000 mm apart and thereafter it was loaded with a static force at the centre until it broke. Here it should be noted that the force was applied in the radial direction with younger annual rings facing upwards. With this test the modulus of elasticity as well as the bending strength was measured.
In order to define the moisture content of the samples, test pieces with dimensions of 50×50×30 mm³ were sawn from them. These pieces were weighed on an electronic scale with a readability of ±0.01 g and afterwards placed into a drying oven at a temperature of 103 ±2 °C. The pieces were dried until the difference between weighings at two-hourly intervals was less than 0.1% (EN 408:2005). Moisture content was determined according to ISO 3130:1975.
For determining the density, pieces with dimensions of 50×50×75 mm³ were sawn from the specimens. These were weighed on the same electronic scale, and their thickness, width, and length were measured with a digital calliper (readability of ±0.01 mm). The dimensions were multiplied to obtain the volume (EN 408:2005), and the density of the wood was calculated according to ISO 3131:1975.
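For reference, the two determinations described above follow the usual oven-dry definitions (stated here for clarity; the notation is ours and is not reproduced from the standards):

MC [%] = (m_wet − m_dry) / m_dry × 100,    ρ [kg/m³] = m / (l × b × h),

where m_wet and m_dry are the masses of the moisture specimen before and after oven drying, and m, l, b, and h are the mass and measured dimensions of the density specimen.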
All data processing was conducted by MS Excel, STATISTICA 10 and R software.
Results and discussion
The variability of density, bending strength, and modulus of elasticity was regular and well spread out; the variation was not concentrated at either extreme and was not closely clustered. Consequently, the experimental data were adequate for carrying out the regression analysis (Table 1). It is necessary to ascertain the relationships between the results of the direct and indirect ultrasound measurement methods. It can be concluded that the greatest velocities were given by method C (between the end surfaces in the longitudinal direction) and the lowest by method D (radial direction in the direct method) (Fig. 3). The general variation and fluctuation of ultrasound velocities using methods C and D are lower compared with the results of the indirect measurements (methods A and B). Machado et al. (2009) found that the relative difference between direct and indirect measurements of defect-free specimens was ±10%, which shows a good relationship between the results of the different ultrasound measuring methods. Thus, the use of the indirect method on site, where only one side of the wood structure is accessible, is possible.
The results of methods A and B fluctuate substantially more than those of C and D. The smallest fluctuation of the results was obtained with method D. Table 2 shows the correlation matrix between the individual measured variables. The relationship between maximum bending strength and modulus of elasticity is fairly strong (r = 0.72), which shows the latter to be an important factor in the assessment of wood structures. There is also a strong relationship between density and modulus of elasticity (r = 0.8), which is about 0.3 higher than between density and maximum bending strength. The fact that stiffness (modulus of elasticity) is a more global property than bending strength, as confirmed by Hahnijärvi et al. (2005), is also shown by this research. Thus density can also be a good indicator of wood strength.
Moving on to the analysis of ultrasound velocities, the relationships between direct (C and D) and indirect measurements (A and B) are weak and therefore do not support the substitution of these methods with one another. In summary the results confirm that the shorter the measurement distance, the more localized is the evaluation of the sample.
In analysing the relationships between ultrasound measurements and other characteristics, the fitted relationship is linear, and the results of longitudinal measurements characterise the bending strength with r = 0.42 and the modulus of elasticity with r = 0.61 (Figs. 4 and 5). According to the results of the indirect measurements (A and B), only method B gives a moderate linear relationship with the modulus of elasticity (Fig. 6), the relationships with the other characteristics being weak, except with each other, which is moderate (r = 0.63). In investigating the relationships between strength and ultrasound velocities, Arriaga et al. (2006) found that a weak correlation between them occurred because local defects have more influence on strength than the general quality of the specimens. The results from this research may be explained in the same way. Defects in members have an important role in strength, but in situ it is difficult to identify them. This study examined specimens without classifying them.
A negative correlation occurs between moisture content and ultrasound velocity (Bucur 2006; Oliveira et al. 2005). In analysing the results of our tests, the overall negative correlation was found to hold, especially for the results of the transverse method (D), for which there is a strong negative correlation (r = -0.78). As the measurement distances become larger, the moisture content becomes less important.
To evaluate the stiffness of wood, prediction equations based on specific variables were examined. The variables (x) comprised the ultrasound velocities together with density and moisture content, which can be determined without any substantial damage to the wood structure (Table 3). It can be concluded that the best prognosis of the modulus of elasticity is given by density and the longitudinal measurement (C), which explain 64.3% and 37.3% of the variability of the modulus of elasticity, respectively.

Next, all six chosen variables were included in a multiple regression analysis. It turned out that the prediction equation accounts for 81.1% of the variability of the modulus of elasticity (Table 4). Variables A and D (p-values greater than 0.05) were eliminated one by one, because their influence in the multiple regression analysis was not statistically significant.
The resulting prediction equation has the linear form E = β0 + β1·B + β2·C + β3·ρ + β4·W, where E is the modulus of elasticity, MPa; B and C are the velocities from the corresponding measuring techniques, m/s; ρ is density, kg/m³; and W is moisture content, %. Similar data processing was conducted for bending strength, for which regression equations for each variable were first derived. None of the equations predict the bending strength with |r| > 0.3 (Table 5). Once again, the best correlations in the individual regression analyses were obtained with variable C and density, which explain 17.8% and 26.1% of the variability of bending strength, respectively. Within this investigation, again, only variable A was not statistically significant concerning the relevance of mean values.
The corresponding prediction equation for bending strength has the form σ = β0 + β1·B + β2·C + β3·ρ + β4·W, where σ is the bending strength, MPa; B and C are the velocities from the corresponding measuring techniques, m/s; ρ is density, kg/m³; and W is moisture content, %.
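As an illustration of the multiple regression step described above, the following Python sketch fits a linear model of the same form by ordinary least squares. The data are synthetic placeholders, and the resulting coefficients are not those reported in the paper.

```python
# Minimal sketch (illustrative only; the data are random placeholders, not the study's
# measurements): fitting E = b0 + b1*B + b2*C + b3*rho + b4*W by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 92                                    # same sample size as the study; values are synthetic
B   = rng.normal(4500, 400, n)            # indirect velocity, 600 mm spacing [m/s]
C   = rng.normal(5500, 300, n)            # direct longitudinal velocity [m/s]
rho = rng.normal(450, 50, n)              # density [kg/m^3]
W   = rng.normal(10, 2, n)                # moisture content [%]
E   = 2.0*B + 1.5*C + 8.0*rho + 100*W + rng.normal(0, 500, n)  # synthetic response [MPa]

X = np.column_stack([np.ones(n), B, C, rho, W])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((E - pred)**2) / np.sum((E - E.mean())**2)
print("coefficients (b0..b4):", np.round(coef, 3))
print("R^2 =", round(r2, 3))
```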
Conclusions
The main aim was to investigate the possibilities of applying ultrasound measurement in assessment of the physical-mechanical properties of wood. For achieving this purpose a commercial device with 54 kHz transducers was used to measure the ultrasound velocity in the timber samples sawn from logs and beams obtained from existing buildings. The total number of samples was 92.
The analyses of the different measurement techniques showed that the shorter the measuring distance, the more local the evaluation of the sample. It turned out that the best predictors of the physical-mechanical properties were provided by the longitudinal measurement (C) and the indirect measurement with a distance of 600 mm (B), in good agreement with previous experimental observations.

The prediction equations for the main strength parameters, the modulus of elasticity and bending strength, were found using the variables of density, moisture content, and four different ultrasound measuring methods. As a result of the data analysis, a prediction equation for the modulus of elasticity accounting for about 80% of its variability was obtained, using the variables of density, moisture content, indirect measurement B, and direct measurement C. Methods A and D were excluded. The other equation, predicting the bending strength, explained about 40% of the variability, and the same variables were excluded as before.

Thus it can be concluded that, with these parameters, it is possible to predict the modulus of elasticity and bending strength of a timber element with an accuracy of about 80% and 40%, respectively. The other methods of measuring ultrasound velocity, over smaller measuring distances, give only local results for the specimens; therefore, they do not provide an assessment on a larger scale and are not statistically significant. Thus the larger the measuring distance, the better the dependent variable can be predicted.

The assessment of the strength of the samples by these methods will always be somewhat imprecise, because of the imperfection of on-site measurements and the lack of standardised samples. It should also be noted that in practice it is not possible to measure ultrasound velocities in the longitudinal direction.

Therefore more research in this field is needed, especially in the search for stronger relationships between acoustic properties obtained from indirect methods and mechanical strength values. Although the field of acoustics is a rather young one, this research clearly shows the potential of evaluating wood structures in situ with this kind of non-destructive method. | v2
2019-05-21T13:03:51.995Z | 2019-01-28T00:00:00.000Z | 159200959 | s2orc/train | Heterogeneity in the Relationship Between Biking and the Built Environment
Bicycling is an environmentally friendly, healthy, and affordable mode of transportation that is viable for short distance trips. Urban planners, public health advocates, and others are therefore looking for strategies to promote more bicycling, including improvements to the built environment that make bicycling more attractive. This study presents an analysis of how key built environment characteristics relate to bicycling frequency based on a large sample from the 2012 California Household Travel Survey and detailed built environment data. The built environment characteristics we explore include residential and intersection density at anchor locations (home, work, school), green space, job access, land use mix, and bicycle infrastructure availability. Analyses are conducted separately for three distinct demographic groups: school-age children, employed adults, and adults who are not employed. The key conclusion from this work is that the relationship between bicycling and some built environment characteristics varies between types of people – most dramatically between adults and children. To develop targeted policies with scarce resources, local policymakers need specific guidance as to which investments and policy changes will be most effective for creating "bikeable" neighborhoods. Our work indicates that the answer depends – at least in part – on who these bikeable neighborhoods are meant to serve.
INTRODUCTION
Bicycling offers a wide range of benefits to both individuals and society. Cycling is an environmentally friendly and affordable mode of transportation that is viable for short distance trips. Using bicycles instead of cars reduces fuel consumption and associated harmful emissions, provides exercise for the cyclists, and can improve quality of life overall. For these reasons, urban planners, public health advocates, and others are looking for strategies to promote more bicycling, including improvements to the built environment that make bicycling more attractive. An understanding of the relationship between the built environment and individual decisions to bicycle provides an important basis for the development of such strategies. There are numerous studies in the current literature that focus on understanding the link between the built environment and bicycling from the perspectives of both health and transport (e.g., Pikora et al., 2003;Handy et al., 2002).
The present study adds to this existing literature by focusing on the heterogeneity in the association between built environment characteristics and bicycling behavior. A comparison of prior studies indicates that associations between bicycling and built environment characteristics are not always consistent. Because each study's sample, measure of the built environment, and estimation method were different, however, it is unclear whether the inconsistency in prior findings was due to different samples, measures, and methods, or due to heterogeneity in the underlying relationships. We use a single large survey together with measures of built environment characteristics and consistent statistical methods to estimate the association between bicycling frequency and built environment characteristics for different subpopulations. Specifically, we explore these relationships separately for three distinct demographic groups: school-age children, employed adults, and adults who are not employed. We also evaluate heterogeneity by gender among adults and by age among children.
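As an illustration of this group-wise estimation strategy, the sketch below fits a separate regression model for each demographic group. It is an assumption-based illustration: the Poisson GLM, the variable names, and the data structure are placeholders, and the paper's actual estimator is not specified in this excerpt.

```python
# Minimal sketch (assumption-laden, not the authors' code or model): estimating the
# association between bicycling frequency and built-environment measures separately
# for each demographic group. A Poisson GLM is used purely for illustration.
import statsmodels.api as sm
import pandas as pd

def fit_by_group(df: pd.DataFrame, outcome: str, predictors: list, group_col: str):
    """Fit one count-data model per demographic group and return the fitted results."""
    results = {}
    for group, sub in df.groupby(group_col):
        X = sm.add_constant(sub[predictors])
        model = sm.GLM(sub[outcome], X, family=sm.families.Poisson())
        results[group] = model.fit()
    return results

# Hypothetical column names, for illustration only:
# results = fit_by_group(survey_df, "bike_trips_per_week",
#                        ["res_density", "intersection_density", "bike_lane_miles"],
#                        "demographic_group")
```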
Our findings indicate that substantial heterogeneity exists in the relationship between bicycling and built environment characteristics, especially between adults and children, between men and women, and between different ages of children. Most dramatically, we find that certain key characteristics have opposite effects on bicycling for different groups. For instance, overall density is positively associated with bicycling for high school children, but negatively associated with bicycling for elementary and middle school children. In addition, we find a strong positive relationship between a more connected street network and bicycling for older children and women. This characteristic is not statistically significant for men, and has a negative association with bicycling for elementary school children.
These findings suggest that women and children are more risk-averse and distance-sensitive than men when it comes to bicycling. Thus, our work adds quantitative evidence that supports policies and infrastructure that create "8 80 cities", suitable for bicycling by both 8-year-olds and 80-year-olds. The two main tenets are: create bicycle networks that connect residential areas with destinations, and make them safe and comfortable to use. This strategy may help encourage children -many of whom already bike in their neighborhoods -to begin bicycling for transportation as they get older, and to continue bicycling into adulthood.
LITERATURE REVIEW
Of the large number of studies that we reviewed, only a handful investigated heterogeneity in the relationship between the built environment and bicycling. Tables 1 and 2 summarize original research papers that estimated the association between bicycling and the built environment. Table 1 summarizes studies focusing specifically on children, while Table 2 summarizes studies focusing on adults or the general population. Included are papers containing built environment covariates in a multivariate statistical framework. Some prior studies included multiple variables in some of the built environment characteristic categories in Tables 1 and 2; our tables provide multiple listings for these if they were not consistently significant with the same sign. Following Pucher, Dill and Handy (2010) and Willis, Manaugh and El-Geneidy (2015), we do not include studies that group bicycling and walking together as a single "active travel" mode.
Density
Where residential density related variables were included in analyses, they were statistically significant approximately half of the time. Most studies indicated a positive relationship. This is expected, as denser suburban and urban environments generally have more destinations within easy biking distance. Most prior studies that found a negative relationship specifically involved children; Larsen et al. (2009) suggest that this finding is due to heavier traffic in denser areas, which can be a deterrent to cycling. Children -or perhaps more accurately parents -may be particularly sensitive to these safety concerns. Moran, Plaut, and Baron-Epel (2016) suggest that their negative finding is caused by a nonlinear relationship between density and cycling; they posit that their study area only contains densities high enough that the deterrent effects of density are more relevant than the additional accessibility provided.
Only one study of adults found a negative relationship. Conrow's (2018) negative association was found for users of the Strava mobile fitness app. The users of this app are likely primarily riding for recreation and exercise; they may be more motivated by country roads and paths, and less motivated by activity destinations.
Diversity
Land use diversity was statistically significant approximately a quarter of the time it was included in the reviewed prior studies. When statistically significant, the estimated relationship was positive, indicating that in neighborhoods with greater land use diversity, bicycling is more likely. Titze et al. (2008) suggested that the presence of shops and other services in one's home neighborhood encourages bicycling.
Connectivity
Where measures that indicate the connectivity of the street network were included in analyses, about a third of the estimated relationships with bicycling were statistically significant, and all were positive. Better connected streets allow for both shorter paths and more route choices between origins and destinations, reducing trip lengths and possibly also providing routes with less vehicle traffic.
Bicycle infrastructure
Bicycle-specific infrastructure such as bike lanes or paths, and bike-friendly infrastructure such as paved roadway shoulders were found to be statistically significant and positively associated with bicycling in approximately half of the analyses where they were included. This positive relationship is expected, since infrastructure specific for cycling is likely to encourage the activity. In fact, it is surprising that bicycle infrastructure is often found to be not significantly associated with bicycling. This may be simply because the presence of bicycle infrastructure is correlated with multiple other modeled factors that also encourage bicycling.
Alternatively, the presence of bicycle infrastructure may not reflect the presence of a functioning bicycling network. Indeed, a number of non-regression-based studies do suggest that a network of bicycling infrastructure is strongly correlated with bicycling activity. For example, Pucher and R. Buehler (2008) compared infrastructure and public policy in several European countries with the US. They illustrate that pro-bike and anti-car policies and infrastructure are much more common in European countries where there are also much higher cycling rates. Furth's (2012) review comes to a similar conclusion. T. Buehler and Handy (2008) detail the historical development of the extensive bicycle infrastructure in Davis, California. They observe that this infrastructure was at least partially responsible for cycling levels and a cycling culture they describe as similar to that of Amsterdam.
The only study that found a negative relationship between bicycle infrastructure and cycling, Ma and Dill (2015), included both perceived and objectively measured infrastructure variables in their model. They found that the perception of off-street paths was negatively associated with cycling, after controlling for the objective presence of off-street paths.
Green space
The presence of green space or parks was found to have a statistically significant relationship with bicycling only one quarter of the time that it was included in analyses. In those cases, the relationship was positive, indicating that more green space is associated with more bicycling.
Destination accessibility
When statistically significant, prior estimates of the relationship between destination accessibility and bicycling were generally positive, suggesting that the availability of destinations (such as jobs or retail) encourages cycling, most likely for transportation. Notably, no studies of children found this variable to be significant, suggesting that children may be cycling for recreation rather than to reach particular destinations. Two studies found destination accessibility to have a significant negative effect. Moudon et al. (2005) suggest that the destination accessibility measure they use (convenience store square footage in the area) likely represents the presence of gas stations and high-speed arterials. Ma and Dill (2015) suggest that their negative finding may be due to competition with walking.
Heterogeneity
Though most studies reviewed in Tables 1 and 2 do not investigate heterogeneity directly, some do focus on particular demographic groups. This allows heterogeneity to be examined in a limited way, and we have pointed out differences in findings across demographic groups in the discussions above. These studies use different datasets, methods, and measures of built environment, however. This makes it difficult to deduce whether a difference in effects across groups is due to an actual difference in the underlying relationship, or simply a difference in study design (as was also encountered by Wong, Faulkner and Buliung 2011). One study, while not looking at heterogeneity directly, did observe that results from their population diverged from results of studies of other populations in similar contexts. Van Dyck et al. (2009a) found that adolescents in a small town center in Belgium cycled less than their counterparts in a nearby suburban area. This finding diverged from a similar study of adults (Van Dyck et al., 2009b), leading the authors to conclude that the built environment may have differential effects on the two groups. Our research examines this possibility explicitly in the context of California, and our findings are consistent with these observations.
One type of heterogeneity that has been explored is the extent to which built environment factors may relate to the gender divide in cycling (Garrard, Handy, & Dill, 2012 provide an overview). Trapp et al. (2011) found most neighborhood built environment factors to be significant predictors of boys' cycling to school, but not girls' cycling, although the opposite effect was observed for the presence of busy road crossings. Mitra and Nash (2018) examined whether relationships between cycling and the built environment differed among male and female university students, and found that women were more sensitive to some built environment characteristics, such as the presence of high-speed roads, than men. Our study contributes by examining heterogeneity across a broader range of dimensions, including age and employment status as well as gender.
Notes to Tables 1 and 2: 1 This study combined density, land use mix, and street network connectivity into one "walkability" index. 2 This variable was part of an index that included access to stores as well as access to transit and neighborhood hilliness (Rosenberg et al., 2009). 3 The included variables related to "low stress" bike routes, which include infrastructure as well as quiet neighborhood streets.
DATA
The individual-level bicycling data for this project come from the 2012 California Household Travel Survey (CHTS). The CHTS sampled households throughout California, collecting household and individual demographic data, information about habitual commute trips, and a 24-hour travel diary. The initial survey included three questions related to bicycling:
1. How many bicycles in working condition are available to people in your household?
2. In the past week, how many times did you/this person ride a bicycle outside, including bicycling for exercise?
3. How do/does you/this person normally get to this primary job/school?
Bicycle trips were also reported in the travel diary, but because such a small percentage of respondents bicycled on the travel diary day, results based on those data are not presented here.
We used the first of these questions to restrict our sample to the two-thirds of CHTS respondents living in households with at least one working bicycle, and used the second question's responses as our outcome variable. Using a 7-day trip count rather than a 24-hour travel diary is an advantage when studying an activity such as bicycling, which may be infrequent. Answers to the third question helped us interpret the results. The sample used for this analysis includes 45,027 individuals in 18,007 households. To investigate the link between bicycling and built environment characteristics, we paired the CHTS data with land use and census-based data. Theory as well as prior studies guided our choice of built environment characteristics to include (see Table 3). Short trip lengths allow travelers the flexibility to bicycle. Dense and mixed-use development can shorten trip lengths by bringing homes close to jobs, schools, and other key destinations. Built environment features that lengthen trips can discourage bicycling, such as lack of street connectivity or large tracts of undeveloped land. In busy areas, safety-enhancing infrastructure such as bike lanes or paths is also critical. Table 4 provides summary statistics (mean, standard deviation, and range) for both built environment and demographic control variables included in the regression models.
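A minimal sketch of this sample restriction and outcome construction is given below. The column names are placeholders chosen for illustration, not the CHTS's actual variable names.

```python
import pandas as pd

def build_analysis_sample(persons: pd.DataFrame) -> pd.DataFrame:
    """Keep respondents in households with at least one working bicycle
    (question 1) and use the 7-day bicycling count (question 2) as the outcome."""
    sample = persons.loc[persons["hh_working_bikes"] >= 1].copy()
    sample["bike_trips_7day"] = sample["bike_trips_past_week"].astype(int)
    return sample
```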
Using geocoded locations of home, work, and school, we identified characteristics of the built environment for these key anchor locations for each CHTS respondent in our sample. Built environment information was derived from three sources: the 2012 Urban Footprint base variables, the American Community Survey, and the Longitudinal Employer-Household Dynamics (LEHD) data. The Urban Footprint variables include land cover, parcel, census, and transportation network information measured at the resolution of a 150-meter grid. The geographic extent of our analysis was determined by the extent of the Urban Footprint data, available for the San Francisco Bay Area, the Sacramento metropolitan area, the San Joaquin Valley, the Los Angeles metropolitan area, and San Diego County. This represents all major urban and suburban areas in California.
Built environment characteristics from the Urban Footprint dataset are measured within 1-mile buffers of each respondent's home and, where applicable, work or school locations. These variables capture the land use surrounding where survey respondents are actually located, rather than summarizing census tract-level information. In addition, an indicator for Central Business District census tracts was developed as part of a neighborhood typology analysis (Salon, 2016). Local and regional job access measures were calculated as distance-weighted sums of employment estimates (LEHD) at the census block group level. To capture additional aspects of built environment bicycle friendliness, we also include the percent of commuters reporting non-motorized modes in the home census tract.
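As an illustration of how a distance-weighted job access measure of this kind can be computed, the sketch below sums block-group employment within a distance band. The inverse-distance decay and the 0.5-mile floor are assumptions for illustration; the paper does not state its exact weighting function here.

```python
import numpy as np

def job_access(jobs: np.ndarray, dist_miles: np.ndarray,
               d_min: float = 0.0, d_max: float = 5.0) -> float:
    """Distance-weighted sum of block-group employment within a distance band
    (e.g. 0-5 miles for local access, 5-50 miles for regional access)."""
    in_band = (dist_miles >= d_min) & (dist_miles < d_max)
    # Assumed inverse-distance decay with a 0.5-mile floor to avoid dividing
    # by very small distances.
    weights = 1.0 / np.maximum(dist_miles[in_band], 0.5)
    return float(np.sum(jobs[in_band] * weights))
```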
A limitation of these data is that we do not know if the reported bicycling trips began at home; some people drive with their bicycles to areas with desirable built environment characteristics for recreational bicycle trips. This means that our built environment measures at home, work, and school locations do not necessarily capture the built environment that was actually relevant for the bicycling trips taken -though we expect that they are relevant for a substantial fraction of those trips. Salon (2016) calculates that 87% of California bike trips reported in the 2009 National Household Travel Survey begin or end at home.
The main contribution of this paper is to highlight how the relationships between bicycling and the built environment vary according to demographic characteristics of the cyclists. As a precursor to this, the box-and-whiskers diagram in Figure 1 graphically displays the large differences in bicycling frequency between different ages of school children, employed adults, and not-employed adults by gender. The boxes represent the 25th and 75th percentiles of the distribution of weekly bicycling trips for each demographic category, with the median indicated by a horizontal boldface line. It is evident from this figure that children bicycle more than adults, and that males bicycle more than females in each category. It is particularly striking that more than 75% of women did not ride a bicycle at all in the one week reporting period. We estimate separate models for each demographic group, and where relevant we include interaction terms to estimate separate relationships between built environment characteristics and bicycling for males and females, and for different age categories of children.
Table 3 (excerpt). Built environment variable definitions:
- Land use mix: Indicator of the residential-employment mix of developed acres within 1 mile of the home location.
- Road connectivity, number of intersections (UF): The number of non-highway intersections of three or more streets within one mile of home, work, and school locations.
ANALYSIS
Since our outcome variable is the number of bike trips a person made in the last week, we opted to use a count model, which is suitable for modeling nonnegative integer outcomes. The most common type of count model is the Poisson model, which models the outcome process as a Poisson distribution. This model has a requirement that the mean of the data be equal to the variance. This requirement is relaxed by the negative binomial model, which adds an error term to account for unobserved heterogeneity and allows for variance greater than the mean (so-called overdispersion; Washington, Karlaftis, and Mannering, 2011, Ch. 11). The Cameron and Trivedi test of overdispersion (described in Washington, Karlaftis, and Mannering 2011, pp. 293-4) indicates that our data are overdispersed, so we use the negative binomial model. 1 The negative binomial model is a generalized linear model, wherein a linear combination of predictors is exponentiated to model the outcome variable (Equation 1).
E[y | x] = exp(xβ)        (Equation 1)
where y is the number of bike trips in the last week, x is a vector of model covariates, and β is a vector of estimated model coefficients. The coefficients, therefore, are not linear marginal effects, but rather a difference in logarithms. In lieu of raw model coefficients, we report incident rate ratios (IRRs), which are more readily interpretable. Mathematically, the IRRs are simply the exponentiated raw coefficient estimates (Equation 2).
IRR_j = exp(β_j)        (Equation 2)
where j indexes model covariates, and x, y, and β are as specified under Equation 1. IRRs are equal to the ratio of the predicted rate (i.e. count) of bicycle trips when a covariate increases by one unit to the original predicted rate of bicycle trips. An IRR of 1 indicates that a variable has no effect on bicycling; increasing that variable by one unit does not change the predicted rate of bike trips. An IRR of 1.2 would indicate that a unit increase in the covariate is associated with a 20% increase in bicycle trips. IRR values below 1 indicate a negative association. Due to the particular functional form of this model, the IRRs are constant over the full range of the variable space.
Note that because IRRs imply a percent change in bicycling, IRRs of the same magnitude can indicate very different absolute effects, depending on the number of bicycle trips originally taken. This can prove confusing. For instance, children in this sample bicycle more than employed adults. The two groups have similar estimated IRRs for the relationship between walking trips in the past week and bicycling trips in the past week (1.07 vs 1.06), indicating that an additional walk trip is associated with a 6-7% increase in weekly bicycle trips. The implied absolute effect size for these two groups, however, differs by more than a factor of 2 (0.14 vs 0.06). Because both may be relevant, we report and discuss both IRRs and marginal effect results.
Weights for each individual are included in the CHTS dataset, and can be used to adjust results to be representative of the population of California. We used these weights to create the average number of bike trips by age, employment, and gender in Figure 1 and when calculating weighted average marginal effects. These marginal effects indicate the weighted average across this sample of the absolute change in the number of bicycle trips in 7 days associated with a one unit increase in each variable.
The model itself, however, is estimated using unweighted data. If the probability of sampling a particular individual is uncorrelated with the dependent variable (number of bike trips) conditioned on the covariates, weighting is unnecessary (Solon, Haider, & Wooldridge, 2015). Because our model includes many of the CHTS weighting variables as covariates, we conclude that the relationships we care about should be properly estimated in an unweighted regression. For full transparency, the descriptive statistics in Table 4 are also unweighted.
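To make these steps concrete, the sketch below is a rough Python analogue of this workflow (the authors used Stata's nbreg): it fits a negative binomial count model, converts coefficients to IRRs per Equation 2, and computes weighted average marginal effects. The outcome, covariate, and weight column names are placeholders, and the dispersion parameter is assumed to be exposed as "alpha" by statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_weekly_bike_trips(df: pd.DataFrame, covariates, weight_col="person_weight"):
    """Fit a negative binomial model of weekly bike trips; report IRRs and
    weighted average marginal effects (AMEs)."""
    X = sm.add_constant(df[covariates])
    y = df["bike_trips_7day"]                      # assumed outcome column
    res = sm.NegativeBinomial(y, X).fit(disp=False)

    beta = res.params.drop("alpha")                # drop the NB dispersion parameter
    irr = np.exp(beta)                             # Equation 2: IRR_j = exp(beta_j)

    # AME for a log-link count model: dE[y|x]/dx_j = beta_j * exp(x'beta),
    # averaged over the sample using the survey weights.
    mu = np.exp(X.values @ beta.values)            # predicted weekly trip counts
    w = df[weight_col].to_numpy()
    ame = {c: float(np.average(beta[c] * mu, weights=w)) for c in covariates}
    return irr, ame
```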
We tested the models for multicollinearity. Interpreting variance inflation factors can be difficult when variables are included both in their base form and in interaction terms, as the interaction term is perforce somewhat correlated with the base variable. However, removing all interaction terms from our final model yields variance inflation factors for all variables below 4, with the exception of income (because it is correlated with income squared) and levels of car ownership (because one-car households cannot be two- or three-car households, and vice versa).
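A minimal sketch of this check, computing variance inflation factors with statsmodels; the covariate list is assumed to exclude interaction terms, as described above.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, covariates) -> pd.Series:
    """Variance inflation factor for each covariate (constant excluded)."""
    X = sm.add_constant(df[covariates]).values
    return pd.Series(
        [variance_inflation_factor(X, i) for i in range(1, X.shape[1])],
        index=covariates,
    )
```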
Principal Component Analysis
Population and intersection density within one mile of both home and work/school locations are important built environment characteristics. These variables are highly collinear, however, which precludes including all of them in the model directly. To solve this problem, we used principal component analysis to identify three uncorrelated factors that explain most of the variation in these four density variables, and included these principal components as covariates in our models of bicycling frequency. Table 5 presents the principal component loadings. The analysis was done separately for each model subsample (children, not-employed adults, and employed adults), but the interpretations are the same for each subsample.
Component 1 (General density) has positive loadings on all original density variables. Areas with high values for all density variables also have high values for this component.
Component 2 (Home vs work/school) has positive loadings for both population and intersection density measured at the home location, and negative loadings for densities measured near work/school. Thus, a respondent living in a higher density environment relative to their work/school will have a larger value for this component. Of course, there is no work/school location for not-employed adults, so this component is absent for that subsample.
Component 3 (Intersection vs. population density) has positive loadings for intersection density and negative loadings for population density. Respondents who have a larger value for this component live or work in areas with street networks that are particularly well connected, in comparison with areas of similar density.
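A sketch of this step using scikit-learn is shown below. The column names are assumptions, and the loadings it produces are only analogous to Table 5 (sign conventions and component order can differ); for the not-employed adult subsample only the two home-location variables would enter.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

DENSITY_COLS = ["pop_density_home", "int_density_home",
                "pop_density_workschool", "int_density_workschool"]
COMPONENT_NAMES = ["pc1_general_density", "pc2_home_vs_workschool",
                   "pc3_intersection_vs_population"]

def add_density_components(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise the four density variables and append the first three
    principal component scores as new covariates."""
    z = StandardScaler().fit_transform(df[DENSITY_COLS])
    pca = PCA(n_components=3)
    scores = pca.fit_transform(z)
    out = df.copy()
    for i, name in enumerate(COMPONENT_NAMES):
        out[name] = scores[:, i]
    # pca.components_ holds the loadings analogous to Table 5.
    return out
```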
We used GeoDa software to test for spatial autocorrelation in our model residuals, and found virtually none. The main statistical analysis was performed using Stata 15.1 for Mac, using the nbreg command and clustering errors by household.
RESULTS
Tables 6 and 7 present our main model IRR estimation results and weighted average marginal effects, respectively. Each of these tables is divided into three sections -one for children, one for not-employed adults, and one for employed adults. In the children's model, the relationships between certain variables and bicycling are estimated and reported separately for elementary (age 5-10), middle (age 11-13), and high school (age 14-17) children. In the adult models, the relationships between certain variables and bicycling are estimated and reported separately for women and men. Many of the variables in these models are insignificant, but are retained in order to present comparisons between groups and prevent omitted variable bias. Removing these variables has almost no effect on the estimated coefficients of the remaining variables; the Appendix contains these alternate estimation tables. The remainder of this section is divided into three parts. The first details our main results for the associations between the built environment and bicycling, the second summarizes our findings for the associations between demographic characteristics and bicycling, and the last presents limitations.
Note: 95% confidence intervals are given below point estimates. Asterisks designate statistical significance, where *** indicates p<0.01, ** indicates p<0.05, * indicates p<0.10. z-statistics are shown in italics. a These are variables created using principal component analysis.
Built environment factors
Our models include estimates of the relationship between bicycling frequency and density, with density represented by the principal components of intersection and population densities at home and at work/school, as well as a binary variable indicating whether a person's home is in a central business district. In addition, we estimated the relationship between bicycling and five other built environment factors: access to jobs and services, green space near home, availability of bicycle infrastructure near home and work/school, extent of local mixed-use development, and percent of commuters using nonmotorized modes. Here we discuss our findings for each in turn.
Density
As described in section 4.1, we used principal component analysis to address the multicollinearity of the intersection and population density variables around home and work. This led to a "general density" component, a component for high home accessibility relative to work/school ("home vs. work/school"), and a component for high intersection density relative to population density ("intersection vs. population density").
The general density variable has a negative relationship with cycling for elementary and middle school children, and has a positive relationship for high school children. It is also positive and marginally statistically significant for not-employed males, and not significant for the other categories. This is consistent with our expectations; younger children (and their parents) are more likely to be concerned about safety and are thus likely to be most comfortable biking in lower density settings. Several prior studies of children found a similar deterrent effect of density (e.g. Larsen et al., 2009;Moran, Plaut, & Baron-Epel, 2016). Older children and adults are more likely to be biking for transportation, and thus bike more in areas with more activity.
The second component, density at home relative to work/school, is not significant for elementary and middle school children, but has a positive relationship with bicycling for high school children. It also has a significant positive sign for employed adult males. This makes sense, as most bike trips begin or end at home. Controlling for overall density at both home and work, higher density near home is associated with more bicycling.
Finally, the third component represents high intersection density relative to population density. It has a positive and significant relationship with cycling for employed and not-employed females, as well as middle and high school students. This makes sense; higher levels of intersection density given a particular population density indicate that there are likely more routes available. There may also be a safety benefit as traffic is spread out over more, smaller streets; Ladrón de Guevara, Washington, and Oh (2014) found that higher levels of intersection density are associated with lower levels of fatal crashes, but higher levels of injury crashes. This component has a negative and significant relationship with biking for young children, indicating that a lower level of street connectivity given a particular level of density increases cycling. This can be interpreted as young children cycling for recreation primarily within their neighborhoods, perhaps on safe, disconnected cul-de-sacs, and not needing access to destinations via a connected street network.
The final density-related variable we include is a binary indicator for central business districts (CBDs). This variable is derived from a cluster analysis of census tracts based on several built environment characteristics, including density, accessibility, bicycle and pedestrian friendliness, and housing mix, and is defined in Salon (2016). CBD tracts are extremely dense; most of those in California are located in downtown San Francisco. Including this variable allows us to identify some nonlinearity in the relationship between density and bicycling. Living in a CBD tract has a large negative relationship with biking, and is significant in all models. CBDs have many alternative transportation options other than cycling available, and cycling may not be practical due to safety concerns.
Access to jobs and services
We include three variables in our models to represent access to jobs and services: distance to school or work, local job accessibility within five miles of home, and regional job accessibility in the range from five to fifty miles from home.
The first, relevant for both school children and employed adults, is the logarithm of the straight-line distance between each person's home and their school or work location. We use the logarithm because we expect that variation in distance affects bicycling more at shorter distances than at longer distances. In general, people with longer commutes bicycle less. There is some variation in this, however. Distance to school does not have a statistically significant relationship with elementary school children's bicycling. This is consistent with the fact that substantially fewer elementary school children in this sample bike to school (1.5%), compared with older children (4-5%). Distance to school, therefore, may be less relevant for elementary school children's bicycling frequency.
When commute distance is included in the analysis, our local job access variable becomes statistically insignificant for employed adults. A negative relationship between regional job access and bicycling for both employed and not-employed adults remains statistically significant, however. Our interpretation is that more jobs beyond 5 miles also represent more destinations beyond 5 miles, which reduces the likelihood of bicycling for transportation.
Green space
The proportion of the land area within one mile devoted to parks is included as a measurement of local green space in our regression model. It is negatively associated with bicycling frequency for children, and not significantly associated with bicycling for adults. Although this negative association is different from what others have found regarding the relationship between green space and bicycling, it is intuitive in some respects. Parks present substantial barriers by reducing street connectivity, lengthening trip distances and discouraging bicycling -especially for children who may be more sensitive to distance than adults.
Bicycle infrastructure availability and prevalence of nonmotorized commuting
We measure bicycle infrastructure as the length of designated bicycle routes within one mile of each respondent's home and work/school. Included in the estimated models are two representations of this information: length of bicycle routes within one mile of home, and a binary indicator for individuals for whom bike route prevalence is in the 75th percentile within one mile of both home and work/school locations. As expected, where these variables are statistically significant, they have positive associations with bicycling.
Our models also include a measure of the extent to which nonmotorized transport is used for commuting in the home census tract of each survey respondent. This metric is likely correlated with the overall bike-friendliness of the infrastructure, which includes aspects beyond bicycle lanes such as traffic levels, speeds, and the prevalence of other cyclists on the roads. For all respondent categories, it is positively associated with bicycling frequency.
Land Use Mix
We included an entropy measure to represent the mix of residential and nonresidential development within a mile of each respondent's home. It is calculated as -∑ p_i ln(p_i)/ln(k), where p_i is the proportion of each land use in the area, and k indicates the number of land uses (in our case, two). Winters et al. (2010) use this measure as well. The metric varies between 0 and 1, where higher values indicate more balance between the two types of development in the neighborhood. For elementary and middle school children, land use mix had no relationship with bicycling frequency. For high school children, however, more mixed areas are associated with lower rates of bicycling, all else equal. Land use mix did not affect bicycling of employed adults and not-employed women, but had a positive relationship with bicycling for not-employed men.
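For concreteness, a small helper implementing this two-category entropy index is sketched below; the input share is an assumed name for the residential proportion of developed acres within one mile of home.

```python
import math

def land_use_mix(p_residential: float, k: int = 2) -> float:
    """Two-category entropy index of land use mix: 0 = single use,
    1 = perfectly balanced residential/nonresidential development."""
    shares = [p_residential, 1.0 - p_residential]
    entropy = -sum(p * math.log(p) for p in shares if p > 0.0)
    return entropy / math.log(k)
```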
Socioeconomic factors
This analysis includes a wide variety of household and individual socioeconomic factors as controls. Household-level factors include household size, the presence of children, vehicle and bicycle ownership, income, a home ownership indicator, an apartment indicator, and whether a member of the household commutes by transit. Individual factors include gender, age, and the number of walking trips reported in the last 7 days, as well as whether the person holds a driver's license, self-identifies as disabled, holds a transit pass, holds a bachelor's degree, or is white. For employed adults, an indicator for individuals who identify as scientists, teachers, or doctors is also included. Most of these factors are statistically significant predictors of bicycling frequency, and most of the estimated relationships are as expected. We provide details below.
Household-level factors
Household size is negatively associated with bicycling for adults, but has no relationship with children's bicycling. The presence of children has an additional negative relationship with bicycling for both employed and not-employed adults; many in the latter group are stay-at-home parents. This is interesting because one might imagine the opposite relationship; since children bicycle more than adults, we might have predicted that in families with children, the adults bicycle more as well.
Increasing levels of household bicycle ownership have a positive and increasing relationship with bicycling in all models, but increasing levels of household car ownership have a negative relationship with bicycling only for adults. The fact that children's bicycling frequency is unrelated to household vehicle ownership suggests that rides in household vehicles do not substitute for children's bicycling. It may be that children bicycle mainly when the alternative mode is walking, a school bus, or not taking the trip at all. This finding is consistent with prior literature (e.g. Ewing et al., 2004;Moran et al., 2016), but has not been previously highlighted because individual prior studies have not analyzed both adult and child bicycling.
Household income has a more complex relationship with bicycling frequency. As income rises from low levels, bicycling frequency declines rapidly. At high incomes-between $170 and $215K annually, according to our model estimates-this relationship becomes less important and may actually reverse. Our models represent this by including household income as both a linear and squared term.
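The quoted $170-215K range corresponds to the turning point of this quadratic-in-income specification; this is the standard algebraic result rather than a formula reproduced from the paper's tables. With β1 the coefficient on income and β2 the coefficient on income squared, the predicted effect of income reverses at

```latex
% Turning point of a quadratic-in-income specification
\frac{\partial}{\partial \text{income}}\bigl(\beta_1\,\text{income} + \beta_2\,\text{income}^2\bigr) = 0
\quad\Longrightarrow\quad
\text{income}^{*} = -\frac{\beta_1}{2\beta_2}.
```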
Individual-level factors
Age and gender are included in our models as separate variables, and they are also interacted with other variables of interest. Age category is the key interaction variable in our children's bicycling analysis, and the key interaction variable is gender in our adult analysis. Table 8 reports comprehensive marginal effects for children's age category and adult gender. These marginal effects include not only the effect of the male and age category dummy variables, but also the effects of these variables embedded in the estimated effects of all of their interactions included in the model.
Adult results regarding both age (older adults bike less; see Tables 6 and 7) and gender (women bike less; see Table 8) are consistent with expectations. Our results indicate that the gender difference is larger for not-employed adults than for employed adults, at approximately one bike trip each week. The gender gap in cycling-males cycle more, on average, than females-is well documented in the literature (e.g. Pucher et al., 2011).
Among schoolchildren, we find that bicycling declines with age such that the average high schooler makes 0.59 fewer bicycle trips each week than the average elementary school child. Our model also estimates separate associations between gender and bicycling for elementary, middle, and high school students. The estimated marginal effects of being a boy, given that one is in a particular age group, get larger as children get older (see Table 7). For elementary age children, boys are predicted to make 0.39 more weekly bike trips than girls, holding all else constant. For middle and high school age children, that gender difference is 1.00 and 1.23, respectively. As expected, individuals who self-identify as disabled bicycle less. Children and employed adults with driver's licenses bicycle less than those without licenses, and this relationship is especially large for children. Education plays a role as well, with adults holding bachelor's degrees bicycling more than less-educated adults. Further, employed adults who self-identified as scientists, teachers, or doctors bicycle even more. Race, however, is not associated with bicycling in our model.
Of interest, we find that both walking and transit use are positively associated with bicycling frequency; individuals who take more walking trips also take more bicycling trips, and employed adults who hold transit passes do the same. This provides evidence that "alternative modes" (to the car) complement one another to provide a multimodal mobility package.
Limitations
Three limitations of this work bear mention. First, our models do not control for residential self-selection: the idea that some people choose where they live partly based on preferences for transportation, including bicycling. This does not mean that our relationship results are invalid; it means that the associations we find do not necessarily imply that changing the built environment will affect bicycling for individuals already living in a neighborhood.
Second, there remains a substantial amount of variation in bicycling frequency that is not explained by our models. The deviance-based R² (Cameron and Windmeijer, 1997; Brilleman, 2011) of the models ranges between 0.16 and 0.19. Some of this variation undoubtedly reflects the random nature of individual-level weekly bicycling frequency, but some of it is also due to lack of data on all of the relevant determinants of bicycling.
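For reference, the deviance-based R² takes the general form below (a standard definition in the spirit of the cited papers, not a formula reproduced from them): one minus the ratio of the fitted model's deviance to that of an intercept-only model.

```latex
R^{2}_{\mathrm{dev}} = 1 - \frac{D\left(y,\hat{\mu}\right)}{D\left(y,\bar{y}\right)}
```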
Previous literature suggests some possible omitted variables. Attitudes toward biking have been shown to be predictive of bicycle travel (Willis, Manaugh and El-Geneidy, 2015). Other possible predictors include safety, both from traffic and, for children, from strangers (Buliung et al., 2014), and risk of bicycle theft (van Lierop, Grimsrud and El-Geneidy, 2015).
While omitting these potential predictors likely reduces the fit of the models, the larger concern is that they may be correlated with the covariates that are included in the model, causing omitted variable bias. In our model, for example, risk of bicycle theft may be correlated with residential location in the CBD, resulting in the CBD coefficient in our model being more negative than it otherwise would be.
Finally, our data are entirely self-reported, and rely on the accuracy of the respondent's recall over the past seven days. Previous work has shown that GPS-prompted travel diaries have fewer people reporting zero trips than in self-report travel diaries (Salon, 2016). It is also possible that some people might not see recreational cycling as a "trip" (particularly for short trips made by children within the neighborhood). However, the question of how many trips were made has lower respondent burden than asking the details of each trip, and the respondent is primed to think of cycling, which may help them recall shorter, recreational trips. We thus suspect that these data do not have the same underreporting problem that plagues travel diary data -especially for studies of active travel.
CONCLUSIONS AND POLICY IMPLICATIONS
This study contributes to the existing literature with a focus on comparing the relationship between built environment characteristics and bicycling frequency across demographic groups. Some of our findings are consistent across our groups of focus -children and adults, men and women, and different children's age groups. Distance to work/school is negatively associated with bicycling, living in a central business district is negatively associated with bicycling, and living in a census tract where it is more common to commute by nonmotorized modes is positively associated with bicycling.
Others of our findings are markedly different for different demographics. Where statistically significant for adults and older children, density is positively associated with bicycling. For younger school children, however, our measure of general density near home and school has a statistically insignificant or even negative relationship with bicycling. Similarly, land use mix has a positive relationship with bicycling for adults, but a negative relationship with bicycling for children.
These results might seem surprising at first, but they are actually quite intuitive. The direction of the hypothesized relationship between density and bicycling is not clear a priori. There could be a positive impact due to increased access to destinations in a dense area, or due to increased street network connectivity leading to more direct routes. There could also be a negative impact due to safety concerns; dense areas have more traffic, and this may suppress biking. High intersection counts could increase safety (because traffic is spread over many streets) or decrease it (because there are more intersections to cross, and many crashes occur at intersections).
Younger children and their parents are more likely to be concerned about safety and are thus likely to be more comfortable biking in lower density settings. Older children and adults are more likely to be biking for transportation, and thus are enticed by areas with more activity. Similarly, women may be more risk averse than men.
We also find that children -and to some extent women -are especially sensitive to physical barriers and bicycle-friendly infrastructure. Plentiful bicycle infrastructure in both the home and school neighborhoods is strongly and positively associated with bicycling for children. Older children and women bike more in neighborhoods with high intersections per capita. Children also tend to bike much less in neighborhoods that have a high percent of park land within 1 mile of the home. Children are inclined to bike because it is fun and they cannot drive, but they can be deterred by safety concerns or lack of connected, plentiful infrastructure.
A key conclusion from this work, therefore, is that the relationship between bicycling and some built environment characteristics varies between types of people -most dramatically between adults and children. This finding complements the related literature that highlights heterogeneity among bicyclists by identifying bicyclist typologies (e.g. Damant-Sirois and El-Geneidy, 2015;Dill and McNeil, 2013).
To develop targeted policies with scarce resources, local policymakers need specific guidance as to which investments and policy changes will be most effective for creating "bikeable" neighborhoods. Our work indicates that the answer depends -at least in part -on who these bikeable neighborhoods are meant to serve. Bikeability for young children strongly emphasizes safety, connectivity, and low-traffic environments, while bikeability for adults emphasizes the attractiveness and number of destinations within biking distance. Putting bicycle lanes on arterial streets, therefore, serves only a portion of the bicycling public. These two goals need not compete with one another, however; it is a rare bicyclist who will complain of infrastructure that is too safe.
Building neighborhoods that are bikeable for children is likely to have a knock-on effect on bicycling for adults -both now and in the future. Many adult trips are made with children in tow, meaning that these trips cannot be made by bicycle if the available infrastructure is not bikeable for children. Because they cannot drive, children are more likely to bicycle than adults. Further, children who bike are more likely to become adults who bike (Thigpen, 2017). Creating neighborhoods that are bikeable for children, therefore, will help to create a society in which children and adults alike will consider bicycling a viable mode of transport.
A Survey on the Willingness of Ganzhou Residents to Participate in "Internet + Nursing Services" and Associated Factors
Objective To investigate the willingness of Ganzhou residents to participate in "Internet + Nursing services" and analyse the relevant influencing factors. Methods From May to June 2021, 426 Ganzhou residents were surveyed using an Internet + Nursing services questionnaire and the relevant influencing factors were analysed. The questionnaire comprised two parts: a demographic characteristics section and a questionnaire on residents' willingness to participate in Internet + Nursing services covering four dimensions (awareness, participation, trust, and need), rated on a 5-point Likert scale. Results A total of 397 valid questionnaires were recovered, and the total willingness score of Ganzhou residents to participate in the service was 11.59 ± 2.14. The results of multiple linear regression analyses showed that the presence of family members with a chronic disease or mobility difficulties, and an awareness and trust of Internet + Nursing services, were influencing factors of residents' participation willingness (P < 0.05). Conclusion The participation willingness of Ganzhou residents in Internet + Nursing services is modestly low, and the reasons for participation varied. It is suggested that the government and pilot hospitals strengthen the publicity surrounding these services, improve safety measures, strengthen team training, and develop products suitable for the elderly to increase residents' participation willingness.
Introduction
By the end of 2020, the number of people aged ≥60 years in China reached 264 million, accounting for 18.7% 1 of the country's total population. The population ageing level is severe, and nursing services in the country are in short supply. Actively responding to the ageing population is in line with China's people-centred development ideology, and is of great significance for achieving high-quality economic development and maintaining the long-term stability of the country. 2 Therefore, it is urgent that the establishment of a high-quality and efficient medical service system be accelerated and that the reasonable supply of medical resources be realised to cope with the large service demand brought about by ageing. 3 In this context, the concept of "Internet + Nursing services" has emerged. "Internet + nursing services" mainly refers to the use of registered nurses in medical institutions, relying on the Internet and other information technology, to provide chronic disease management, rehabilitation care, special care, health education, maternal and child care, Chinese medicine care, hospice care and other nursing services for patients discharged from hospitals or special groups of people suffering from illnesses and mobility problems, based on the mode of "online application and offline service". These services can make it easier for patients to meet their medical needs and integrate the use of nursing resources, which can reduce the stress on medical resources and better meet the diversified and multi-level health needs of the public. 4,5 The National Health Commission issued several documents emphasising the promotion of Internet + Nursing services to reduce the burden and pressure on families and society and to improve the use of public healthcare resources. 6,7 Experimental projects involving these services have been carried out in many first-tier cities in China with remarkable results. However, we found that these cities were mostly located in economically developed regions, while the less developed and rural regions remained in a "wait-and-see" situation. In these areas, the implementation of services lags significantly, giving rise to large regional variations in their development. 8 To keep up with the progress of the times, the Health Commission of Jiangxi Province formulated an implementation plan for the experimental Internet + Nursing services project in Jiangxi Province in April 2021, taking into consideration the actual situation of the region. This plan designated Ganzhou city as the only pilot city in Jiangxi Province for implementing the project. In this study, to promote the experimental project, a survey was conducted to better understand residents' willingness to participate in Internet + Nursing services, analyse the influencing factors on their responses, and propose targeted improvement strategies aimed at providing a basis for the implementation of Internet + Nursing services in Jiangxi Province.
Research Participants
From May to June 2021, convenience sampling was applied to select participants from the vaccinated population at a coronavirus disease 2019 (COVID-19) vaccination site in a tertiary care hospital in Ganzhou, as well as from the residents of four communities, as the study population. The inclusion criteria were as follows: ① aged ≥18 years; ② conscious and able to complete the questionnaire independently; ③ signed an informed consent form and willing to cooperate with the research.
The exclusion criteria were as follows: prospective participants who had lived in Ganzhou city for <6 months. The sample size was taken as 5-10 times the number of independent variables, allowing for a 10-20% rate of participants lost to follow-up; thus, the minimum sample size was determined as 94 cases. Before the questionnaire was formally distributed, the researchers conducted a pre-survey with a number of respondents equal to 10-20% of the total sample size.
Method
Survey Tool
By reviewing the literature [9][10][11] and referring to the relevant policy documents of the National Health Commission on the experimental project known as Internet + Nursing services, we designed a questionnaire to determine residents' willingness to participate in Internet + Nursing services in Ganzhou city following group discussions, consulting experts in related fields, and combining the data obtained with the purpose of the survey. The questionnaire had good reliability and validity; the overall Cronbach's alpha (α) coefficient of the questionnaire was 0.896, and the Cronbach's α coefficients for the three dimensions of awareness, participation, and trust were 0.945, 0.880, and 0.894, respectively. After two rounds of expert review, the content validity index (S-CVI) at questionnaire level was 0.905 and the I-CVI ranged from 0.875 to 1.000.
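As an illustration of the reliability statistic reported above, a minimal sketch of Cronbach's alpha for a block of Likert items is given below (respondents in rows, items in columns). This is the standard formula, not the study's SPSS output.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of item scores (rows = respondents,
    columns = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```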
The questionnaire comprised two parts: (1) a general demographic information section that included data on gender, age, education level, marital status, number of children, monthly income, mode of payment for medical expenses, the presence of chronic diseases, whether there were individuals in the family with chronic diseases or mobility problems, primary caregivers, the number of medical visits in the past year, and whether the participant had used Internet + Nursing services and communication tools; (2) a questionnaire on residents' willingness to participate in Internet + Nursing services, which included 19 items in total across 4 dimensions, ie 5 items on awareness, 4 on participation, 7 on trust, and 3 items on demand. In the awareness, participation, and trust dimensions, each section included one non-directed multiple-choice question, and the rest were single-choice questions; 3 non-directed multiple-choice questions were included in the demand dimension of the questionnaire. Percentages were calculated for all non-directed multiple-choice questions in the questionnaire.
Data Collection and Quality Control Methods
From May to June 2021, four uniformly trained surveyors explained the background of the study, the content of the survey, and the criteria for completing the questionnaire to the survey respondents. The participants completed the questionnaire independently after providing signed informed consent for inclusion in the study, and any queries about completing the consent form were answered using consistent wording. Considering the age of the survey population and their use of smartphones, the questionnaire was distributed in a combination of online and paper forms, and there was no difference in the content of the two versions. The paper questionnaires were checked on the spot after completion to ensure effective responses. The online questionnaire was completed with the help of online survey software (Questionnaire Star), where every item was compulsory and only one submission was allowed from the same IP address and device. Finally, the returned questionnaires were systematically screened to eliminate those with a completion time <180 seconds and those that did not conform to the response logic to ensure the quality of the questionnaire data. The two researchers completed the scoring process of the questionnaire separately and, when there was a scoring error, the two eventually agreed by looking at the raw data. A total of 426 questionnaires were distributed and 426 were collected (100% recovery rate); 397 questionnaires were valid (93.19% valid response rate).
Statistical Method
The SPSS 26.0 statistical software was used for conducting data analysis. Frequency, percentage, and mean ± standard deviation (x̄ ± s) were used to express the general information of survey respondents; a t-test and analysis of variance were used to analyse the factors influencing residents' participation, and the variables with statistically significant differences in the univariate analyses were included as independent variables in the hierarchical regression analysis. Differences were considered statistically significant at P < 0.05.
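A compact sketch of this blockwise (hierarchical) strategy is shown below. It is illustrative only: the study used SPSS 26.0, and the column names are placeholders rather than the study's actual coding scheme.

```python
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame):
    """Block 1: univariately significant controls; Block 2: add awareness and
    trust; compare the incremental variance explained."""
    block1 = "participation ~ education + monthly_income + family_chronic"
    block2 = block1 + " + awareness + trust"
    m1 = smf.ols(block1, data=df).fit()
    m2 = smf.ols(block2, data=df).fit()
    delta_r2 = m2.rsquared - m1.rsquared
    return m1, m2, delta_r2
```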
A Comparison of Resident Participation Scores with Different Demographic Characteristics
The total score of Internet + Nursing services participation among Ganzhou residents was 11.59 ± 2.14, and the differences in participation among residents with different education levels, different monthly incomes, and by whether there were people with chronic diseases or mobility problems in their family were statistically significant, with P-values of 0.0000, 0.043, and 0.001, respectively. Although the participation scores for residents with other demographic characteristics were not statistically significant, there was still a trend for higher age or a higher number of visits in the past year to be associated with higher Internet + Nursing services participation scores. The detailed results are shown in Table 1.
Survey on Ways for Residents to Learn About Internet + Nursing Services, Reasons for Acceptance, and the Demand for Using Them
Among the survey respondents, 39% of Ganzhou residents had not heard of Internet + Nursing services and 25.90% had learned about the project through medical personnel. The top three reasons for being willing to accept these services were time savings, labour savings, and convenience, accounting for 65.00%, 46.90%, and 38.00%, respectively; 69.30% of the survey respondents preferred to make appointments through WeChat. The detailed results are shown in Table 2.
Analysis of the Influencing Factors on Residents' Willingness to Participate in Internet + Nursing Services
The total score of residents' participation in Internet + Nursing services was used as the dependent variable, and the three variables with statistically significant differences (P < 0.05) in the univariate analysis, ie education level, monthly income, and whether there were people with chronic diseases or mobility problems in the family, were used as control variables. The degree of awareness and trust were used as independent variables, and all were entered into a hierarchical multiple regression analysis. The results show that education level, monthly income, and the presence of people with chronic illnesses or mobility problems at home are the main factors affecting the willingness of community residents to participate. The results are shown in Table 3.
Residents' Willingness to Participate in Internet + Nursing Services Was Modestly Low and the Reasons for This Were Diverse
According to the results of this study, the total willingness score of Ganzhou residents related to participation in Internet + Nursing services was 11.59 ± 2.14, which reflected a modestly low level. Additionally, 65% of residents stated that the main reason for participating in the services was time efficiency, 46.9% cited their reason as labour efficiency, and 38% ascribed their reasoning to convenience, reflecting the diversity of participant feedback. Internet + Nursing services are provided by nurses in the patient's home; accordingly, patients believed using the services could save time and labour because they did not have to wait in a hospital registration line. This may also have been related to the impact of the new coronary pneumonia epidemic, which has seen people gradually moving away from offline to online medical care and experiencing the benefits of doing so. 12 The Internet + Nursing services project can circumvent the time and spatial limitations of traditional medical services, thereby giving rise to convenience and making more people willing to engage with it.
Limitations Regarding the Primary Channels Through Which Residents Can Learn About Internet + Nursing Services
In this study, after controlling for the three confounding factors (education level, monthly income, and whether there were people with chronic diseases or mobility problems in the family), residents who had a degree of knowledge about Internet + Nursing services reported a higher participation willingness (P < 0.001). This was consistent with the findings of Liu et al 11 and may have been because residents with a level of awareness were more cognisant of the meaning and advantages of these services and were more likely to accept this approach as a medical treatment option. The results of this study showed that 61% of residents were aware of Internet + Nursing services, of which 25.9% had been informed by medical staff and 21.9% through mobile phone internet publicity. This indicated that the information channels of Ganzhou residents have gradually changed from traditional to mixed media platforms. However, there are limitations within the primary channels aimed at delivering information to residents about Internet + Nursing services. The next step to remedy this may be the development and publicity of a programme that fully considers the characteristics of different groups of people and a variety of media platforms.
Residents Had Concerns About the Safety of Internet + Nursing Services
In this study, after controlling for the three confounding factors (education level, monthly income, and whether there were people with chronic diseases or mobility problems in the family), residents who had a high level of trust in Internet + Nursing services were more willing to use them (P < 0.001). The reason for this may have been that residents with a stronger trust in these services believed they could meet their needs and ensure the medical safety of online medical care. In this study, 68% of the surveyed residents believed that laws and regulations related to Internet + Nursing services had to be established, and 62.7% believed that the qualifications of home nurses required strict examination; there were also strong calls for convenient complaint channels and emergency plans. As such, Ganzhou residents still had concerns about the safety of Internet + Nursing services.
How the Health Status of Family Members Affected Residents' Willingness to Participate in Internet + Nursing Services
Residents' willingness to participate in Internet + Nursing services was stronger when they had family members with chronic diseases or mobility problems (P < 0.05). The reason for this may have been related to the fact that most of the respondents in this study were young and middle-aged individuals (18-44 years). Although they did not have a strong demand for these services they represented individuals who were concerned about the health of their family members at home and, accordingly, had a strong willingness to participate in accessing these services.
Increase Promotion to Popularise Internet + Nursing Services
The Internet + Nursing services approach is new and requires more publicity. It is important to define the target audience, both for residents and nurses. Some studies [13][14][15] confirmed that nurses did not have a broad knowledge of these services. The most important way for Ganzhou residents to learn about these services is, however, to be informed by medical staff. For this reason, awareness of these services must be promoted among nurses, and a range of channels should be suitably employed to further promote them. This study showed that 98.2% of the surveyed residents used smartphones and 69.30% were more willing to use WeChat to make appointments. For this reason, WeChat can be used as a platform for promoting these services to residents. Additionally, lectures can regularly be conducted within the community to communicate with residents face-to-face and answer their questions. Furthermore, family bonds can be used to influence elderly members, ie by strengthening the promotion of services among younger family members, to gradually eliminate their rejection of Internet + Nursing services. Through publicity, the language of content related to Internet + Nursing services can be converted into easy-to-understand information. Furthermore, additional images and video can be used where relevant to replace wording to make it easier for residents to understand the information.
Improving Safety and Security Measures to Address Residents' Concerns
This study indicated that if residents had concerns about the safety of Internet + Nursing services their participation would also decrease. Therefore, it is necessary to improve safety measures to eliminate residents' concerns and increase their trust in these services. The relevant departments should improve the applicable laws and regulations. The long-term development of Internet + Nursing services requires law-based support because implementing regulations can give it a basis to carry out and follow. Second, the platform must have a smoothly operating complaint channel with a special person in charge that must pay attention to user feedback, address complaints promptly, and complete regular summary evaluations. Furthermore, a protective wall to ensure information security must be established. Studies have shown that patients are increasingly aware of securing their information 9 and have a high demand for security related to diagnosis and treatment information. It is recommended that an information security platform be created and that the information security knowledge of residents, nurses, and third-party platform personnel be ensured to establish and enhance information security awareness and facilitate the development of Internet + Nursing services.
Develop Products That are Suitable for the Elderly and Simplify the Operational Process
The physiological functions of the elderly gradually decline with age, as do their learning and memory abilities, making it difficult for them to use smart products and easy for them to become intimidated and give up. 16 For this reason, intelligent, humanised, and simple products must be developed for the elderly to stimulate their desire to use them. This can be achieved by simplifying operational interfaces, using images instead of text, and using large fonts to overcome limitations linked to the decline of physical functions among the elderly, which may help them feel more at ease when using software programs. Furthermore, the software can be made easier for the elderly to use by simplifying the operational process, for example by providing a one-key reservation function and voice announcements. Finally, a function can be provided that connects elderly users to their children, so that when they encounter a situation they cannot manage, they can reach their children with one key and be assisted to complete the operation.
Strengthening Team Training to Ensure Medical Safety
Medical risk is one of the main concerns of patients regarding Internet + Nursing services. 17 As the service providers, nurses' abilities are crucial for ensuring the medical safety of these services. First, medical institutions should strictly examine the qualifications and abilities of home nurses, 18 select outstanding nursing talent according to the admission system, and establish an elimination mechanism to enhance the sense of responsibility and urgency of home nurses. Second, a training system for nurses should be created based on the Internet + Nursing services project. The model created by Taizhou and Ningbo in Zhejiang can be used as an example: give full play to the role of the nursing association, set up a nursing talent pool, establish a skills training centre, 19 unify training content and standards, and conduct centralised training and assessment 20 to ensure the homogenisation of nursing quality within and outside of the hospital. In addition, it is necessary to build a service quality evaluation index system to improve the quality control of Internet + Nursing services, to detect hidden safety problems at an early stage, to ensure the sustainable development of the services, and to gain public trust in the services and their quality.
Study Limitations
There are some limitations to this study. Convenience sampling was adopted, which made the selection of sample units somewhat arbitrary and limits the extent to which the findings can be generalised to the overall population. However, in the sample size calculation stage, the number of study variables and the expected rate of missed visits were fully considered to guarantee an adequate sample size; in the individual inclusion stage, a strict quality control programme was developed to ensure the logic and accuracy of the data, which further compensated for the shortcomings of convenience sampling and improved the validity of the inferences drawn.
Conclusion
The results of this survey showed that the willingness of Ganzhou residents to participate in Internet + Nursing services was at a modestly low level. For participants who noted the presence of individuals with chronic diseases or mobility problems in their families, awareness and trust were the main factors affecting their willingness to participate. It is recommended that Internet + Nursing services be publicised according to different demographic characteristics and that the awareness and trust of residents be enhanced by strengthening the training of visiting nurses, improving safety and security measures, and developing suitable products for the elderly. These measures can help to increase the awareness and trust of residents concerning these services, and improve the willingness of residents (particularly those in need of them) to participate in Internet + Nursing services. This study only surveyed a selection of Ganzhou city residents; further expansion of the sample size is needed in future studies to provide a reliable basis for the development of Internet + Nursing services in the post-pandemic COVID-19 era.
Data Sharing Statement
All data generated or analyzed during this study are included in this published article.
Ethics Approval and Consent to Participate
This study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee of Gannan Medical University. | v2 |
2021-10-29T06:18:57.707Z | 2021-10-01T00:00:00.000Z | 240071859 | s2orc/train | Antineoplastic prescription among patients with colorectal cancer in eight major cities of China, 2015–2019: an observational retrospective database analysis
Objectives It is unclear what is driving rising colorectal cancer (CRC) treatment costs in China, or whether adjustments in drug prices change use and total cost. This study aims to estimate trends in drug use, prescribing patterns and spending for antineoplastic drug therapies for CRC in major cities of China. Methods Information from 128 811 antineoplastic drug prescriptions in CRC was retrospectively collected from the Hospital Prescription Analysis Cooperative Project. The prescriptions extracted included demographic information of patients, the generic name and the price of antineoplastic drugs. The Mann-Kendall and Cochran-Armitage trend tests were used to estimate the trends of antineoplastic agent usage. Results The number of antineoplastic prescriptions increased from 18 966 in 2015 to 34 219 in 2019. Among the prescriptions collected in this study, the annual cost of antineoplastic drugs increased by 117.2%, and the average prescription cost increased by 20%. Throughout the study period, the most prescribed antineoplastic drugs were capecitabine, oxaliplatin, fluorouracil and irinotecan, representing 49%, 27%, 21% and 9% of visits (per cent of visits, PV), respectively. The PV of bevacizumab and cetuximab increased by 494% and 338% (from 1.8% and 1.3% in 2015 to 10.7% and 5.7% in 2019). In the prescribing patterns of antineoplastic agents, monotherapy gradually decreased, while combination therapy, especially three-drug combinations, increased significantly from 1.35% to 7.31%. Conclusion This study estimated recent trends in antineoplastic drug use and expenditure for Chinese patients with CRC. These results can inform CRC treatment decisions, including health insurance negotiation, precision therapy access, allocation of research funding and evaluation of the financial burden of CRC drug treatment.
INTRODUCTION
Cancer has become a leading cause of death in China, with an increasing burden of cancer incidence and mortality observed over the past half century. 1 The 2018 China Cancer Statistics Report showed that the incidence and mortality of colorectal cancer (CRC) in China ranked third and fifth among all malignant tumours, with 376 000 new cases and 191 000 deaths, respectively. Furthermore, the incidence and mortality of CRC in China have maintained an upward trend. 2 Medical expenditures for CRC diagnosis and treatment in China are substantial and have increased rapidly. 3 What accounts for the increase in CRC drug expense in China is not yet fully understood. We suppose that both patient and drug factors may significantly affect the costs.
Strengths and limitations of this study
► This study used hospital prescription records from 88 hospitals in the Hospital Prescription Analysis Cooperative Project to present the first analysis of the trend of antineoplastic agent usage in patients with CRC in eight major cities in China.
► We used time-series analysis to estimate changes in drug utilisation and expenditure for different drug classes, subclasses and specific drugs over the last 5 years. Identifying current practice might help interpret existing cost-effectiveness findings and guide future cost-effectiveness analyses of drugs.
► This study only assessed the overall use of antineoplastic drugs in patients with CRC and did not distinguish disease stage, individual patient factors or regional factors.
► We looked at all antineoplastic drugs prescribed during the study period; some of these drugs are used to treat other concurrent cancers.
Antineoplastic drug treatment is an important aspect of CRC therapy. At present, the antineoplastic drugs for CRC mainly include chemotherapy and targeted therapy. Overall, chemotherapy drugs cost less than targeted therapy. The mainstream cytotoxic chemotherapy drugs include fluoropyrimidine derivatives, oxaliplatin and irinotecan. 4 5 There were seven fluoropyrimidine derivatives marketed in China during the study period, all of which had CRC indications. However, only fluorouracil (5-FU) and capecitabine are recommended by the guidelines. [6][7][8] Trends in the use of fluoropyrimidine derivatives in CRC are noteworthy in the context of these differing indications and guideline recommendations. CRC chemotherapy regimens usually consist of one to three cytotoxic drugs. Monotherapy is usually used in patients who cannot tolerate combination therapy. Standard CRC combination chemotherapy regimens include folinic acid (LV)/5-FU/oxaliplatin (FOLFOX), LV/5-FU/irinotecan, capecitabine/oxaliplatin, LV/5-FU/oxaliplatin/irinotecan and oxaliplatin/irinotecan (IROX). Since 2004, a variety of antineoplastic drugs for the targeted treatment of CRC have been available overseas. 9 Two years later, cetuximab became available in China as the first CRC-targeted drug approved by the China National Medical Products Administration. With better efficacy and safety, the role of targeted therapy has become increasingly prominent for advanced or metastatic CRC (mCRC). 10 Targeted agents can be used as monotherapy, or in combination with chemotherapy or other targeted agents. The introduction of targeted therapies has also introduced additional testing costs to identify patients who will benefit from them. 11 12 Currently, determination of tumour gene status for V-KI-RAS2 Kirsten rat sarcoma viral oncogene homolog (KRAS)/Neuroblastoma RAS viral oncogene homolog (NRAS) and B-Raf serine-threonine kinase mutations, as well as human epidermal growth factor receptor 2 amplification and microsatellite instability (MSI)/DNA mismatch repair (MMR) status, is recommended for patients with mCRC. Targeted drugs for CRC are mainly composed of monoclonal antibodies and protein kinase inhibitors (PKIs). Currently recommended monoclonal antibodies for CRC in China include cetuximab, bevacizumab and immune checkpoint inhibitors (ICIs). 8 Up to 40% of patients with CRC have RAS mutations. 13 Patients with RAS mutations should not be treated with cetuximab. Bevacizumab has no genetic limitations.
The immune system substantially impacts CRC progression, which plays a crucial role in eliminating tumour cells. MSI-H incidence in Chinese patients with CRC is about 4.5%-15%. 14 Patients with CRC with MSI-H responded well to ICIs treatment regardless of monotherapy or combination therapy, palliative, adjuvant or neoadjuvant therapy. [15][16][17][18][19] ICIs work by blocking checkpoint proteins from binding with their partner proteins. Three ICIs have been approved by the Food and Drug Administration for patients with mCRC with MMR-D or MSI-H. Pembrolizumab and nivolumab work by inhibiting the immune checkpoint component programmed cell death-1 protein (PD-1). Ipilimumab, a fully humanised monoclonal antibody, blocks cytotoxic T-lymphocyte-associated protein 4. 20 However, affected by marketing policies and other factors, some CRC therapeutic drugs (such as panitumumab, ipilimumab, aflibercept, ramucirumab, etc) have not been listed in China. Similarly, some CRC treatment drugs that have been marketed in China (such as fruquintinib, etc) have not yet been marketed in other countries.
Many factors affect the prescribing patterns of antineoplastic agents for CRC, such as the location of the primary tumour, the results of genetic testing, availability of medicines, adverse reactions, insurance coverage and patient socioeconomic status. 5 21-24 Among these factors, economic factors are particularly influential in China. For example, a British economic analysis showed that the incremental cost-effectiveness ratio (ICER) of cetuximab plus FOLFOX relative to chemotherapy alone was ¥2.07 million per quality-adjusted life year (QALY) in some clinical study populations in 2015/2016. 25 Treatment with cetuximab plus FOLFOX-4 resulted in an ICER of ¥0.84 million to ¥1.08 million per QALY according to different cost-effectiveness studies that used data from the TAILOR trial (ClinicalTrials.gov identifier: NCT01228734) in China in 2018. 26 27 Although the ICER of cetuximab in China is lower than in the UK, the imbalance in regional economic levels makes the gap between the willingness to pay (WTP) and the cost even larger. The commonly used WTP threshold per QALY in the UK is ¥0.49 million, and the commonly used WTP in China is about ¥0.18 million. Exchange rates of Chinese Yuan Renminbi to the British pound (9.7:1, 31 December 2015) and the US dollar (6.6:1, 30 June 2018) were used.
China's basic medical insurance is mainly government medical insurance. There are two independently managed projects, namely 'urban employee insurance' and 'resident insurance'. According to China's Sixth National Survey on Health Services by the National Statistics Bureau, the government insurance coverage rate reached 96.8% in 2018. The participation rate of urban and rural residents in basic medical insurance was 96.1% and 97.6%, respectively. 28 29 Within insurance coverage, the reimbursement rate varies from 35% to 90%, depending on the type of insurance, the regional disparity and hospital status. However, the targeted anticancer drugs were not included in the national medical insurance catalogue until bevacizumab in July 2017, followed by cetuximab and regorafenib in the second half of 2018.
The high price of targeted antineoplastic drugs and the lack of national health insurance coverage might severely affect the use of new drugs in patients with CRC. There have been some reports on patients with cancer in China. [30][31][32][33] However, real-world trends in antineoplastic drug use in Chinese patients with CRC have not been fully assessed yet. Besides, the significant price reduction and inclusion of three classic CRC-targeted drugs (cetuximab, bevacizumab and regorafenib) in national health insurance following national negotiations might significantly affect the use of these drugs in patients with CRC. This study used a large clinical prescription database to explore changes in patterns of antineoplastic drug prescriptions and related expenditure for patients with CRC in major cities of China.
Study design and data source
This study was a retrospective study based on national multicentre prescription information collected from the Hospital Prescription Analysis Cooperative Project (HPACP). The HPACP database has been in operation since 1997, 34 and all participating hospitals collected prescriptions as usual during the study period. Specific prescription information was extracted from the HPACP database, in which 10 days of prescription data are randomly extracted from each hospital each quarter. 34 Multiple hospital admissions or visits by the same patient were recorded as independent data. In this study, prescription data were collected as described from 88 hospitals. Online supplemental table 1 provides the descriptive information for these 88 hospitals. All of the included cities are among the most economically developed areas in their respective regions.
Prescription inclusion and information collection
Prescriptions containing at least one antineoplastic agent for patients who had a diagnosis of bowel cancer were included. Inclusion was not restricted by diagnostic criteria or CRC stage. The study period was from January 2015 to December 2019. Prescription information, including prescription code, sex, age, location, diagnosis, hospital status (inpatient or outpatient) and the generic name and price of the antineoplastic drugs, was extracted from the HPACP database. Prescriptions with incomplete information were removed. Prescription coding was used to deidentify patients' information to protect patient identity. Prescription extraction was approved by the ethics committee at each hospital.
Drug classes
In order to understand the overall picture of antineoplastic drug treatment for patients with CRC in China, we included all antineoplastic agents in the initial statistics. According to the WHO anatomical therapeutic chemical (ATC) classification system (https://www.whocc.no/atc_ddd_index/), antineoplastic drugs were divided into six classes. L01A-alkylating agents act by inhibiting the transcription of DNA into RNA and thereby stopping protein synthesis (eg, cyclophosphamide). 35 L01B-antimetabolites are structurally similar to required cellular metabolites, but the cells cannot use them in a productive manner (eg, 5-FU, capecitabine, raltitrexed). 36 L01C-plant alkaloids and other natural products include vinca alkaloids and analogues, podophyllotoxin derivatives, taxanes, topoisomerase 1 inhibitors and other plant alkaloids and natural products (eg, paclitaxel). L01D-cytotoxic antibiotics and related substances include actinomycines, anthracyclines and other cytotoxic antibiotics (eg, doxorubicin). L01E-PKIs are enzyme inhibitors that block the action of protein kinases; according to their drug targets, the drugs in the L01E class are divided into eight subclasses, and regorafenib, fruquintinib and vemurafenib belong to different subclasses of L01E. L01X-other antineoplastic agents include platinum compounds (eg, oxaliplatin), monoclonal antibodies indicated for the treatment of cancer (eg, cetuximab, bevacizumab), and antineoplastic agents that cannot be classified into other classes. The specific drug classification is shown in online supplemental table 2.
In the analysis process, the first step was to analyse the prescriptions for each class of antineoplastic agents according to the ATC classification. In the second step, we further analysed the subclasses contained in the three antineoplastic drug classes with the most prescriptions in the first step (L01B-antimetabolites, L01E-PKIs and L01X-other antineoplastic agents). Finally, we estimated the use of specific drugs in the most used drug subclasses and of those recommended by guidelines. We extracted the price of each antineoplastic drug on each prescription from the HPACP database. The drug cost was calculated by adding the prices of all analysed drugs, in Chinese Yuan. Drug expenditure was reported simply as the costs incurred in each year.
Statistical analyses
Treatment visits and the cost of antineoplastic agents for patients with CRC were analysed. A visit was defined as one prescription containing antineoplastic agents, regardless of inpatient or outpatient status. The per cent of visits (PV) was the proportion of prescriptions for a specific class, subclass or drug among the total number of antineoplastic drug prescriptions. The drug cost was the sum of the costs of all antineoplastic drugs. The average cost per visit was calculated as the total cost of antineoplastic agents divided by the total number of patient visits. Overall trends in each class and in the use of some specific antineoplastic agents were evaluated over the 5-year observation period. Monotherapy and combination therapy were analysed as prescribing patterns.
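As a worked illustration of these definitions (not the authors' code), the sketch below computes PV and the average cost per visit from prescription-level records; the column names and the handful of records are hypothetical.

```python
import pandas as pd

# Hypothetical prescription-level records; real HPACP extracts contain many more fields.
prescriptions = pd.DataFrame({
    "visit_id":  [1, 1, 2, 3, 3, 4],
    "drug":      ["capecitabine", "oxaliplatin", "capecitabine",
                  "fluorouracil", "bevacizumab", "capecitabine"],
    "price_cny": [820.0, 1150.0, 820.0, 95.0, 5200.0, 820.0],
})

total_visits = prescriptions["visit_id"].nunique()

# Per cent of visits (PV): share of visits in which a given drug appears at least once.
pv = prescriptions.groupby("drug")["visit_id"].nunique() / total_visits * 100

# Average cost per visit: total antineoplastic spend divided by the number of visits.
avg_cost_per_visit = prescriptions["price_cny"].sum() / total_visits

print(pv.round(1))
print(f"Average cost per visit: {avg_cost_per_visit:.0f} CNY")
```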
Since the antineoplastic drug treatment of patients with CRC is little affected by seasonal factors, and China's drug policy is usually adjusted annually, trends were analysed at the annual level. The Mann-Kendall test was used to estimate the statistical significance of overall trends in the number and expenditure of total prescriptions. 37 The statistical significance of prescribing trends for antineoplastic drugs and drug classes was analysed using the Cochran-Armitage trend test in R V.3.3.0 (http://www.R-project.org). 38 Statistical significance was defined as a p value less than 0.05.
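The analysis itself was run in R; purely as an illustration, the sketch below reproduces the two trend tests in Python, using the equivalence of the Mann-Kendall test to Kendall's tau against time and the standard closed-form Cochran-Armitage statistic. All yearly values are made up and do not come from the study data.

```python
import numpy as np
from scipy.stats import kendalltau, norm

years = np.arange(2015, 2020)

# Mann-Kendall trend test for an annual series (e.g., total expenditure, in million CNY):
# equivalent to Kendall's tau correlation between the series and time.
expenditure = np.array([58.7, 70.0, 85.0, 105.0, 127.0])   # hypothetical values
tau, p_mk = kendalltau(years, expenditure)

# Cochran-Armitage trend test for an annual proportion (e.g., visits containing a
# given drug out of all antineoplastic visits), using centred year indices as scores.
hits   = np.array([340, 600, 1100, 2300, 3600])            # hypothetical counts
totals = np.array([19000, 21500, 25800, 30100, 34200])     # hypothetical visit totals
scores = years - years.mean()
p_bar  = hits.sum() / totals.sum()
num = np.sum(scores * (hits - totals * p_bar))
den = np.sqrt(p_bar * (1 - p_bar)
              * (np.sum(totals * scores**2) - np.sum(totals * scores)**2 / totals.sum()))
z = num / den
p_ca = 2 * norm.sf(abs(z))

print(f"Mann-Kendall: tau={tau:.2f}, p={p_mk:.3f}; Cochran-Armitage: z={z:.1f}, p={p_ca:.2g}")
```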
Patient and public involvement
The study design was a secondary data analysis and did not directly involve patients or the public.
RESULTS
Descriptive statistics of total prescriptions
A total of 129 098 antineoplastic prescriptions for patients with CRC were extracted. However, 287 prescriptions with incomplete information were excluded, and the remaining 128 811 prescriptions were included in this study. The demographic characteristics of the patients represented by these prescriptions are shown in table 1. Prescriptions for male patients with CRC made up 62.2% of the sample over the 5 years. Patients with bowel cancer aged below 40 or over 79 years were few, together accounting for approximately 10% of prescriptions each year. Among the age groups, the number of prescriptions for patients aged 50-89 years increased significantly (z=2.2045, p=0.027). The proportion of prescriptions for male patients also rose, increasing from 61.20% to 63.45% (z=2.2045, p=0.027); the annual per cent change (APC) was 0.56% (95% CI 0.14 to 1.00).
These trends are consistent with the previously reported higher incidence of CRC in men than in women in recent years. 2
Overall trends in antineoplastic drugs and cost
The overall trend in antineoplastic drug prescriptions was determined from clinic visits and cost data, as indicated in figure 1A. For the antineoplastic drugs included in the present study, the yearly visits for patients with bowel cancer in the sample hospitals increased by 80.4% (from 18 966 in 2015 to 34 219 in 2019, p<0.05). During the same period, the annual expenditure on antineoplastic drugs increased by 117.2% from ¥58.7 million in 2015. The average cost of antineoplastic drugs per visit for patients with CRC was also calculated. Although it ranged from ¥3 095 to ¥3 723 over the 5 years, the difference was nonsignificant (p=0.22).
Trends by drug subclass in the most widely used antineoplastic agent classes
According to the WHO ATC classification, a total of 6 classes, 26 subclasses and 79 antineoplastic drugs were included. Further analysis was conducted according to the WHO ATC classification of drugs for antineoplastic prescriptions. A list of drugs is shown in online supplemental table 2. Online supplemental figure 1A shows the annual trends in each class of antineoplastic drug during the study period. We further analysed the drug classes L01B-antimetabolites, L01X-other antineoplastic agents and L01E-PKIs. The PV for antimetabolites was 82.6% in 2015, then gradually decreased to 79.5% in 2019 (Z=−11.479, p<0.05, APC 0.8%, 95% CI −4.78 to 4.46). Figure 2A shows trends in the PV and expenditures for the three subclasses of antimetabolites. Pyrimidine analogues were the top subclass of antimetabolites; their PV was only slightly less than that for antimetabolites as a whole, and it decreased during the 5-year period (Z=−15.461, p<0.05, APC −1.2%, 95% CI −4.6 to 4.1). The antimetabolites, which cost ¥148.7 million over the 5 years in total, ranked second among all six classes of antineoplastic drugs. The cost of pyrimidine analogues also increased from ¥24.6 million in 2015 over the study period. The PV of other antineoplastic agents (L01X) increased relatively fast, ranging from 35.3% to 48.2% over the 5 years (Z=26.932, p<0.05, APC 3.3%, 95% CI 1.32 to 6.48). The drug expenditure for L01X also increased: for 2015-2019, ¥264.7 million was spent on this drug class, making it the highest in both total (online supplemental figure 1B) and average prescription costs (online supplemental figure 1C). L01X mainly consists of three subclasses in this study, as shown in online supplemental table 2. The trends in use and cost of this class of drugs are shown in figure 2B. Among these subclasses, platinum compounds were the most widely used. The PV for platinum compounds was 26.2% in 2015 and changed only gradually over the study period.
PKIs were used in 0.1% of visits in 2015, which increased to 2.5% in 2019 (Z=21.164, p<0.05, APC 0.6%, 95% CI −0.02 to 1.78). The drug expenditure on PKIs was ¥8.6 million during the study period (figure 2C). This study collected nine subclasses of PKIs. Among them, multitarget PKIs and vascular endothelial growth factor receptor tyrosine kinase inhibitors were the most commonly used; their PV increased more than 300-fold (Z=22.391, p<0.05, APC 0.4%, 95% CI 0.0017 to 1.6404) and more than 50-fold (Z=5.152, p<0.05, APC 0.1%, 95% CI 0.0045 to 0.1371), respectively. The cost of multitarget PKIs was ¥24 267 in 2015 and increased to ¥5.0 million in 2019. Other subclasses of PKIs were mainly used for other cancers in patients with multiple primary tumours.
Because patients with CRC may receive multiple drug combinations of antitumor therapy, the sum of PV of different class drugs may exceed 100%, which may also be the case for drug subclass analysis and specific drug analysis.
Bevacizumab and cetuximab were the most prescribed monoclonal antibodies, as shown in figure 3C. In 2015, bevacizumab and cetuximab were used in 1.8% and 1.3% of visits, accounting for 11.3% and 12.8% of antineoplastic drug costs, respectively. In 2019, the PV for these two drugs increased to 10.7% and 5.7%, accounting for 20.3% and 13.7% of the total antineoplastic drug cost, respectively. ICIs were not used in patients with bowel cancer until 2019 in our study. In 2019, three PD-1 inhibitors were used in 0.1% of visits. The total cost for PD-1 inhibitors was ¥0.4 million in 2019, accounting for 0.1% of the annual antineoplastic drug expenditure.
Compared with 2015, the cost of PKIs in 2019 had increased 33-fold (figure 3D). The use of PKIs also increased rapidly (Z=21.164, p<0.05). Regorafenib was the most widely used PKI, accounting for nearly 61.2% of all PKI prescriptions in 2019.
DISCUSSION
In this study, real-world trends in the prescription patterns of antineoplastic drugs for patients with CRC in China are described herein for the first time. During the study period, the number of antineoplastic drug prescriptions showed an increasing trend, which may be related to the rising number of patients with CRC and the growing number of admissions/clinical visits per patient. 1 3 At the same time, the total amount of antitumor drug prescriptions increased significantly.
The total expenditure on antineoplastic medications in 2019 was 2.17 times that of 2015. The annual average prescription cost changed only slightly: compared with 2015, the cost per prescription increased by only 20%, and the trend was not statistically significant. In terms of drug use, the proportion of cytotoxic drugs was in line with the guidelines. Treatment regimens were mainly based on fluorouracil, its derivatives or substitutes, combined with oxaliplatin or irinotecan, or on oxaliplatin plus irinotecan (IROX). The proportion of prescriptions containing fluorouracils was higher than that of oxaliplatin, which in turn was higher than that of irinotecan. Cytotoxic antitumor drugs without a CRC indication accounted for less than 1% of prescriptions and were mainly used to treat other diseases. Compared with high-income countries, where more than 70% of patients received biologics at some line of treatment, 39 40 our study suggests that the use of targeted drugs was significantly lower.
The increase in spending on antineoplastic drugs may be related to several factors, such as patient or drug factors, and these can affect each other. Consistent with previous epidemiological results, 1 the proportion of prescriptions for male patients in our study continued to rise slightly. The higher male proportion may lead to changes in mean weight or body surface area (BSA), which will probably result in higher costs for weight- or BSA-based therapies.
Our study shows that the proportion of targeted drugs in antitumor drug prescriptions increases (PV of monoclonal antibodies increased from 3.2% in 2015 to 16.6% in 2019). The drug price of target medications is significantly higher than that of cytotoxic drugs. The use of costly target agents increases the average drug expense per prescription. But the impact of increased costs from targeted drugs is not constant. As drug prices continue to fall, the average annual prescription cost of cetuximab decreased gradually from ¥30 768 in 2015 to ¥21 689 in 2018 and significantly declined to ¥9 020 in 2019. On the other hand, the cost per visit of PKIs dropped in 2017 after national negotiations in 2016, but it increased again in 2018 and 2019 as the use of non-negotiated drugs such as regorafenib, anlotinib and vemurafenib increased. Therefore, to accurately evaluate the overall impact of high-priced drugs on drug costs, a comprehensive calculation should be made based on total drug costs, the number of prescriptions and the amount of single drug use.
In addition, in our study, the proportion of single-drug regimens decreased significantly, while three-drug regimens increased significantly, suggesting that changes in medication patterns may have an impact on drug expenditure. In particular, according to the guidelines, 6-8 the three-drug regimens are mostly combinations of chemotherapy and targeted drugs and, thus, may significantly increase drug costs. However, the specific composition remains to be confirmed by further studies.
There are several limitations of this study. First, detailed clinical information, such as the disease stage of CRC, surgical history, pathological results and genetic test results, was lacking. Our analysis was based on prescription data only; therefore, neither the appropriateness of the antineoplastic drug treatment nor the outcome of anticancer therapy could be evaluated. Because our prescription data do not include individual patient identifiers, drug use could not be analysed or evaluated on a per-patient basis; this topic requires further research. Since the included hospitals are located in large cities and prescriptions were obtained through random sampling, prescriptions in different regions may fluctuate from year to year, and there might be sampling bias. Our sample hospitals are from large cities in some Chinese provinces, so the data may only represent the prescriptions of some Chinese patients with CRC and may be less representative of economically less developed and rural areas. Moreover, the overall disease burden in patients with bowel cancer, not just the burden of medication, should be further explored in the future.
CONCLUSIONS
In our research, we found that prescription practices for Chinese patients with CRC underwent major changes during the 5-year study period. The use of targeted antineoplastic drugs increased significantly after drug prices were reduced. Therefore, as the high cost of cancer drug treatment puts pressure on the healthcare system and patients, pharmacoeconomic research is needed to evaluate the cost-effectiveness of CRC antineoplastic drugs.
Contributors DY, LY and HD conceptualised and designed the study. WH screened and completed data extractions. YH and LY contributed to analysis of the data. DY, YY and HX conducted the final analysis and drafted the initial manuscript. Guarantor: HD. All authors contributed to the critical revision of the paper and approved the final manuscript.
Funding HX was awarded one grant from the National Natural Science Foundation of China (grant number: 81703479). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests None declared.
Patient consent for publication Not applicable.
Ethics approval All procedures performed in studies involving human participants were in accordance with the ethical standards of medical ethics committee of the Second Affiliated Hospital of Zhejiang University School of Medicine.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/. | v2 |
2020-02-05T16:29:36.354Z | 2020-02-05T00:00:00.000Z | 211028519 | s2orc/train | RAMAN AND ATR-FTIR SPECTROSCOPY TOWARDS CLASSIFICATION OF WET BLUE BOVINE LEATHER USING RATIOMETRIC AND CHEMOMETRIC ANALYSIS
There is a substantial loss of value in bovine leather every year due to a leather quality defect known as “looseness”. Data show that 7% of domestic hide production is affected to some degree, with a loss of $35 m in export returns. This investigation is devoted to gaining a better understanding of tight and loose wet blue leather based on vibrational spectroscopy observations of its structural variations caused by physical and chemical changes that also affect the tensile and tear strength. Several regions from the wet blue leather were selected for analysis. Samples of wet blue bovine leather were collected and studied in the sliced form using Raman spectroscopy (using 532 nm excitation laser) and Attenuated Total Reflectance - Fourier Transform InfraRed (ATR-FTIR) spectroscopy. The purpose of this study was to use ATR-FTIR and Raman spectra to classify distal axilla (DA) and official sampling position (OSP) leather samples and then employ univariate or multivariate analysis or both. For univariate analysis, the 1448 cm− 1 (CH2 deformation) band and the 1669 cm− 1 (Amide I) band were used for evaluating the lipid-to-protein ratio from OSP and DA Raman and IR spectra as indicators of leather quality. Curve-fitting by the sums-of-Gaussians method was used to calculate the peak area ratios of 1448 and 1669 cm− 1 band. The ratio values obtained for DA and OSP are 0.57 ± 0.099, 0.73 ± 0.063 for Raman and 0.40 ± 0.06 and 0.50 ± 0.09 for ATR-FTIR. The results provide significant insight into how these regions can be classified. Further, to identify the spectral changes in the secondary structures of collagen, the Amide I region (1600–1700 cm− 1) was investigated and curve-fitted-area ratios were calculated. The 1648:1681 cm− 1 (non-reducing: reducing collagen types) band area ratios were used for Raman and 1632:1650 cm− 1 (triple helix: α-like helix collagen) for IR. The ratios show a significant difference between the two classes. To support this qualitative analysis, logistic regression was performed on the univariate data to classify the samples quantitatively into one of the two groups. Accuracy for Raman data was 90% and for ATR-FTIR data 100%. Both Raman and ATR-FTIR complemented each other very well in differentiating the two groups. As a comparison, and to reconfirm the classification, multivariate analysis was performed using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The results obtained indicate good classification between the two leather groups based on protein and lipid content. Principal component score 2 (PC2) distinguishes OSP and DA by symmetrically grouping samples at positive and negative extremes. The study demonstrates an excellent model for wider research on vibrational spectroscopy for early and rapid diagnosis of leather quality.
Introduction
Every year more than a billion animals are slaughtered as part of the animal production industry for meat. In turn, this generates returns of over a billion dollars for the global leather industry, meat processing's most important coproduct sector [1,2]. The production of leather is split into three phases: animal slaughtering, tanning and manufacturing of the finished product for the commercial market. Tanning is one of the most important stages in leather production. It involves processing the raw skin or hide to retain its natural properties by stabilising the molecular structure and to make it more durable [3]. Previously, natural chemicals like plant tannins, alum and other minerals were used in the tanning process, which had some advantages over current methods using synthetic chemicals, although the synthetic methods take only a fraction of the processing time required for the earlier methods [4]. Wet Blue refers to part-processed chrome-tanned leather in the wet state. During this stage the skin or hide is protected from decomposition through chemical crosslinking that stabilises the collagen network [5]. The blue colour comes from the chromium tanning agent (chromium (III) oxide), which stays in the leather after tanning.
Looseness is a fault found in leather that affects the quality of the leather. It manifests itself as corrugations on the outer surface of finished leather when bent inward. Whilst processing is known to exaggerate the fault, the root cause of the less densely packed fibres in affected regions is poorly understood, although potential causes may include environment, nutrition, breed and age. Looseness is a major concern to the leather industry in terms of its effect on structure of the leather and appearance of the final leather product [6]. At present looseness can only be accurately identified once the leather is dried, thus tanners can only address it by either discarding the leather or remedial treatment, costing both time and money [7][8][9]. There is an understanding that looseness is prevalent in specific regions including the shoulder and flanks whereas other regions such as the backbone and official sampling position (OSP) are typically unaffected. This study investigates wet blue leather from two regionsdistal axilla (DA), i.e., from the flank side and official sampling position (OSP), i.e., from near the central lower region. The aim is to obtain a better understanding of how tight and loose wet blue leather might be differentiated through measurement, since the OSP and distal axilla regions typically give tighter or looser leathers, respectively. The intention of this study is to develop a model using nondestructive techniques that can identify the looseness fault at an early stage of the leather production. Different strategies or markets for the affected hides can then be identified, to not only save time but also to minimise the damaging costs incurred by identifying looseness at a later stage of processing.
Vibrational spectroscopy techniques such as Raman spectroscopy and Attenuated Total Reflectance -Fourier Transform InfraRed (ATR-FTIR), supported with ratiometric band intensity analysis and chemometric methods, are used here to identify structural variations which effect the physical properties of leather from the two identified regions -OSP and DA.
Raman spectroscopy measures the inelastic scattering of photons (with visible wavelengths) as they interact with the vibrational motions of molecules, providing useful information about molecular structure via both band position and intensity. Raman can be used for non-invasive probing of chemical and biological samples [10,11]. Infrared spectroscopy (IR) is based on the absorbance of infrared photons by molecules due to the vibrational motion of the molecules present in the matrix. Both non-destructive techniques are fast, require minimal sample preparation, and have high specificity and sensitivity [12]. Raman spectroscopy has the advantages of a very weak water signal, so there is minimal interference from water in biological samples [13], of not causing any damage to the sample [14], and of allowing in-situ detection using optical fibres or microscopes. Raman is particularly sensitive to structures that are easily polarised, such as aromatic rings and sulphur-containing groups. Water has an absorption that can mask the characteristic amide I band at 1640 cm−1 and a very intense, broad absorption around 3300 cm−1 that can obscure absorption by other O-H and N-H vibrations. If water interference can be minimised, then the advantage of IR is its sensitivity to vibrations associated with the amide bonds in proteins. The secondary and tertiary structures of proteins influence the shape of the amide bands, and IR spectroscopy provides useful information about protein structure. We have used an ATR-FTIR spectrometer that limits water interference by using the very short effective path length that results from the attenuated total reflection process.
For most studies of spectral diagnosis of biological samples, the mid-IR (MIR) spectrum within 4000-600 cm − 1 range seems to be more effective than the near-IR (NIR) range (14,000-4000 cm − 1 ). Bands within the range of 4000-1500 cm − 1 are characterized by various stretching modes of functional groups of molecules. Bands below 1500 cm − 1 are dominated by deformation, bending and ring vibrations of the molecular "backbone", and are generally referred to as the fingerprint region of the spectrum. As the vibrational activity between Raman and infrared (IR) spectroscopies is different, some modes in both are active, but others are only Raman or IR active. MIR and Raman spectra both exhibit amide bands that are relevant to the structure of collagen. Thus, IR and Raman spectroscopies provide similar and complementary information of molecular vibrations [15][16][17].
Sample preparation is relatively simple compared with other analytical techniques, such as high-performance liquid chromatography (HPLC) and colorimetric methods [18][19][20][21][22]. Finding these variations in the initial leather processing steps reduces the costs of down-stream processing. Therefore, these label-free and non-destructive techniques are highly attractive tools for understanding wet blue leather [13,16,17].
Several bone studies have utilised Raman and IR spectroscopy to identify defects [5,23,24], the quality of bone affected by bacteria [25][26][27], or changes in collagen due to cross-links [20-22, 28, 29], but no work has so far been performed on wet blue leather defects. To the best of our knowledge, this work is the first attempt to identify the variation between loose and tight leather regions using these two techniques.
Sample preparation
All bovine wet blue samples were prepared by New Zealand Leather and Shoe Research Association (LASRA®) using the conventional methods [19]. Samples were collected from the official sampling position (OSP) and distal axilla (DA) of the wet-blue and stored at below 4°C until analysis.
Wet blue samples were sliced using a Leica CM1850UV Cryostat to 40 μm thickness. Six replicates of each sample were cut and placed on a microscope slide for Raman and ATR-FTIR analysis as described below.
Data acquisition and spectral processing
Six leather samples labelled as 'DA' and displaying signs of looseness and five wet blue bovine leather samples labelled as 'OSP' from the tighter regions of the hide were prepared for analysis using the method described above. These samples were then analysed using a home-built Raman microscope built around a Teledyne-Princeton Instruments (USA) FERGIE spectrometer, using a 532 nm excitation laser (~10 mW laser power) focused onto the sample with a spot size diameter of ~1-2 μm through a 40× magnification, 0.65 NA objective. For both Raman and IR measurements, spectra were collected from 5 different spots on each sample. Raman spectra were acquired with an exposure time of 5 s per frame and 10 frames (each frame was stored separately). Therefore, 50 spectra were obtained from each DA and OSP sample.
A Thermo Scientific™ iD5 Nicolet™ iS™5 Attenuated Total Reflectance -Fourier Transform InfraRed (ATR-FTIR) spectrometer was used to collect ATR-FTIR spectra from the same wet blue samples. Spectra were recorded by attenuated total reflection (ATR) on a diamond crystal and 16 scans were collected from 5 different spots for each sample. Figure 1 shows the flowchart for spectral analysis. For analysis by principal components analysis, each spectrum was preprocessed with an algorithm written using the SciKit Learn package [30] in Python 3.7. Baseline correction, background subtraction and average spectra were obtained using the Python algorithm.
For ratiometric analysis, Origin 2018b (Origin Lab Corporation, Northampton, MA, USA) was used. Preprocessing, consisting of a 7-point, zero-order derivative Savitzky-Golay smoothing function, was applied to smooth spectral noise. Curve fitting by sums-of-Gaussians was used to determine band areas, which were subsequently used to calculate area ratios of the peaks of interest.
Results and discussion
Raman and ATR-FTIR spectra are shown in Figs. 2 and 3 respectively. Bands that are known to be associated with functional groups and structures in protein are labelled in the Raman spectra. Bands with positions within instrumental resolution in the Raman and FTIR spectra are assumed to have the same chemical and structural origin [31].
Peaks of interest
Band assignments and their interpretation are based on Raman and IR studies of collagen tissues. 16,17,32. By visual inspection, it was found that there are variations in the collagen region (1002-1680 cm − 1 ). A careful examination of the spectra showed shifting of a few peaks due to the complexity of biochemical components in leather samples. Sharp peaks were observed in OSP leather Raman spectra whereas significant overlapping of bands was found in DA in the 1550-1700 cm − 1 region.
The peak positions of the Raman and IR bands observed for wet blue leather and their assignments are shown in Table 1.
There is a significant shift observed in peak position, intensity and number of signature peaks between DA and OSP samples in Raman and IR spectra. Both DA4 and DA5 show a broad band different from the other DA replicates, which indicates that some structural changes in collagen may occur due to alterations in secondary structures -α helix, β sheet, random coils or immature cross links.
The peak identified at 1669 cm−1 is associated with random or unordered protein structure (e.g., random coils). The amide I vibration is dominated by the peptide carbonyl stretching vibration with some contribution from C-N stretching and N-H in-plane bending [32]. The bands near 875 and 920 cm−1 can be assigned to the C-C stretching vibrations of amino acids characteristic of collagen: hydroxyproline and proline. The band near 1002 cm−1 is assigned to the phenyl ring breathing mode of the amino acid phenylalanine [15,28,33]. The band at 2340 cm−1, observed in a few DA and OSP samples, arises from the asymmetric stretch of CO2 and is the result of residual background in the spectra.
Special emphasis was placed on the spectral features at 1448 cm−1, which is assigned to the CH2 bend of phospholipids [34], and at 1650-1669 cm−1, which corresponds to the amide I region comprising contributions from both proteins and lipids [35,36]. Selecting these two bands serves as an excellent indicator of variation, because any changes due to lipid variation are factored out using the 1448 cm−1 lipid band [37,38]. It was found that 1448 cm−1 is the more intense Raman band compared with 1669 cm−1, whereas 1632 cm−1 is the most intense IR band. Therefore, collagen analysis was performed using the peak area ratio of the CH2 wag band at 1448 cm−1 and the amide I band at 1669 cm−1 for the Raman analysis.
Before analysing the Raman and IR marker bands for ratiometric analysis, we decided to validate the accuracy of the method and spectral positions identified for DA and OSP. Raman analysis was performed on the other regions of the wet blue to classify loose and tight features based on specific biomarker Raman bands.
Our hypothesis is that the OSP region tends to give tight leather, as it comes from the central backbone part of the hide, whereas the DA region is more prone to looseness, as it comes from the flanks or sides of the hide. To confirm that the features we have identified are characteristic of looseness and not simply associated with the location of the sample, we selected a few regions from OSP showing wrinkles that exhibit characteristic looseness, and a few regions from DA which are not wrinkled or stretched, to investigate for tightness.
We also selected regions from other parts of the wet blue, such as the neck and shoulder. The selection of samples was based on visual examination of the wet blue. Figure 4 shows the Raman spectra of all these regions, labelled as OSP (tight), the characteristic feature of OSP; OSP (loose), for loose regions found within OSP; DA (loose), the characteristic feature of DA; and DA (tight), for tight regions observed within DA. Table 2 summarises some differentiating Raman bands from OSP, DA and other regions of wet blue leather. A few characteristic bands categorise the wet blue into two sets, loose and tight leather, rather than OSP and DA. A signal of particular interest in classifying looseness is the protein backbone conformation: the amide I band detected at 1677 cm−1 for tight regions in OSP and DA corresponds to anti-parallel β-sheets, which give a tight structure, whereas the amide I band in loose regions reflects less ordered secondary structures. The loose and tight features are nevertheless most dominant in the DA and OSP regions, respectively, and are clearly visible in the spectra (Figs. 2, 3 and 4), despite our specifically examining loose regions within the tight OSP section and tight regions within the loose DA section; there is therefore potential for further classification based on location.
Two spectral analysis techniques were employed to find the best classification fit for various biochemical components affecting the wet blue leather quality, such as proteins, lipids or nucleic acids.
The first was a univariate statistical method based on band intensities, area ratios and intensity-ratio calculations for the interpretation of spectra [42,43]. This ratiometric analysis was carried out for qualitative classification and was further supported by a logistic regression algorithm to enable straightforward quantitative classification of DA and OSP.
The second was a multivariate statistical method (based on Principal Component Analysis) that considers the whole spectrum but performs classification with a small number of variables (data-set dimension reduction) that capture the maximum variance in the data. Multivariate analysis makes no a priori assumptions about selecting the best variables for classification.
Univariate analysis
Ratiometric analysis, a simple approach, was employed to identify the spectral variations by Raman and IR spectroscopy and generate a systematic and comparative trend of structural features of biochemical components in OSP and DA. Ratiometric analysis can overcome variations due to sample thickness and morphology, background scattering fluctuations and other instrumental effects [17].
Intensity-based ratiometric analysis may result in inaccurate interpretation due to baseline estimation issues [35,44]. Therefore, the averages and standard deviations of the peak area ratios for the CH2 deformation (1448 cm−1) and Amide I (1669 cm−1) bands from Figs. 2 and 3 were calculated [16,17]. For the Raman spectra, ratio values of 0.57 ± 0.099 and 0.73 ± 0.063 were obtained for DA and OSP, respectively. For IR, the values were 0.40 ± 0.057 and 0.49 ± 0.13 for the DA and OSP samples (Additional file 1). Both Raman and IR show significant variation between the two categories of loose and tight samples. Although ATR-FTIR and Raman spectroscopy arise from the same physical phenomenon of molecular vibration, the processes of Raman scattering and infrared absorption are fundamentally different, as observed in Table 1. Additional bands, such as the 1548 cm−1 band observed in IR but absent in Raman, provide an understanding of cross-links in collagen [44,45]. Hence, the combination of Raman and FT-IR gives synergistic information on complex samples in a non-destructive manner.
The variation between the two categories could be the result of changes in the collagen network, which directs further investigation towards the amide I band of collagen, which consists of several secondary structures [24]. Curve fitting by sums-of-Gaussians method was used to find the component area under a broad band. Accurate peak areas, and peak centres then can be deduced. Univariate analysis was again performed using the collagen components in the section below.
Alterations in collagen network
The amide regions of proteins are overlapped by many underlying bands [27]. In vibrational spectroscopic methods, such as FTIR and Raman, resolution of underlying constituent peaks and calculation of their contributions offer a wealth of information, as these peaks are very sensitive to secondary structure [15,46]. Therefore, curve-fitting was carried out on both Raman and IR data to investigate the spectral changes in the secondary structures of collagen. A typical result of curve fitting four Gaussian components to the Amide I band in the Raman spectrum is shown in Fig. 5. These secondary bands have been used to investigate the lipid to protein ratio as a measure of collagen quality.
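A sketch of this sums-of-Gaussians decomposition is given below, using scipy.optimize.curve_fit on a synthetic Amide I envelope; the band centres follow the assignments discussed here, but the intensities, widths and starting guesses are illustrative only and do not reproduce the study's fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, centre, fwhm):
    # Gaussian parameterised directly by its integrated area
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area * np.exp(-(x - centre) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def amide_i(x, *params):
    # params are flattened triplets: (area, centre, fwhm) for each component band
    y = np.zeros_like(x, dtype=float)
    for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
        y += gaussian(x, a, c, w)
    return y

x = np.linspace(1600, 1700, 400)
rng = np.random.default_rng(0)
# Synthetic, noisy Amide I envelope built from four components
y = amide_i(x, 40, 1610, 18, 120, 1648, 22, 90, 1669, 20, 50, 1681, 18) + rng.normal(0, 0.3, x.size)

p0 = [30, 1610, 15, 100, 1648, 20, 80, 1669, 20, 40, 1681, 15]   # initial guesses
popt, _ = curve_fit(amide_i, x, y, p0=p0)

areas = dict(zip(("1610", "1648", "1669", "1681"), popt[0::3]))
print(areas, "1648:1681 area ratio =", areas["1648"] / areas["1681"])
```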
There is a well-established frequency-assignment correlation in the literature (Table 3) for the underlying bands in the amide I group [34,43]. The Amide I band in the OSP spectrum is strongly asymmetric and its curve-fitting (Fig. 5) yields components in the 1600-1700 cm−1 region which can be mainly assigned to collagen (1648 and 1669 cm−1), elastin (1681 cm−1), and amino acids (1610 and 1698 cm−1).
The triple helical structure of the collagen molecule is unique, and there is no single peak wavenumber for these secondary structures (e.g., α-helix or β-sheet). Therefore, changes in collagen's helical structure were investigated empirically by observing changes in the curve-fitted area ratios identified by the curve-fitting analysis. Collagen crosslinking is measured as changes in the amide I envelope [39,46]. It was observed that the Raman band at ~1669 cm−1 was present in the fractions containing the trivalent collagen cross-links, whereas in IR a band was observed at 1632 cm−1 but no band was evident at ~1669 cm−1 [17,27]. From literature studies [47,48], biochemical analysis of collagen peptides showed that pyridinoline (Pyr) crosslinks result in a band at 1666 cm−1. Therefore, the peak at 1669 cm−1 reflects pyridinoline cross-linked collagen peptides [29]. These observations from the Raman and IR spectra provide additional information on changes in the amide I band. Most of the underlying bands of amide I arise from the structure of the collagen triple helix as well as the telopeptides (1632, 1645, 1655, 1672, and 1682 cm−1). The intermolecular crosslinking of collagen is a key element in determining tensile strength and elasticity [49,50].
For amide I, the Raman band area ratio of 1648/1681 cm − 1 (non-reducible and reducible collagen types) was used for analyzing variations between loose and tight leathers, whereas, for IR, the 1632/1650 cm − 1 (triple helix and α-like helix collagen types) ratio was used, as shown in Fig. 6 (Additional file 1).
For quality assessment, a Student's t-test was carried out between the two ratio datasets. For Raman, the t-test gave p = 0.0008, and for IR, p = 6.8 × 10−5. There are therefore significant differences (p < 0.05) between the DA and OSP ratios, and the data are suitable for fitting a regression model.
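For reference, the group comparison can be reproduced with SciPy in a few lines; the ratio values below are placeholders rather than the study data.

```python
from scipy.stats import ttest_ind

da_ratios  = [0.47, 0.52, 0.55, 0.49, 0.58, 0.51]    # hypothetical DA band-area ratios
osp_ratios = [0.66, 0.71, 0.69, 0.74, 0.63]          # hypothetical OSP band-area ratios
t_stat, p_value = ttest_ind(da_ratios, osp_ratios)   # two-sample Student's t-test
print(t_stat, p_value)
```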
Logistic regression
Quantitative classification of DA and OSP involves a continuous independent variable (peak area ratio) and a binary dependent variable (DA vs OSP), therefore a logistic regression (LR) algorithm [37] was devised to discriminate the samples using the SciKit Learn package [30] in Python 3.7.
A confusion matrix was generated from the output that describes the performance of classification. It summarises correct and incorrect spectra classification. It is useful for two-class classification and in measuring recall, precision and accuracy [18,51,52]. The confusion matrix for Raman and IR data obtained is presented in Tables 4 and 5.
The first entry in the confusion matrix is the number of correctly identified DA samples, i.e., 5/5, which is a perfect classification, whereas for OSP it is 4/5, which is close to a perfect fit. Accuracy, precision and recall are of importance, where Accuracy = (TP + TN)/(TP + TN + FP + FN), Precision = TP/(TP + FP), and Recall = TP/(TP + FN), with TP = true positive, TN = true negative, FP = false positive, and FN = false negative, and with DA arbitrarily set as True and OSP set as False.
The accuracy for the Raman data is 0.9 (90%), the precision is 1.0 (100%), and the recall score is 0.8 (80%). The IR data presented in Table 4 show a perfect classification of 6/6 for all six DA and OSP samples, i.e., all were correctly classified, and the accuracy, precision and recall scores are all 1.0 (100%).
From the results obtained, it is evident that the Raman and IR peak-area ratios are good predictors for differentiating the leather type, and the two techniques complement each other very well. The significant difference in the recall score between the Raman and IR data provides the motivation for a multivariate analysis of the Raman data. Although univariate analysis is quite useful, a useful prediction may still be obtained from Raman spectra by using multivariate analysis to reveal the differences, especially when there is a large dataset.
Multivariate analysis
Multivariate analysis can be used to quickly characterise the "types" or "classes" of spectra or samples present in a large data set. An unsupervised method, principal component analysis, is used, which can determine the existence of classes in the data set without any assumption about the number of classes. The classes are determined by transforming the data set, expressed in the original spectral variables, to a new description using variables (principal components) that maximise the separation between samples (the principal components are the eigenvectors of the variance-covariance matrix). A scores plot shows the samples plotted using the principal components; if distinct clusters of samples are observed in the scores plot, then classes exist in the data set. All spectral variables in the original data set have been used in the analysis presented here [27].
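A minimal sketch of such an unsupervised PCA of a spectral data set is shown below; the random matrix stands in for the baseline-corrected, normalised spectra.

```python
# Sketch: PCA of a set of Raman spectra to look for DA/OSP clustering.
# `spectra` is a placeholder for the measured data (rows = samples, columns = wavenumbers).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(size=(12, 900))          # 12 spectra x 900 spectral variables

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)           # sample coordinates on PC1-PC3 (scores plot)
loadings = pca.components_                    # spectral contribution of each PC (loading plot)

print("explained variance (%):", 100 * pca.explained_variance_ratio_)
print("scores shape:", scores.shape, "loadings shape:", loadings.shape)
```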
There is a supervised method, linear discriminant analysis (LDA), that assumes the existence of classes and then constructs a function (the discriminant) that gives the best separation between the classes. LDA works in a similar way to PCA, but it creates a linear function (the discriminator) that maximises the differences among the classes or groups [44]. It shows how well the classes are separated, as well as where the classification is robust and where it is misinterpreted. To obtain the best classification performance in a robust model, combinations of PCA and LDA were attempted. A potential issue with LDA is that it will always sort samples into classes, so it is difficult to determine whether the model contains errors; however, performing PCA prior to LDA can independently confirm the existence of classes in the data set. The principal components from the PCA can also be used to construct the discriminant function in LDA (PCA-LDA). Figure 7 shows the loading plots of the first three principal components. PC1 explains 59.1% of the data variance, while PC2 and PC3 explain 16.2% and 11.7%, respectively.
The loading plots shown in Fig. 7 indicate which spectral bands contribute most to the variance described by each principal component. The OSP average spectrum is used as a reference for comparison of the loadings, which gives an understanding of the origin of the differences between the samples corresponding to the spectral variations. The strong contribution in PC1 and PC2 is from the C=O stretch around 1669 cm⁻¹, which arises from the amide I band and is mainly proteins and lipids. PC2 has spectral contributions from the amide III band around 1243 cm⁻¹, which is purely collagen, and from the CH₂ wag, a broad noisy band around 1340 cm⁻¹, which indicates collagen and lipids. PC3 shows another significant contribution around 1100 cm⁻¹ that is broadly from C-O-C modes, which is mainly protein [33,50]. The loadings therefore show that the DA and OSP samples are differentiated by their protein and lipid content. These observations are consistent with the identification of the tight and loose marker bands discussed in Fig. 4, and these bands appear to be responsible for the classification of the samples using Raman, with a clear difference between the two leather types. Figure 8(a-c) shows the two-dimensional (2-D) scores plots, with 95% confidence ellipses, of combinations of two principal components, with the aim of finding the separation between DA and OSP. The scree plot in Fig. 8d shows the proportion of the variance that is accounted for by the principal components.
DA and OSP were not separable as clusters within the individual principal components, but the OSP samples show a significant separation along PC3 and the DA samples lie on the positive side of PC2. No perfect distinction is found between the replicates of OSP and DA in the other dimensions, and a few OSP and DA points in Fig. 8b overlap with each other. The PCA scores plots reveal the intra- and inter-group variation between loose and tight wet blue samples [27,46]. After comparing the sample replicates at the individual level, principal component analysis was performed on the average spectra of the loose and tight replicates. An interesting observation, shown in Fig. 9, is that OSP and DA are grouped symmetrically at the positive and negative extremes of PC2, which accounts for 7.8% of the variance and discriminates the two groups. This reduced dataset, formed by averaging the six samples of OSP and DA, makes the differentiation more identifiable and helps to visualise it clearly.
By displaying the data along the directions of maximal variance, the PCA analysis demonstrates that Raman spectra of wet blue can be separated into classes (i.e. loose and tight). However, the principal components might not give the maximum separation between the classes. Linear discriminant analysis was therefore used to construct a function (the discriminant function) that maximises the class separability. LDA assumes that the data are Gaussian distributed, that every sample belongs to exactly one group (the samples are mutually exclusive) and that the variances are the same for both groups. Either the original variables or the principal components can be used to construct the discriminant function; the principal components were used here as they have the advantage of being independent. If two principal components are used (so the scores plot is planar), the LDA process finds the line in the scores-plot plane that maximises separability.
When LDA is done on the PC scores, the mean centre of each grouping is calculated, and each spectrum is predicted to belong to one of the groups based on its distance from the centre of the group. The accuracy of the prediction is an indication of how well the groups are separated [15,23].
A leave-one-out cross-validation method [53] was used to validate the LDA classifier: each sample in turn is left out of the calibration model and then predicted from the model built on the remaining samples. The results are plotted in Fig. 10, which shows the observed group against the predicted group, along with the cross-validation summary (Table 6). One of the six DA samples was falsely identified as OSP and two of the five OSP samples were falsely identified as DA. The error rate for cross-validation of the training data is 12.33%. Wilks' lambda test was conducted on the discriminant variable and showed that the discriminant function is highly significant (p < 0.05), in agreement with the classification summary.
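A minimal sketch of a PCA-LDA classifier evaluated with leave-one-out cross-validation is shown below; the spectra and labels are random placeholders for the measured data.

```python
# Sketch: PCA-LDA classification of Raman spectra with leave-one-out cross-validation.
# `spectra` and `labels` are placeholders for the measured data set (rows = spectra).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
spectra = rng.normal(size=(11, 900))               # 11 spectra x 900 spectral variables
labels = np.array([1] * 6 + [0] * 5)               # 1 = DA, 0 = OSP (illustrative)

model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
predicted = cross_val_predict(model, spectra, labels, cv=LeaveOneOut())

error_rate = np.mean(predicted != labels)
print("LOO-CV error rate:", error_rate)
```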
The cross-validation summary table shows that OSP has a classification accuracy of 60% and DA of 83.33%, which indicates that the two groups are mutually exclusive, as expected.
Conclusion
In summary, the work presented here used Raman and Infrared spectroscopy to investigate the variations in loose and tight wet blue leather. To the best of our knowledge, this is the first study done in depth using ratiometric and chemometric analysis to identify and quantify the difference between two wet blue samples of OSP and DA. Vibrational spectroscopy with advanced spectral analysis can quantify the biomolecules which impact the quality, strength and sustainability of leather.
Classification from the peak area ratios was done using logistic regression that gives 100% accuracy for IR data and 90% accuracy for Raman data. Multivariate analysis has supported the Raman results for OSP and DA in describing the difference between the groups providing a clear representation of underlying biological differences. This study is a proof of principle to employ vibrational spectroscopy for quality assessment of leather.
Identification of issues at the raw skin stage and differentiating the changes occurring at each stage of leather processing will be the next area of further work so that only high-quality leather can be obtained with no defects. The analysis of Raman spectra in this work classifies leather samples based mostly on their chemical composition as this factor has the strong influence on the shape of the Raman spectra. Structural factors are also likely to be important. Polarised Raman microscopy can provide structural information that complements the chemical information acquired from the spectral data alone. Further research will analyse the Amide I and Amide III bands using polarised Raman microscopy to provide information on cross-linking between microstructures in the samples.
Additional file 1. Supporting Information. | v2 |
2022-07-07T15:02:43.748Z | 2022-07-05T00:00:00.000Z | 250318519 | s2orc/train | Experiment Study of Salt-Frost Heave on Saline Silt under the Effects of Freeze-Thaw Cycles
Multiple freeze-thaw cycle experiments were performed to determine the in-situ deformation of salt-frost heave of sulfate saline silt under long-term freeze-thaw conditions. To determine the in-situ salt-frost heave of saline silt while removing the effect of water-salt migration caused by temperature gradients, an in-situ salt-frost heave apparatus was designed to achieve a uniform cooling effect. The influence of four main factors, namely salt content, water content, initial dry density, and load, on the residual pore ratio of sulfate saline silt under freeze-thaw cycles was analyzed, and the mechanism of the accumulative in-situ salt-frost heave deformation of saline silt under freeze-thaw cycles was examined. The results are of positive significance for an in-depth understanding of the mechanism of in-situ salt-frost heave deformation in residual sulfate saline silt under the action of freeze-thaw cycles, and can also provide a reference for the prediction of long-term accumulative deformation of residual sulfate saline silt in seasonal permafrost areas of northwest China and for the assessment and prevention of salt-frost heave hazards.
Introduction
The structural connections and arrangements between soil particles are altered by the freeze-thaw phenomena that occur with fluctuations in ambient temperature, and this alteration strongly affects the physical and mechanical properties of the soil. Research has shown that freeze-thaw cycles increase the pore ratio of dense soil while reducing the pore ratio of loose soil; after repeated freeze-thaw cycles, the pore ratio of both dense and loose soil tends to a stable value, which is called the residual pore ratio [1]. Loose silt, low-density clay and normally consolidated remodeled soil are compacted during freeze-thaw cycles, resulting in increased modulus and strength and reinforced structural properties [2]. However, for strongly overconsolidated remodeled soil, the structure is weakened by the freeze-thaw cycles [3]. A large number of experiments have shown that the weakening effect of freeze-thaw cycles on soil structure is widespread: the cemented structure between soil particles is gradually destroyed by repeated freezing and thawing, the particles are rearranged, the soil structure becomes looser and looser, the cohesive force is continuously reduced, the pore ratio of the soil continuously increases, and the soil deformation is accumulative [4][5][6][7]. Electron microscopy, CT, and MIP have been used to observe the microstructure of soil after freeze-thaw action. The mechanism of the effect of freeze-thaw cycles on soil structure has been studied by many researchers at the microscopic level, and the results show that (1) freeze-thaw cycles cause crushing and agglomerating behavior of the soil, and the soil particles tend to homogenize [6,8]; (2) freeze-thaw cycles induce a decrease in the number of small pores and an increase in the number of large pores [9]; (3) with an increasing number of freeze-thaw cycles, some overhead pores consisting of large and medium pores appear in the soil samples [10][11]; and (4) the presence of salt in the soil affects the connection form of the soil skeleton [12][13].
The increment of deformation diminishes as the number of freeze-thaw cycles increases, so that the soil structure and mechanical properties gradually stabilize. The structure of the soil at stabilization is influenced by the combined action of water, salt, heat and force, and it is the end point of the structural development of the soil under freeze-thaw cycles. The pore ratio of soil in this stable equilibrium structure is the residual pore ratio. The residual pore ratio is an important index characterizing the limiting capacity for pore development of a soil with given water, salt, heat and force conditions under the action of freeze-thaw cycles, and it is important for predicting the long-term accumulative salt-frost heave deformation of the soil and for assessing the current state of salt-frost heave development.
Many researchers have suggested that water migration and the growth of ice lenses are the root cause of frost heave [14][15], whereas in residual saline silt regions there is no significant development of ice lenses, yet salt expansion and frost heave hazards are still serious [16][17]. This phenomenon indicates that in-situ salt expansion and frost heave caused by long-term freeze-thaw cycles cannot be ignored. Existing investigations mainly focus on the mechanism of frost heave caused by water migration, and they lack a systematic analysis of the salt-frost heave deformation patterns of residual saline silt in northwest China under long-term freeze-thaw conditions. In this paper, multiple freeze-thaw cycle experiments were performed and the in-situ salt-frost heave deformation of sulfate saline silt under long-term freeze-thaw conditions was determined. The effect of four main factors, including salt content, water content, initial dry density and load, on the residual pore ratio of sulfate saline silt under freeze-thaw cycles was analyzed, and the mechanism of the accumulative in-situ salt-frost heave deformation of saline silt under freeze-thaw cycles was examined. The results are of positive significance for an in-depth understanding of the mechanism of in-situ salt-frost heave deformation in residual sulfate saline silt under the action of freeze-thaw cycles, and they can also provide a reference for predicting the long-term accumulative deformation of residual sulfate saline silt in seasonal permafrost areas of northwest China and for assessing and preventing salt-frost heave hazards.
Materials and Sample Preparation
The experimental soil was collected from Lanzhou. The soil was washed with pure water to remove salt (6 times), dried, crushed, sieved (2 mm) and then sealed and stored. The physical property indices of the soil are shown in Table 1. Anhydrous sodium sulfate was dissolved in distilled water to prepare sodium sulfate solutions of given concentrations at room temperature (20±2ºC), which were mixed thoroughly with the dry soil to prepare the soil samples, so that the determination of salt-frost heave under long-term freeze-thaw conditions in residual sulfate saline silt of the seasonal permafrost region could be realized. The experiment was conducted using a homemade in-situ salt-frost heave apparatus; the structure of the apparatus is shown in Fig. 1. The soil samples were 8 cm in diameter and 2 cm in height. Salt expansion deformation was measured by a YWD-50 displacement sensor with a sensitivity of 200 με/mm. The loading device was a WG single-lever medium-pressure consolidation instrument, and the temperature was controlled by a DL2010 high-precision low-temperature bath with a temperature control range of -20ºC~100ºC and an accuracy of ±0.1ºC. For the determination of the in-situ salt-frost heave of saline silt while removing the effect of water-salt migration due to temperature gradients, the in-situ salt-frost heave apparatus was designed with two measures to achieve a uniform cooling effect. First, the inner barrel of the specimen was made of stainless steel with good thermal conductivity, allowing the specimen to be cooled from the side and the bottom at the same time. Second, the specimens were designed as cylinders with a height of 2 cm and a diameter of 8 cm. Owing to the small thickness of the specimen and the simultaneous cooling from the side and bottom, there was no obvious temperature gradient inside the specimen during the cooling process.
Experiment Design
(1) Freeze-thaw cycle experimental condition control. The temperature in the experiment was controlled by a high-precision low-temperature cold bath. The measured temperature-time variation curve at the center of the soil sample under a single freeze-thaw cycle is shown in Fig. 2 (the actual temperature in the soil sample was measured by a PT100 temperature sensor, with the measurement position at the center of the soil sample).
(2) Variables control. The controlled-variable method was adopted for this experiment, and the main influencing factors studied were salt content, water content, initial void ratio, and load. The specific variable controls are shown in Table 3.
Residual Void Ratio of Saline Silt
The salt-frost heave deformation of saline silt under multiple freeze-thaw action was accumulative, which meant that the volume of the saline silt gradually increased with the number of freeze-thaw cycles. Viklander defined the residual void ratio as the void ratio of the soil when it reaches the stable state [1]. The data of soil samples 1#, 2# and 3# showed that the residual void ratio of residual saline silt was also not affected by the initial degree of compactness of the soil samples. The variation curves of void ratio with increasing number of freeze-thaw cycles for sulfate saline silt with the same salt content (3%) and water content (16%) but different initial void ratios under multiple freeze-thaw cycles are given in Fig. 3. Fig. 3 shows that the void ratio of the soil gradually increases with the number of freeze-thaw cycles under multiple freeze-thaw action, and stabilizes after 30 freeze-thaw cycles. Residual sulfate saline silt with different initial void ratios will eventually reach the same, stable residual void ratio under multiple freeze-thaw cycles, and this residual void ratio is not affected by the initial compactness of the soil.
(Note: Soil samples to which loads were applied were exposed to the freeze-thaw experiments after their deformation under load had stabilized.)
The ultimate volumetric strain generated by saline silt under the action of long-term freeze-thaw cycles can be calculated from the soil void ratio e and the residual void ratio e_res (Eq. (1)).
If the void ratio and residual void ratio of the soil are known, the following equation can be used to calculate the accumulative volume deformation ΔV_a of residual-type sulfate saline silt under long-term freeze-thaw conditions.
where V s is the initial volume of saline silt.
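A minimal sketch of the form that Eqs. (1) and (2) take under the standard void-ratio phase relation (an assumption here, so the published expressions may differ slightly):

\[
\varepsilon_{v,\mathrm{ult}} \;=\; \frac{e_{res} - e}{1 + e},
\qquad
\Delta V_a \;=\; V_s \, \frac{e_{res} - e}{1 + e}.
\]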
The effects of water and salt migration within the soil were not considered in the above equations. Therefore, the applicability of the above equations is limited in environments with water recharge and significant water-salt migration. However, for residual sulfate saline silt foundations without water recharge in arid zones, which are little affected by water-salt migration, the above equations have great practical value. It is of great practical engineering significance to predict the ultimate volume deformation of saline silt foundations under long-term freeze-thaw cycles by using the ultimate volumetric strain obtained from the residual void ratio.
Porosity Ratio and Salt-Frost Heave
A set of soil-sample void ratios and the corresponding salt-frost heave deformation and thawing deformation data can be obtained from each freeze-thaw test. Since the void ratio of the soil sample increases gradually with the number of freeze-thaw cycles, the variation of salt-frost heave deformation with void ratio can be obtained through multiple freeze-thaw experiments, as shown in Fig. 4. From Fig. 4, it can be noticed that as the number of freeze-thaw cycles increases, the void ratio of the soil samples gradually increases and the salt-frost heave deformation decreases; the relationship between salt-frost heave deformation and void ratio follows a linear negative correlation. This indicates that in porous materials, the smaller the pore volume, the greater the crystallization deformation for a given crystallization volume. If the soil void ratio is known, the prediction of single salt-frost heave deformation can be achieved by using the relationship between void ratio and deformation rate obtained from the fitting. However, some researchers controlled the porosity of the soil by changing the sand grading, and their data showed no obvious linear relationship between the porosity and the crystalline deformation [18][19]. The pore structure of a porous material refers to the size, distribution and connectivity of its pores, while the porosity only reflects the total pore volume and does not completely describe the pore structure. Therefore, it cannot simply be stated that there is necessarily a linear negative correlation between salt-frost deformation and void ratio in saline silt. Combined with the conditions of this experiment, the following conclusion can be drawn: for the same soil at different degrees of compactness, there is a linear negative correlation between salt-frost heave deformation and void ratio.
The residual deformation is the difference between salt-frost heave deformation and thawing deformation. Fig. 5 gives the change curves of salt-frost heave deformation and thawing deformation of specimens 1#, 2# and 3# with the increasing void ratio. It can be seen that when the void ratio is small, there is a large residual deformation of the specimens, and when the void ratio reaches the residual void ratio, the residual deformation of the specimens disappears. Since the residual deformation of the specimen originates from the change of soil structure by the freeze-thaw process of saline silt, the soil structure also tends to stabilize and reach a stable equilibrium structure when the soil void ratio reaches the residual void ratio.
Many researchers have used advanced techniques to observe the microstructure of soil after freeze-thaw, and the results show that repeated freeze-thaw gradually disrupts the cementation structure between soil particles, the clumps in the soil undergo splitting and agglomeration, the particles are rearranged, and the presence of salt in the soil also affects the connection form of the soil skeleton. As the number of freeze-thaw cycles increased, the number of large pores gradually increased, and some overhead pores consisting of large and medium pores appeared in the soil samples [10][11]. Combined with previous research results, the process of structural development and overhead-pore formation in saline silt under the action of freeze-thaw cycles is analyzed here. During the cooling process, water-salt crystallization generates a large crystallization force at the contact locations of soil particles, which leads to soil fragmentation and pore enlargement, and salt-frost heave is thereby formed. Salt crystals bond fine soil particles and develop continuously during salt precipitation, forming soil-salt bound particles, and the size and arrangement of the soil agglomerates are changed. During the warming process, some of the salt crystals dissolve, so that the soil skeleton formed during cooling loses part of the support of the salt crystals, the pore volume in the soil decreases, and thawing deformation results. However, owing to the internal frictional resistance, cohesion and matric suction within the soil, and the presence of incompletely dissolved material such as salt crystals within the soil agglomerates, the agglomerates are not able to recover their original positions. As a result, unrecoverable residual deformation appears, and an overhead structure such as that shown in Fig. 6 is easily formed [16]. The overhead structure cannot develop indefinitely: when the void ratio of the soil reaches the residual void ratio, the overhead structure stops developing and forms an equilibrium structure that tends to be stable. Therefore, the salt expansion of saline silt under multiple freeze-thaw cycles is accumulative, the incremental salt expansion decreases as the pores enlarge, and the accumulative deformation eventually stabilizes once the residual void ratio is reached.
Effects of Salt Content on Residual Void Ratio
The phase change of water and salt inside the soil during the freeze-thaw cycle is the driving force for the continuous development of the soil structure, and the salt content has a significant effect on the salt-frost heave deformation produced during a single cooling of the soil. The variation of the void ratio of residual saline silt with different salt contents under long-term freeze-thaw cycles can be obtained from the data of specimens 2# and 10# to 14#, and is given in Fig. 7. It can be seen that the void ratio of the specimens with different salt contents gradually increases with the number of freeze-thaw cycles, and gradually stabilizes after 30 cycles.
According to the data in Fig. 7, the average of the last three void-ratio values of each specimen was taken as the residual void ratio, and the relationship between the residual void ratio and the salt content can be obtained, as shown in Fig. 8. It can be observed that as the salt content increases, the residual void ratio increases, then decreases and then increases again. Since the starting void ratio of the above specimens is 0.51, and the single salt-frost heave deformation of the specimens at a void ratio of 0.51 can be obtained, the residual void ratio and accumulative deformation of the specimens change with increasing salt content in the same way as the single salt-frost heave deformation, as shown in Fig. 9. This indicates that there is a strong correlation between the magnitude of the accumulative salt-frost heave deformation and the magnitude of the single salt-frost heave deformation. The relationship curves between single salt-frost heave deformation and accumulative salt-frost heave deformation are fitted in Fig. 10. There is an obvious linear positive correlation between single and accumulative salt-frost heave deformation, and the relationship equation at salt contents ≤1.2% is significantly different from that at salt contents >1.2%. From the research findings of our group, salt crystals start to precipitate before freezing in saline silt when the salt content is greater than 1.2% [20]. Therefore, when salt crystals start to precipitate before freezing of the saline silt, the relationship between single and accumulative salt-frost heave deformation changes abruptly, indicating that salt precipitation before the onset of freezing causes a significant change in the crystalline deformation mechanism of saline silt.
Fig. 6. Schematic diagram of soil overhead structure formation during freeze-thaw cycles: a) initial soil structure; b) crystallization and precipitation during the cooling process; c) soil agglomerate fragmentation during the cooling process; d) overhead structure of the soil after warming and thawing.
Effects of Water Content on Residual Void Ratio
The variation of water content not only affects the water-salt phase change under freeze-thaw cycles, but also affects the structural strength of the soil, which in turn affects the residual void ratio. From the experimental data of specimens 2# and 4# to 8#, the void-ratio variation curves of residual saline silt with different water contents under long-term freeze-thaw cycles can be obtained, as shown in Fig. 11. According to the data in Fig. 11, the average of the last three void-ratio values of each specimen was taken as the residual void ratio, and the relationship between residual void ratio and water content can be obtained, as shown in Fig. 12. Fig. 12 shows that the residual void ratio decreases linearly as the water content increases. This means that the greater the water content, the smaller the accumulative deformation of saline silt under long-term freeze-thaw action. Evidently, unlike permafrost frost-heave deformation, which requires a large amount of water recharge, saline silt produces in-situ salt-frost heave deformation without external water recharge.
Effects of Load on Residual Void Ratio
The effect of cooling rate on salt-frost heave deformation during the freeze-thaw cycles was neglected, and the residual void ratio e_res was considered as a function of salt content s (%), water content w (%) and load p (kPa). To nondimensionalize these parameters, let s′ = s/100, w′ = w/100, and p′ = p/1 atm. The software SPSS 21.0 was used to perform a nonlinear regression analysis of the experimental data, giving the equation for the residual void ratio e_res:

e_res = 1.678 − 0.0775 s′ + 0.1994 p′ − 0.576 w′   (3)

The correlation coefficient of Eq. (3) is 0.7925. The residual void ratio e_res calculated by Eq. (3) is compared with the experimental values in Fig. 15, which shows that the deviation between the fitted and experimental values is small. The prediction of the accumulative salt-frost heave deformation of sulfate saline silt under long-term freeze-thaw conditions can be achieved using Eq. (1), Eq. (2) and Eq. (3). In order to ensure the reliability of the calculation results and to provide scientific reference indexes for engineering, the conditions of application of the above equations, in combination with the conditions of this experiment, are: 1) the soil is sulfate saline silt; 2) there is no water recharge during the freeze-thaw process; and 3) the water content of the soil remains constant during the freeze-thaw cycle. These conditions are in good agreement with the actual engineering conditions of residual-type sulfate saline silt in seasonal permafrost areas. Eq. (1), Eq. (2) and Eq. (3) can provide evaluation indexes for actual engineering in the above areas to describe the salt-frost heave characteristics of a site under long-term freeze-thaw conditions.
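A minimal numerical sketch chaining Eq. (3) with the void-ratio relations of Eqs. (1)-(2) is given below; the equation forms follow the reconstruction used above, and all example inputs are illustrative rather than design values.

```python
# Sketch: predicting long-term accumulative salt-frost heave from the fitted
# residual void ratio (Eq. 3) and the void-ratio/volume relations (Eqs. 1-2).
# The functional forms and example inputs below are illustrative only.

ATM_KPA = 101.325  # 1 atm in kPa, used to normalize the load

def residual_void_ratio(s_percent, w_percent, p_kpa):
    """Fitted residual void ratio e_res as a function of salt content, water content and load."""
    s, w, p = s_percent / 100.0, w_percent / 100.0, p_kpa / ATM_KPA
    return 1.678 - 0.0775 * s + 0.1994 * p - 0.576 * w

def accumulative_volume_change(e, e_res, v_s):
    """Ultimate accumulative volume change from void ratio e to e_res (assumed delta_e/(1+e) relation)."""
    return v_s * (e_res - e) / (1.0 + e)

e0 = 0.51                                                  # initial void ratio of the specimens
e_res = residual_void_ratio(s_percent=3.0, w_percent=16.0, p_kpa=12.5)
dV = accumulative_volume_change(e0, e_res, v_s=1.0)        # per unit initial volume
print(f"e_res = {e_res:.3f}, ultimate volumetric strain = {dV:.3f}")
```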
Conclusions
The in-situ salt-frost heave deformation data of sulfate saline silt under multiple freeze-thaw cycles have been obtained through multiple freeze-thaw experiments. By analyzing the effects of four main factors, including salt content, water content, initial dry density, and load, on the residual void ratio of residual-type sulfate saline silt, the mechanism of the accumulative in-situ salt-frost heave deformation of saline silt under the action of freeze-thaw cycles has been described. The following conclusions have been obtained: (1) Sulfate saline silt specimens with different initial void ratios eventually reach the same, stable residual void ratio under multiple freeze-thaw cycles, independently of the initial compactness of the soil. At 16% water content, the residual void ratio increases, then decreases and then increases again with increasing salt content. At a salt content of 3%, the residual void ratio decreases linearly with increasing water content. The greater the load, the smaller the residual void ratio; at 16% water content and 3% salt content, a load greater than 12.5 kPa can effectively reduce the accumulation of salt-frost heave in residual saline silt.
(2) With an increasing number of freeze-thaw cycles, the void ratio of the soil samples gradually increases and the salt-frost heave deformation decreases. The salt-frost heave deformation in this process follows a linear negative correlation with the void ratio, and there is a positive correlation between the magnitude of the accumulative salt-frost heave deformation and the magnitude of the single salt-frost heave deformation.
(3) An equation for the residual void ratio with salt content, water content and load as parameters has been given in this paper. Using this formula, the accumulative salt-frost heave deformation of residual-type sulfate saline silt in the seasonal permafrost zone under long-term freeze-thaw conditions can be predicted.
2017-10-30T15:36:15.557Z | 2015-04-01T00:00:00.000Z | 33153179 | s2orc/train | Kudoa spp. (Myxozoa) infection in musculature of Plagioscion squamosissimus (Sciaenidae) in the Amazon region, Brazil
Ninety specimens of Plagioscion squamosissimus captured using fishing tackle in the Outeiro district, state of Pará, were examined. Fish were placed in plastic bags containing water, under conditions of artificial aeration, and transported live to the Carlos Azevedo Research Laboratory (LPCA), in Belém, Pará. They were anesthetized, euthanized and necropsied; small fragments of the epaxial and hypaxial muscles were removed for examination of fresh histological sections by means of optical microscopy. In 100% of the specimens analyzed, parasitic pseudocysts were seen to be interspersed within and between the skeletal muscle. These contained pseudoquadrate and/or star-shaped spores that presented four valves and four polar capsules, which were identified from their morphology as belonging to the genus Kudoa. This is the first report of Kudoa in P. squamosissimus in the Amazon region, Pará, Brazil.
Introduction
The Amazon region has an abundance and diversity of fish and is very important for commercial and artisanal fishing. The Outeiro district of Belém is located on the island of Caratateua, which is situated 18 km from the main urban area of Belém (the state capital of Pará) and is connected directly to the Icoaraci district. The island is surrounded by the murky, muddy freshwater of the Guajará bay (PEREIRA, 2001). Because of the influence of the bay, a wide variety of fish is available on the island, thereby allowing local subsistence and artisanal fishing. Among the variety of fish species is Plagioscion squamosissimus Heckel, 1840, which is popularly known as hake and accounts for one of the largest productions of artisanal and industrial fishing (BARTHEM, 1985;BOEGER & KRITSKY, 2003;VIANA et al., 2006). This species has benthopelagic habits and is carnivorous (BARTHEM, 1985;BOEGER & KRITSKY, 2003;SANTOS et al., 2006;VIANA et al., 2006). In Pará, this species has high commercial value, both in freshwater fishing in the estuary and also possibly due to the changes in salinity in the dry season (BARTHEM, 1985).
Interest in studying parasites in fish has increased over recent years because of the consequences and economic losses produced by their presence, as well as the hazards that they present to human health, with the possibility of food and/or allergic poisoning. Parasites of the phylum Myxozoa, which includes myxosporidians, have been the subject of many studies recently. The locations of aquatic myxosporidians in their hosts are varied, and they are found in nearly all tissues and organs. The species of this genus are mostly histozoic (intercellular or intracellular) and are often found in the somatic musculature of their hosts. They can also be found in the heart, intestines, gills, brain, kidneys, gallbladder and blood (LOM & DYKOVÁ, 2006;MACIEL et al., 2011).
Most species of myxosporidians have pathogenic effects on their hosts. For example, the genus Kudoa currently consists of more than 95 species (EIRAS et al., 2014) and causes lesions to the somatic musculature of fish. This may have a significant economic impact through softening of the flesh, with or without formation of macroscopic pseudocysts, which gives rise to postmortem myoliquefaction (ANDRADA et al., 2005). This muscle lysis effect is caused by the action of proteases that the parasites produce and use to soften the muscles of the host in order to promote their own development (ANDRADA et al., 2005). This effect compromises the acceptance of these fish among consumers. Furthermore, there is evidence that consumption of fish that have been infected by Kudoa may cause allergic symptoms (MARTÍNEZ DE VELASCO et al., 2007; GRABNER et al., 2012; KAWAI et al., 2012). The present study describes an infection in the muscle tissue of Plagioscion squamosissimus caught in the Amazon region.
Materials and Methods
Ninety specimens of the freshwater fish P. squamosissimus Heckel, 1840 (Teleostei, Perciformes), Brazilian common name "pescada branca", i.e. "white fish", were collected from the Amazonian estuarine region of Outeiro (1° 14' S; 48° 26' W) near the city of Belém. The mean total length of the fish was 12.28 ± 3.67 cm (range: 7.5-18.30) and their mean weight was 25.14 ± 18.48 g (range: 6.12-61.8). They were lightly anesthetized using MS 222 Sigma (Sabdoz Laboratories) diluted in freshwater and, after euthanasia, samples of infected epaxial and hypaxial muscles were taken for optical and electron microscopy studies. For optical microscopy, small fragments (0.5 cm) of parasitized tissue from the palate of the fish were fixed in Davidson solution (neutral buffered formalin, glacial acetic acid, 95% ethyl alcohol and distilled water) for 24 h and were then processed and stained with hematoxylin-eosin (HE), May-Grunwald-Giemsa and Ziehl-Neelsen (LUNA, 1968). The stained sections were documented using a Zeiss Primo Star optical microscope and a Zeiss AxioCam ERc 5s microscope camera with Micrometrics imaging software. Some fragments containing tissue cysts were analyzed using a differential interference contrast (DIC) microscope (Nomarski). Measurements of the spores were made in micrometers (µm), with minimum and maximum values in parentheses, and the dimensions were expressed as means ± standard deviations. The BioStat 5.0 software was used for the statistical calculations (AYRES et al., 2007). Parasite prevalence was analyzed according to Bush et al. (1997). The spore measurements were made in accordance with Lom & Dyková (1992), under 1000X magnification.
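As a simple illustration of how the prevalence (following Bush et al., 1997) and the spore measurement statistics could be computed, a minimal sketch follows; the numerical inputs are placeholders rather than the measured data set.

```python
# Sketch: prevalence (Bush et al., 1997: infected hosts / examined hosts) and
# mean +/- standard deviation of spore measurements. Values are placeholders.
import statistics

examined_hosts = 90
infected_hosts = 90                             # all examined specimens showed pseudocysts
prevalence = 100.0 * infected_hosts / examined_hosts
print(f"prevalence = {prevalence:.1f}%")

spore_lengths_um = [9.6, 9.8, 9.7, 9.5, 9.9]    # illustrative stellate-spore lengths
mean_len = statistics.mean(spore_lengths_um)
sd_len = statistics.stdev(spore_lengths_um)
print(f"length = {mean_len:.2f} +/- {sd_len:.2f} um "
      f"({min(spore_lengths_um)}-{max(spore_lengths_um)})")
```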
Results and Discussion
In fresh preparations of muscle tissue fragments viewed under an optical microscope, pseudocysts were seen within and between the muscle fibers. When this material was squashed between the slide and cover slip, large numbers of spores of different shapes and sizes were observed (Figure 1). The rectangular or pseudoquadrate format and/or star shape, with four valves and four polar capsules (Figures 2 and 3), allowed the spores to be assigned to the genus Kudoa. The morphological data of the parasite and the site of infection are corroborated by Heckmann (2012), Moran et al. (1999) and Lom & Dyková (1992), who reported that members of the genus Kudoa infect the muscle tissues of their hosts and can present a stellate, square or rounded-square shape in apical view.
Some studies on other host species have already recorded Kudoa spores of stellate format infecting the somatic musculature, such as Kudoa histolytica Pérard, 1928; Kudoa cruciformum Matsumoto, 1954; Kudoa kabatai Shulman and Kovaleva, 1979; Kudoa bengalensis Sarkar and Mazumder, 1983; Kudoa mirabilis Naidenova and Gaevskaya, 1991; Kudoa cynoglossi Obiekezie and Lick, 1994; and Kudoa miniauriculata Whitaker et al., 1996; and also of pseudoquadrate format: Kudoa funduli Hahn, 1915; Kudoa clupeidae Hahn, 1917; Kudoa crumena Iversen and Van Meter, 1967; Kudoa alliaria Shulman and Kovaleva, 1979; Kudoa amamiensis Egusa and Nakajima, 1980; Kudoa caudata Kovaleva and Gaevskaya, 1983; and Kudoa leiostomi Dyková et al., 1994 (MORAN et al., 1999). The infection site observed was limited to the skeletal muscles of the host (P. squamosissimus), and pseudocysts could not be seen macroscopically in any infected animal. According to Abdel-Ghaffar et al. (2012), most species of the genus Kudoa form macroscopic pseudocysts in muscles and cause economic problems with regard to the sale of infected fish. On the other hand, as in the results from the present study, little or no inflammatory reaction has been found associated with this parasitosis (ANDRADA et al., 2005; CASAL et al., 2008; BURGER & ADLARD, 2010; HEINIGER & ADLARD, 2012; GRABNER et al., 2012; HEINIGER et al., 2013). According to Grabner et al. (2012), Kudoa species were found in the muscle tissue of Paralichthys olivaceus, but no cyst formation or myoliquefaction was visible. On the other hand, myoliquefaction was found in Paralichthys orbignyanus parasitized by Myxobolus sp. (EIRAS et al., 2007). In Trichiurus lepturus, parasitic cysts were found in somatic muscle fibers, but with no inflammatory reaction (ANDRADA et al., 2005). In Aequidens plagiozonatus, Kudoa aequidens was reported with ultrastructural data; this was the first report in Brazilian aquatic fauna, in the subopercular muscles, and it was emphasized that no direct inflammatory responses were found in the fibers but, rather, free spores that had disintegrated between myofibrils, indicating that muscle liquefaction was associated with the presence of spores (CASAL et al., 2008).
The biometric data for the stellate-format spores of Kudoa spp. in apical view comprised a length of 9.70 ± 0.14 µm and a width of 9.33 ± 0.25 µm. The four piriform polar capsules consisted of two larger and two smaller capsules, with lengths of 4.05 ± 0.07 µm and 3.63 ± 0.18 µm and widths of 1.28 ± 0.32 µm and 1.15 ± 0.14 µm, respectively. Spores of pseudoquadrate format presented a length of 5.63 ± 0.18 µm and a width of 5.60 ± 0.07 µm; their four polar capsules were equal in size and piriform, with a length of 1.75 ± 0.24 µm and a width of 0.98 ± 0.06 µm. Table 1 shows the spore sizes and shapes and the lengths and widths of the polar capsules of some Kudoa species, in comparison with the data of the present study.
From the morphological comparison with other species of the genus Kudoa, it can be seen that the length, width and polar capsule measurements of the Kudoa spores in the present study differ from those of previously described species, suggesting that they represent two new species. Previously, in fish from the Amazon River, only K. aequidens had been described by Casal et al. (2008), in Aequidens plagiozonatus (common name: cará pixuna).
Examination of the histological sections revealed that the parasite Kudoa spp. was present within and between the muscle fibers (Figures 4-7), with 100% prevalence. The extent of the damage and the fate of the infected muscle fibers indicate that complete replacement of the fibers by mature spores may occur, within and between the skeletal muscle fibers, thus altering the swimming behavior of the fish and making it more vulnerable to predators. This was also found by Dyková et al. (2009), who observed that muscle fibers infected with K. inornata were completely replaced by spores.
Since presence of Kudoa spores in the fish host can cause lesions in its muscles, this disease has a significant economic impact because of postmortem myoliquefaction. This muscle lysis effect comes from the action of proteases, thereby leaving the muscle of the host softened (ANDRADA et al., 2005) and compromising the acceptance of the fish among consumers. Furthermore, there is evidence that consumption of fish infected with Kudoa may cause allergy symptoms or food poisoning (MARTÍNEZ DE VELASCO et al., 2007;GRABNER et al., 2012;KAWAI et al., 2012).
The microscopic observations in the present study provide the first confirmation that the infection in the musculature of P. squamosissimus is related to parasitism by two different spore types of the genus Kudoa. There is a high possibility that these can be treated as two new species, but analyses using transmission electron microscopy and molecular biology are necessary in order to determine the species. Studies of this nature are of great relevance, since parasitism by Kudoa may damage the muscles of the host, which is considered the finest part of the fish, thus causing commercial losses.
2018-11-17T13:25:51.512Z | 2018-06-20T00:00:00.000Z | 53687189 | s2orc/train | Multiscale formulation for coupled flow-heat equations arising from single-phase flow in fractured geothermal reservoirs
Efficient heat exploitation strategies from geothermal systems demand for accurate and efficient simulation of coupled flow-heat equations on large-scale heterogeneous fractured formations. While the accuracy depends on honouring high-resolution discrete fractures and rock heterogeneities, specially avoiding excessive upscaled quantities, the efficiency can be maintained if scalable model-reduction computational frameworks are developed. Addressing both aspects, this work presents a multiscale formulation for geothermal reservoirs. To this end, the nonlinear time-dependent (transient) multiscale coarse-scale system is obtained, for both pressure and temperature unknowns, based on elliptic locally solved basis functions. These basis functions account for fine-scale heterogeneity and discrete fractures, leading to accurate and efficient simulation strategies. The flow-heat coupling is treated in a sequential implicit loop, where in each stage, the multiscale stage is complemented by an ILU(0) smoother stage to guarantee convergence to any desired accuracy. Numerical results are presented in 2D to systematically analyze the multiscale approximate solutions compared with the fine scale ones for many challenging cases, including the outcrop-based geological fractured field. These results show that the developed multiscale formulation casts a promising framework for the real-field enhanced geothermal formations.
increase the effective formation conductivity. These formations are naturally developed over large (km) length scales, while the heterogeneity of the damaged matrix needs to be resolved at fine (e.g., cm) scales. Fractures naturally add to the complexity of the mathematical formulations by introducing significant contrasts in the conductivity and geometry [1][2][3][4]. Their important role in the flow and transport of mass and energy can be properly investigated only if they are explicitly represented in the computational domain [5][6][7][8][9][10][11][12]. The embedded discrete fracture model (EDFM) [13][14][15] has been developed to resolve several geometrical challenges due to explicit treatment of the fractures. EDFM has been extended to complex scenarios in multiphase iso-thermal reservoir simulation [16][17][18], and, importantly, to geothermal systems [19]. Recently, a consistent projection-based EDFM (pEDFM) for flow has been proposed to account for all types of fracture conductivities (from flow barriers to high conductive channels) [20].
The size of the final fine-scale systems describing single-phase flow in fractured geothermal reservoirs, even after applying the EDFM modeling approach, is beyond the capabilities of state-of-the-art commercial simulators. Upscaling these highly heterogeneous discrete quantities in order to reduce the computational costs would lead to inaccurate simulations, with no ability to control the error with respect to the reference system. Therefore, new modeling and simulation techniques are more than ever in demand.
Multiscale finite volume methods have been developed for resolving this computational challenge by constructing coarse-scale systems based on local basis functions [21,22]. They are mainly developed for flow equations with complex fluid physics [23]. Together with their recent developments for fractured media [10,15,[24][25][26][27], they form a promising approach for real-field applications. For geothermal applications, however, the coupled flowheat equations need to be considered, which leads to additional complexities both in linear (size of the discrete system) and nonlinear (temperature-dependent coefficients and nonlinear coupling terms) aspects [28].
In this work, we propose a multiscale formulation for coupled flow-heat equations in fractured porous media, where not only the flow but also the heat equation is mapped to the coarse-scale system by using local basis functions. We investigate two different matrix-fracture coupling procedures for the heat and flow basis functions, namely totally independent and semi-dependent. These two approaches differ from each other in the amount of matrix-fracture coupling and the number of matrix basis functions used in the local system construction. The nonlinear coupling between flow and heat is treated with a sequential implicit approach.
Several challenging test cases are considered where the fractures play major role in transport of the cold water into the reservoir, and thus enhancing the production of heat. The large temperature gradients (due to slow to fast flow field) adds to the complexity of the simulations, which form a good basis to investigate the accuracy of the developed multiscale method. For all the investigated test cases, including the outcrop-based characterized formation, the basis function formulation for both the flow and heat equations are shown to be able to approximate the reference fine-scale solutions very well. Specially, the very first approximate multiscale solutions (with no iterations) are compared with the fine-scale solutions after the first Newton update, with very close agreement. Note that, even though a single-phase flow is considered here, the conservative velocity field of the Multiscale Finite Volume (MSFV) method is crucial for accurate transport of enthalpy which appears in the heat balance equation. The proposed multiscale finite volume method therefore casts a promising approach for field-scale geothermal studies.
Note that this work focuses on developing an accurate method to approximate the fine-scale (fully resolved) solution. While efficiency improvement is also part of the goal of the multiscale method, it is not extensively studied in this work and will be the subject of future studies and development. However, the efficiency of the multiscale method as a pressure solver in fractured reservoirs has been studied, and interested readers are referred to [24]. This paper is organized as follows. First, the mass and energy conservation equations of the single-phase water system in fractured porous media are presented, together with the coupling strategy used to calculate pressure and temperature. Then, the MSFV method is introduced, and its application to the pressure and temperature calculation is explained. After that, the numerical results are presented and discussed, including the simulation results for a real-field fracture geometry taken from outcrop data. Finally, the conclusions are presented.
Mass conservation equation
Mass conservation for single-phase flow in fractured porous media with the embedded discrete fracture modelling (EDFM) approach reads

∂(φρ)/∂t − ∇ · (ρλ · ∇p) = ρ q^w + ρ q^{mf}   (1)

for the matrix on Ω^m ⊂ R^n, and

∂(φρ)/∂t − ∇ · (ρλ · ∇p) = ρ q^w + ρ q^{fm}   (2)

on Ω^f ⊂ R^{n−1} for the fractures. In this work, n = 2, but these general formulations are also valid for a three-dimensional domain (i.e., n = 3). Note that if gravitational effects are considered, one has to replace the pressure p with the potential (p − ρgz), where ρ, g, and z are, respectively, the density, the gravitational acceleration, and the coordinate along the gravity direction (pointing downward). Moreover, the superscripts m, f, and w indicate, respectively, the matrix, fracture, and well (source) terms; φ stands for the porosity, λ for the mobility, and q for the flow rate. Note that the equation for the fractures is defined in a lower-dimensional space than that for the matrix. The mobility is defined as λ = k/μ, where k is the absolute permeability and μ is the water viscosity. While the matrix permeability is characterized from geological inputs (possibly after proper upscaling), the permeability of the fractures can be defined (under the assumption of fully developed flow between parallel plates of distance (aperture) a) as k^f = a²/12.
The well flow rates read

q^{m,w} = β^m (p^w − p^m)   (3)

for the matrix and

q^{f,w} = β^f (p^w − p^f)   (4)

for the fractures, where PI is the well productivity index [29] and β is the normalized well productivity index, with β^m normalized by the matrix control volume V (i.e., β^m = (PI λ)/V) and β^f normalized by the fracture area A (i.e., β^f = (PI λ)/A). The discrete flux exchange between matrix and fractures, i.e., q^{mf} and q^{fm}, is modeled using the EDFM approach as

q^{mf} = η^m (p^f − p^m)   (5)

and

q^{fm} = η^f (p^m − p^f).   (6)

Here, CI is the connectivity index between matrix and fracture, λ^{f−m} is the effective mobility at the matrix-fracture interface, and η is the normalized connectivity index, with η^m normalized by the matrix control volume V (i.e., η^m = CI λ^{f−m}/V) and η^f normalized by the fracture area A (i.e., η^f = CI λ^{f−m}/A) [24]. The discrete connectivity index CI allows for the representation of the discrete fracture element i overlapping with the matrix element j, i.e.,

CI_{i−j} = A_{i−j} / ⟨d⟩_{i−j},   (7)

where A_{i−j} is the surface area of fracture element i inside element j, and ⟨d⟩_{i−j} is the average normal distance between the two elements [15].
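As an illustration of how the EDFM coupling coefficients of Eqs. (5)-(7) could be assembled in practice, a minimal sketch follows; all numbers are placeholders, and the harmonic averaging of the matrix and fracture mobilities at the interface is an assumption, not a statement of the authors' choice.

```python
# Sketch: EDFM matrix-fracture coupling coefficients (Eqs. 5-7).
# CI = A_overlap / <d>; eta^m = CI * lam_fm / V; eta^f = CI * lam_fm / A_frac.
# All numerical inputs are placeholders; the harmonic interface mobility is an assumption.

def harmonic_mean(a, b):
    return 2.0 * a * b / (a + b)

mu = 5.0e-4                        # water viscosity [Pa s]
k_m = 1.0e-14                      # matrix permeability [m^2]
aperture = 1.0e-3                  # fracture aperture [m]
k_f = aperture**2 / 12.0           # parallel-plate fracture permeability, k^f = a^2/12

lam_fm = harmonic_mean(k_m / mu, k_f / mu)   # effective matrix-fracture interface mobility

A_overlap = 5.0                    # area of fracture element i inside matrix cell j [m^2]
d_avg = 2.5                        # average normal distance between the elements [m]
CI = A_overlap / d_avg             # connectivity index, Eq. (7)

V_cell = 1.0e3                     # matrix control volume [m^3]
A_frac = 5.0                       # fracture element area [m^2]
eta_m = CI * lam_fm / V_cell       # normalized by the matrix control volume
eta_f = CI * lam_fm / A_frac       # normalized by the fracture element area

p_m, p_f = 2.0e7, 1.9e7            # matrix and fracture pressures [Pa]
q_mf = eta_m * (p_f - p_m)         # Eq. (5): exchange rate seen by the matrix cell
q_fm = eta_f * (p_m - p_f)         # Eq. (6): exchange rate seen by the fracture element
print(q_mf, q_fm)
```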
Energy conservation equation
In this study, local thermal equilibrium between the fluid and the solid is assumed [30][31][32], i.e., the rock and the fluid have the same temperature at any given location. Under this assumption, the single-phase energy conservation equation reads

∂(φρU + (1 − φ) ρ_r C_{p_r} T)/∂t + ∇ · (h u) − ∇ · (λ_c · ∇T) = q^w_H + q^{mf}_c + q^{mf}_{conv}   (8)

on Ω^m ⊂ R^n for the matrix, and

∂(φρU + (1 − φ) ρ_r C_{p_r} T)/∂t + ∇ · (h u) − ∇ · (λ_c · ∇T) = q^w_H + q^{fm}_c + q^{fm}_{conv}   (9)

on Ω^f ⊂ R^{n−1} for the fractures. Here, ρ_r and C_{p_r} are the density and the specific heat capacity of the rock, U and h are the water specific internal energy and specific enthalpy, respectively, T is the temperature, and λ_c is the average thermal conductivity, computed as λ_c = φ λ_{cw} + (1 − φ) λ_{cr}. The well source term q^{*w}_H is the exchanged mass rate q^{*w}, defined in Eqs. 3 and 4, multiplied by the corresponding specific enthalpy. Finally, u is the mass flow rate, which reads u = −ρλ · ∇p according to Darcy's law. From Eqs. 8 and 9, the matrix-fracture heat coupling is divided into two parts: conduction and convection. The conduction coupling terms q_c are defined analogously to the matrix-fracture mass (flow) transfer, i.e.,

q^{mf}_c = η^m_c (T^f − T^m)

and

q^{fm}_c = η^f_c (T^m − T^f),

where η_c is the normalized conductive connectivity index, with η^m_c normalized by the matrix control volume V (i.e., η^m_c = CI λ_c^{f−m}/V) and η^f_c normalized by the fracture area A (i.e., η^f_c = CI λ_c^{f−m}/A), while the convective coupling terms q^{mf}_{conv} and q^{fm}_{conv} carry the enthalpy associated with the exchanged mass fluxes q^{mf} and q^{fm}. Note that the convection and conduction heat transfer are defined in a strictly conservative manner, i.e., the heat leaving a matrix control volume enters the connected fracture elements, and vice versa.
Sequential implicit simulation strategy
The fluid properties depend on both pressure and temperature (see Appendix A for detailed formulations). These properties introduce nonlinearity into each of the mass and heat transfer equations, as well as a co-dependency between the coupled equations. A sequential implicit formulation is followed to treat the nonlinearly coupled mass and heat transfer equations for both fine-scale and multiscale simulations: the pressure equation is solved first, the total velocities are then obtained, and finally the temperature solution in the fractured media is obtained. Note that these equations are nonlinear functions of pressure and temperature and are therefore solved implicitly using the Newton-Raphson method.
The linearized equation for the generic unknown x (i.e., p or T) reads

A^ν_x δx^{ν+1} = f^ν_x,   (17)

or, in expanded block form over the matrix, fracture, and well unknowns,

[ A^{mm} A^{mf} A^{mw} ; A^{fm} A^{ff} A^{fw} ; A^{wm} A^{wf} A^{ww} ]^ν [ δx^m ; δx^f ; δx^w ]^{ν+1} = [ f^m ; f^f ; f^w ]^ν_x.   (18)

The superscript ν indicates the iteration stage, A the system matrix, and the vector f^ν_x stands for the right-hand-side terms. Note that for each unknown x, matrix, fracture, and well terms are present, and that both the system matrix and the right-hand-side terms depend on p and T. As such, the complexity of the system is quite significant for real-field applications.
Algorithm 1 Sequential solver procedure
1: while not converged do
2:    Pressure solver: solve for p^{ν+1}
3:    Update pressure-dependent properties
4:    Temperature solver: solve for T^{ν+1}
5:    Update temperature-dependent properties and check convergence
6: end while
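A minimal Python sketch of this sequential loop is given below. The two solver callbacks are hypothetical placeholders (they do not implement the implicit Newton solvers of this work); the sketch only illustrates the order of operations of Algorithm 1 under those assumptions.

```python
import numpy as np

def solve_pressure(p, T):
    # Placeholder for the implicit pressure solve; returns an updated pressure field.
    return 0.9 * p + 0.1 * T

def solve_temperature(p, T):
    # Placeholder for the implicit temperature solve; returns an updated temperature field.
    return 0.95 * T + 0.05 * p

def sequential_implicit_step(p, T, tol=1e-8, max_outer=50):
    """Outer sequential-implicit loop: pressure -> property update -> temperature."""
    for _ in range(max_outer):
        p_new = solve_pressure(p, T)          # step 2
        # step 3: pressure-dependent properties would be updated here
        T_new = solve_temperature(p_new, T)   # step 4
        # step 5: temperature-dependent properties updated here; convergence check:
        if max(np.linalg.norm(p_new - p), np.linalg.norm(T_new - T)) < tol:
            return p_new, T_new
        p, T = p_new, T_new
    return p, T

p0 = np.full(10, 1.0e7)    # illustrative initial pressure field [Pa]
T0 = np.full(10, 350.0)    # illustrative initial temperature field [K]
p, T = sequential_implicit_step(p0, T0)
```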
MSFV method for nonlinear flow and heat equations
The MSFV method approximates the fine-scale solution by superposing the coarse-scale solution with the basis functions as interpolators. The approximate solution is defined as x′^{*} = Σ_{c=1}^{N_c} Φ^{*m}_c x̄^m_c + Σ_{i=1}^{N_f} Σ_{c=1}^{N_{cf_i}} Φ^{*f_i}_c x̄^{f_i}_c + Σ_{w=1}^{N_w} Φ^{*w}_w x̄^w (N_c being the number of primal coarse cells in the matrix), where x is a generic term denoting the unknown (i.e., p or T), x′ is the approximate fine-scale solution, and x̄ is the coarse-scale solution. The superscript * indicates the domain on which the unknowns are defined (i.e., matrix or fracture), and Φ^{*•}_x denotes the local basis functions in domain * coupled with domain • (i.e., matrix (m), fracture (f), or well (w)).
Finally, N_f is the number of fracture networks, N_{cf_i} the number of primal coarse cells in fracture i, and N_w the number of wells (N_{w,inj} in the temperature system, which is the number of injection wells).
Multiscale grids
In the MSFV methods, two types of coarse grids are constructed and imposed on the fine-scale grid. The primal coarse cells are constructed as the coarse-scale control volumes, while the dual coarse grids are overlapping coarse grids bounded by the coarse nodes (vertices), on which the local basis functions are computed (see Fig. 1). Figure 1 also illustrates the embedded fracture networks. Using EDFM benefits the multiscale implementation in that the coarsening strategy of the fracture elements is entirely independent of the matrix coarsening. Moreover, the fracture elements can be connected to any matrix cell, i.e., a vertex, edge, or an internal cell (based on the dual-coarse-cell partition [33]).
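As an illustration of this independent coarsening, the sketch below builds a primal partition map for a Cartesian fine grid and, separately, for the fine elements of a single fracture. The grid sizes are placeholders chosen only to mimic the scale of the test cases; the actual gridding used in this work may differ.

```python
import numpy as np

def primal_partition_2d(nx, ny, cx, cy):
    """Assign each fine cell (i, j) of an nx-by-ny grid to one of cx*cy primal coarse cells."""
    ix = np.minimum(np.arange(nx) * cx // nx, cx - 1)
    iy = np.minimum(np.arange(ny) * cy // ny, cy - 1)
    I, J = np.meshgrid(ix, iy, indexing="ij")
    return (I + cx * J).ravel()          # coarse-cell index per fine cell

def primal_partition_1d(n_frac_cells, c_frac):
    """Independent coarsening of a fracture's fine elements into c_frac coarse cells."""
    return np.minimum(np.arange(n_frac_cells) * c_frac // n_frac_cells, c_frac - 1)

part_m = primal_partition_2d(99, 99, 9, 9)   # e.g. 99x99 fine cells, 9x9 coarse cells
part_f = primal_partition_1d(85, 8)          # e.g. 85 fracture elements, 8 coarse cells
print(len(np.unique(part_m)), len(np.unique(part_f)))   # 81 matrix and 8 fracture coarse cells
```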
MSFV for the flow equation
In the development of an efficient multiscale method, the proper choice of basis-function formulation is important. The important factors to consider for basis functions are their accuracy in representing the underlying heterogeneity (accuracy) and their independence from the primary unknowns, which allows for adaptivity (efficiency). In this work, these aspects are considered in the formulation of both the pressure and temperature basis functions. The general formulation of the pressure basis function is written as −∇·(λ·∇Φ^{*•}_p) = 0 (20), where the treatment of the matrix–fracture coupling is different for each coupling strategy. Equation 20 is formulated based on an equivalent incompressible system equation. This formulation is proven [34] to be the most efficient strategy (based on CPU measurements), because it eliminates the need to frequently update the local basis functions, while the fully compressible coarse-scale system takes care of the global compressibility (and time-dependent) effects.
The pressure basis function formulated using Eq. 20 needs to be calculated at the beginning of the simulation and updated only if the water mobility λ changes above a prescribed tolerance (i.e., due to the change of the water viscosity). Moreover, with regard to the matrix–fracture coupling, we consider two different coupling strategies for the basis-function calculation, namely the totally independent (fully decoupled) and the semi-dependent (coupled) strategy. More precisely, the totally independent (decoupled) strategy formulates the basis functions without any coupling between matrix and fractures, while the semi-dependent strategy is formulated with partial coupling between matrix and fractures [24]. In the semi-dependent strategy, the matrix basis function is coupled with the fractures, but the fracture basis function is decoupled from the matrix. This way, the semi-dependent approach enriches the matrix basis functions with the number of fracture coarse cells inside a dual-coarse domain.
Totally independent approach for basis functions
In the totally independent coupling strategy, all basis functions are calculated independently of interactions with the other domains, i.e., −∇·(λ·∇Φ^{mm}_p) = 0 and −∇·(λ·∇Φ^{ff}_p) = 0, with the cross-coupling functions Φ^{mf}_p and Φ^{fm}_p set to zero. An example of the pressure basis function calculated using this strategy is shown in Fig. 2. Figure 2a shows that the matrix basis function is not affected by the fracture existence, and Fig. 2b that the fracture basis function is not affected by the matrix. The basis functions form a partition of unity, meaning that the sum of all the basis functions is equal to 1.
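The partition-of-unity property is easy to verify numerically once the basis functions are assembled column-wise into a prolongation operator: every row of interpolation weights must sum to one. The toy 1D hat functions below only illustrate the check; they are not the basis functions of this work.

```python
import numpy as np

n_fine, n_coarse = 9, 3
centers = np.array([1, 4, 7])                  # coarse-node locations on the fine grid (toy)
P = np.zeros((n_fine, n_coarse))
for i in range(n_fine):
    w = np.maximum(0.0, 1.0 - np.abs(i - centers) / 3.0)   # hat functions with support 3
    P[i] = w / w.sum()                         # normalize each row -> partition of unity

# Each fine cell's weights sum to one, so constants are interpolated exactly.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```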
Semi-dependent approach for basis functions
In the semi-dependent coupling strategy, the fracture basis function Φ^{ff}_p is first calculated decoupled from the matrix basis function, using −∇·(λ·∇Φ^{ff}_p) = 0. These values are then used as Dirichlet boundary conditions to calculate Φ^{mf}_p, with the matrix–fracture connectivity included in the local matrix problem to account for the connectivity of the matrix basis function with the fracture domain. An example is shown in Fig. 3a, where Φ^{ff}_p is plotted in the fractures and Φ^{mf}_p is plotted in the matrix, with the coupling effect clearly observed.
The matrix basis function Φ^{mm}_p is calculated by setting Φ^{fm}_p = 0 in the fractures, therefore also accounting for the fracture existence. An example is shown in Fig. 3b, where the fracture basis function Φ^{fm}_p is set to 0 and the matrix basis function Φ^{mm}_p shows the effect of the fracture existence, as though the fractures acted as flow barriers. This strategy also results in a partition of unity.
Fine scale flux reconstruction
In the pressure MSFV method, one of the most important steps is the fine-scale conservative flux reconstruction. In MSFV, the mass fluxes are conservative only at the coarse scale. Therefore, the fine-scale fluxes need to be obtained via an additional reconstruction step [35]. This is especially important in multiphase flows, to accurately predict the saturation front, since the fractional flow is sensitive to the flux. In geothermal simulations, a conservative mass flux is also needed in the convection part of the energy balance calculation due to its velocity dependency. Therefore, it is worth revisiting the fine-scale flux reconstruction in this subsection.
The mass flow rate formulation is valid at the primal coarse-cell boundaries ∂Ω_c. The conservative fine-scale flux can be reconstructed after solving the pressure equation locally on the primal coarse cells Ω_c, subject to the boundary condition (ρλ∇p′_c)·n̂_c = (ρλ∇p′)·n̂_c (27) at ∂Ω_c. Here, n̂_c is the normal vector pointing out of the primal coarse-cell boundaries, meaning that the fluxes at the coarse-cell interfaces are used as Neumann boundary conditions to calculate the reconstructed local pressure p′_c. The locally conservative mass flux is finally reconstructed as u′_c = −ρλ∇p′_c.
MSFV for the heat equation
To exploit the efficiency of the temperature basis functions, they are formulated based on the conduction term within the whole energy balance equation. This allows for a convenient implementation, as well as an efficient algorithm (since the basis functions are not required to be frequently updated). The general formulation of the temperature basis function can be defined as −∇·(λ_c ∇Φ^{*•}_T) = 0 (29), where the treatment of the matrix–fracture coupling is different for each coupling approach and is defined the same way as in the pressure MSFV method (see Eqs. 21, 22, 23, and 24).
Note that the temperature basis-function formulation is slightly different from the pressure basis function. This is due to the fact that the well source term in the energy balance equation is defined through the enthalpy flow. Therefore, there is no explicit relation connecting the well and the matrix or fracture temperature. As such, the well basis function is omitted in Eq. 29.
The temperature basis functions depend only on the thermal conductivity. Since the thermal conductivity is considered to be constant in this work, the basis functions do not need to be updated frequently, as they also remain constant. In models where the thermal conductivity is non-constant, an adaptive update of the temperature basis functions is necessary (i.e., if λ_c changes above a prescribed tolerance). As will be seen in the results section, this formulation works well in interpolating the coarse-scale temperature values to the fine scale. Note that these thermal basis functions, along with the flow basis functions, form the full prolongation (interpolation) operator to map between coarse- and fine-scale values for flow and heat.
Multiscale algebraic description and algorithm
In the algebraic formulation of the MSFV method, i.e., AMS [36,37], the multiscale procedure can be described by the prolongation P and the restriction R operators. The prolongation operator is a matrix constructed from the basis-function values (interpolators) to map the coarse-scale to the fine-scale solution. The restriction operator, on the other hand, maps from the fine scale to the coarse scale. In the finite-volume formulation, it acts as an integrator of all the fine-scale fluxes, source/sink terms, as well as the accumulation inside a primal coarse cell [36]. In this section, the algebraic description is explained in a generic way for both pressure and temperature calculations. More specifically, the prolongation operator reads P = [P^m; P^f; P^w], with block structure [[P^{mm}, P^{mf}, P^{mw}], [P^{fm}, P^{ff}, P^{fw}], [P^{wm}, P^{wf}, P^{ww}]], where P^{*} stores the basis functions Φ^{*•}_{c,d} defined on domain *. In the totally independent coupling strategy, the submatrices P^{mf} and P^{fm} are zero matrices, resulting in a sparser prolongation operator than in the semi-dependent coupling strategy, where only P^{fm} is zero. Note also that, for the temperature calculation, P^{*w} has column size N_{w,inj} and P^{w*} has row size N_{w,inj} instead of N_w.
The MSFV restriction operator is defined as a finite-volume integrator, i.e., R_{ij} = 1 if fine cell j lies inside primal coarse cell i and R_{ij} = 0 otherwise, while in the MSFE method the restriction is defined as the transpose of the prolongation operator, R_{FE} = P^T. Now that both operators are defined, the coarse-scale system is written algebraically as (R A^ν P) x̄^{ν+1/2}_c = R f^ν (33), where x̄^{ν+1/2}_c is the coarse-scale solution (i.e., pressure or temperature), and the superscript ν + 1/2 indicates that this stage will be complemented by a second-stage smoother, to be explained later.
Note that in Eq. 33, (R A^ν P) constructs the coarse system matrix A^ν_c. The approximate fine-scale solution is found as x′^{ν+1/2} = P x̄^{ν+1/2}_c (34) or, in residual form, δx^{ν+1/2} = P δx̄^{ν+1/2}_c = P (R A^ν P)^{−1} R r^ν, where r^ν is the fine-scale residual, calculated as r^ν = f^ν − A^ν x^ν. In each solver, both δp^{ν+1/2} and δT^{ν+1/2} are calculated first using the multiscale operators (see Eq. 35), and then a second-stage smoother (in this study, ILU(0)) is applied. Here, we employ 5 ILU(0) iterations per stage. This two-stage multiscale procedure is repeated iteratively until the norm of the residual goes below the prescribed tolerance.
The approximate fine-scale solution is finally calculated as x^{ν+1} = x^ν + δx^{ν+1/2} + δx^{ν+2/2}, where x ∈ {p, T}. The MS algorithms for pressure and temperature are presented in Algorithm 2.
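The two-stage structure (multiscale correction followed by smoothing) can be sketched in a few lines of Python. The aggregation-based R and piecewise-constant P below are generic stand-ins, not the finite-volume restriction and basis-function prolongation of this work, and scipy's incomplete-LU factorization is used in place of the ILU(0) smoother.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ams_iterations(A, b, P, R, x0, n_smooth=5, tol=1e-8, max_iter=100):
    """Two-stage iteration: dx = P (R A P)^{-1} R r, followed by n_smooth ILU-type sweeps."""
    ilu = spla.spilu(sp.csc_matrix(A))            # incomplete-LU smoother (stand-in for ILU(0))
    Ac = (R @ A @ P).tocsc()                      # coarse-scale system matrix
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + P @ spla.spsolve(Ac, R @ r)       # stage 1: multiscale correction
        for _ in range(n_smooth):                 # stage 2: smoothing iterations
            x = x + ilu.solve(b - A @ x)
    return x

# Tiny illustration: 1D Laplacian, aggregation restriction, piecewise-constant prolongation.
n, nc = 9, 3
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
agg = np.repeat(np.arange(nc), n // nc)
R = sp.csr_matrix((np.ones(n), (agg, np.arange(n))), shape=(nc, n))
P = R.T.tocsr()
b = np.ones(n)
x = ams_iterations(A, b, P, R, np.zeros(n))
print(x)
```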
Numerical results
In this section, numerical results are presented, first to validate the EDFM model for the coupled flow-heat equations, and then to investigate the performance of the multiscale simulation strategy for fractured reservoirs.
Test case 1: validation of EDFM
In this test case, the fine scale EDFM model is validated by comparing it to the result of the fully resolved Direct Numerical Simulation (DNS), used as a reference. The DNS result is obtained by using a very fine grid such that the fractures are captured as equi-dimensional (heterogeneous) objects [15]. EDFM, on the other hand, imposes much coarser grids and models the impact of the explicit lower-dimensional fractures by introducing fracture-matrix connectivities.
The fracture aperture is 0.0101 m, which can be fully resolved by imposing 99 × 99 DNS grid cells (Fig. 4). This aperture leads to a fracture permeability of k^f = 8.50 × 10^{−6} m². The simulation parameters are shown in Table 1. Figure 5 presents the pressure and temperature solutions obtained from the EDFM and DNS simulators. Note that the EDFM solutions are obtained by imposing only 11 × 11 matrix cells and 14 fracture elements. It is clear that the EDFM solutions are in good agreement with the DNS reference ones.
The following measure is considered for the error of pressure and temperature: e_x = ||x_EDFM − x_DNS||_2 / ||x_DNS||_2, assuming ||x_DNS||_2 ≠ 0, where x is the flow rate q for pressure and the enthalpy flux q_H for temperature at both the left and right boundary faces. The error norms for both pressure and temperature at different time steps are plotted in Fig. 6. This figure also presents the EDFM error study at different times for the case when 33 × 33 EDFM grids are imposed, with 40 fracture elements. The EDFM errors are plotted against Pore Volume Injected (PVI), which is a non-dimensional time measure. The pressure errors are shown to increase slightly twice, most likely because there are two pressure transient stages. Initially, the reservoir is hot and therefore the water viscosity is lower and the water density is higher, making the pressure gradient low. However, the injected water is cold, which increases the pressure gradient in areas close to the injection wells (and in turn decreases the pressure gradient farther from the injection wells). After the injected water gets farther into the reservoir, the pressure gradient close to the injection wells gets lower and the gradient farther in the reservoir gets higher, until a semi-steady state is reached. More specifically, the error is due to (1) the significant difference between the grid resolutions imposed by each method and (2) the error of the EDFM fracture model. Nevertheless, the two approaches are in good agreement. (Displaced captions: Fig. 11 shows the fine-scale reference temperature with 99 × 99 matrix and 85 fracture elements (a) and the multiscale approximate temperature solutions obtained using independent (b) and semi-dependent (c) coupling with 9 × 9 coarse matrix and 8 fracture grid cells at the first iteration stage before smoothing; the corresponding relative error norms (d and e) are e_T,2 = 0.0343 (independent) and e_T,2 = 0.0198 (semi-dependent), which reduce after one smoothing stage to 0.0146 and 0.0035, respectively. Fig. 13 shows the fracture geometry taken from outcrop data [39].)
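A direct transcription of this relative error measure, with placeholder numbers rather than the boundary fluxes of the paper:

```python
import numpy as np

def relative_l2_error(x_edfm, x_dns):
    """e_x = ||x_EDFM - x_DNS||_2 / ||x_DNS||_2 (requires a nonzero reference)."""
    x_edfm = np.asarray(x_edfm, dtype=float)
    x_dns = np.asarray(x_dns, dtype=float)
    return np.linalg.norm(x_edfm - x_dns) / np.linalg.norm(x_dns)

# Illustrative values only (e.g. boundary flow rates from EDFM and the DNS reference):
print(relative_l2_error([1.02, 0.98, 1.01], [1.00, 1.00, 1.00]))
```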
Test case 2: homogeneous reservoir with a diagonal fracture
A quarter of a five-spot test case is considered in a homogeneous reservoir with a diagonal fracture. The simulation parameters are shown in Table 2. EDFM imposes 85 fracture and 99 × 99 matrix elements. The geometry of the fracture within the reservoir is shown in Fig. 7. The multiscale simulator imposes 9 × 9 coarse grids for matrix and 8 for fractures with two different coupling strategies for basis function calculation. Figures 8 and 9 show the converged solution of both fine scale reference as well as multiscale pressure and temperature. The white lines shown in the plots are the primal coarse cell boundaries. The relative error norms of the multiscale solutions are e p 2 = 2.65 × 10 −5 and e T 2 = 1.62 × 10 −5 (independent coupling), and e p 2 = 2.18 × 10 −5 and e T 2 = 1.58 × 10 −5 (semidependent coupling). It is shown that both independent and semi-dependent coupling strategies result in very good results.
The multiscale pressure and temperature solutions at the first iteration (before smoothing) are also presented in Figs. 10 and 11, respectively, to show that the multiscale method provides very good approximations even with no second-stage smoother nor any other (inner and outer) iterations. These results are also compared to the reference fine-scale solutions, demonstrating the accuracy of the developed multiscale formulation.
Independent coupling for the pressure basis-function calculation results in a slightly higher error at the fracture tips, where, as expected, the interaction between the matrix and fracture domains is relatively high. Note that the temperature field experiences a rapid change at the location of the fracture, due to the rapid transport of cold water through the fractures. As such, a significant temperature contrast is created fairly quickly throughout the reservoir, leading to a strongly nonlinear time-dependent solution field. In the area near the midpoint of the fracture, the semi-dependent coupling provides a slightly lower error for temperature. This can be explained by the fact that in this area the pressure difference between matrix and fracture is lower and, therefore, the heat exchange is more conduction dominated. Since the matrix-fracture coupling in the semi-dependent approach for temperature is formulated based on conduction, it leads to a better approximation in this area. This is clear from the results presented in Fig. 11. Nevertheless, as shown, the multiscale method can represent the complex solution field accurately, even with no smoothing stage. At the first iteration stage, the relative error norms of the multiscale solution obtained before smoothing are e_p,2 = 0.0081 and e_T,2 = 0.0343 (independent coupling), and e_p,2 = 0.0016 and e_T,2 = 0.0198 (semi-dependent coupling). After 1 stage of smoothing, the errors are reduced to e_p,2 = 0.0076 and e_T,2 = 0.0146 (independent coupling) and e_p,2 = 0.0008 and e_T,2 = 0.0035 (semi-dependent coupling). The results are summarized in Table 3.
The semi-dependent coupling strategy leads to lower errors compared with the independent coupling, especially in the area surrounding the fracture. However, it does not bring much improvement: the errors obtained using independent coupling are not significant and can be resolved with a few smoothing steps and iterations.
Test case 3: fracture geometry from outcrop data
A quarter of a five spot test case is considered in a heterogeneous reservoir with dense and complex fracture networks taken (by applied geologists of TU Delft) from outcrop data in Brazil [39]. The base-10 logarithm of permeability and average thermal conductivity are plotted in Fig. 12, and the simulation parameters are shown in Table 4. EDFM generates 3860 fracture and 100 × 100 matrix elements. The geometry of the fractures within the reservoir is shown in Fig. 13. The multiscale simulator imposes 10 × 10 coarse grids for matrix and 386 for fractures. For this test case, all the results presented are using independent coupling method.
As shown in test case 2, the decoupled approach approximates the solution really well. Therefore, for this test case, only the results obtained using the decoupled approach are shown for conciseness. Figures 14 and 15 show the converged solution of both fine scale reference as well as multiscale pressure and temperature. The relative error norms of the multiscale solution obtained are e p 2 = 8.22 × 10 −6 and e T 2 = 7.07 × 10 −6 .
To make the study more suitable for real-field applications and to further validate the multiscale method for this complex test case, the mass and enthalpy production rates at the production well obtained with the fine-scale and multiscale simulations are compared in Fig. 16.
In the plot, it is shown that both multiscale and the fine scale mass and enthalpy production rate have a very good match, with e p 2 = 0.028 and e T 2 = 0.036. The error of the enthalpy production rate is slightly higher due to the fact that the enthalpy production rate is calculated based on the mass production rate, and therefore the error from the mass production rate calculation is propagated. Nevertheless, the error is still relatively low and therefore acceptable. The multiscale solutions for both pressure and temperature at the first iteration stage before smoothing, along with the fine scale reference solutions for comparison, are shown in Figs. 17 and 18. The corresponding relative error norms before smoothing are e p 2 = 0.0111 and e T 2 = 0.0467, which are relatively low for a complex model. After smoothing, the errors are further reduced to e p 2 = 0.0110 and e T 2 = 0.0180. The results of this test case are summarized in Table 5. The error reduction in pressure after the smoothing stage is not significant, while error reduction in temperature is more drastic because of the higher complexity of the heat transfer equation and the simplicity of the basis function used to calculate it. It is also clear that the temperature distribution in Fig. 18 (without smoothing) already captures the preferential flow path of the fluid. From the results obtained, it can be concluded that independent coupling method gives reasonably good approximations, even before the smoothing stage for a heterogeneous system and very dense fracture networks.
Conclusion
In this work, a multiscale method for coupled single-phase flow-heat equations in fractured reservoirs was developed. The method avoids excessive upscaling of the parameters, and honours fine-scale heterogeneity in construction of the coarse-scale system for both flow and heat equations. This is achieved by formulating flow and temperature basis functions, allowing for the accurate map between fine and coarse scale solutions. The coupling between the equations was treated by a sequential implicit framework, where both pressure and temperature systems were solved by a MSFV method. The multiscale formulation was enriched due to the presence of the fractures, with two coupling approaches for local basis functions of each solver. An EDFM approach was adapted to the framework, which allows for independent grids for matrix and fractures. This further facilitated the convenient multiscale formulation and implementation, as totally independent coarse grids were also imposed on matrix and fractures. Test cases were performed first to validate the implementation of the simulator (via comparing its results with a DNS approach), and then to systematically assess the performance of the multiscale method for heterogeneous and highly fractured media. A fracture formation from a real-field outcrop was also considered to illustrate the capacity of the algorithm in addressing complex fracture networks.
Although we employed the MSFV iterations to reach convergence in our sequential implicit framework, one can stop the iterations before convergence is reached. Especially, as shown in the results, the initial multiscale approximate solutions in the presence of heterogeneity and fractures were close to the fine-scale solution at the same stage of iteration. The tolerance to stop iterations of a conservative multiscale solver needs to be defined based on the influence of the solution on the overall accuracy of the coupled solutions, the stability of the time-dependent solutions, and the uncertainty within the parameters. Similar to previous studies for coupled flow and transport [40], such a study is needed for coupled P-T as future work. Especially in the presence of strong coupling, one may consider formulating a multiscale methodology for fully implicit systems [41,42].
As for the multiscale basis functions, to exploit the maximum efficiency, the temperature basis function was formulated based on the elliptic part of the energy conservation equation (i.e., the conduction term). Numerical results showed that such an approach is well suited for the considered single-phase fluid-dynamic system, i.e., it leads to accurate results even without smoothing stage.
In this work, a robust approach for solving the coupled pressure and temperature equations in fractured heterogeneous reservoirs was developed. The results presented show a promising framework for further developments towards field-scale enhanced geothermal systems. Future developments need to consider multiphase (including steam) effects for the fluid and geo-mechanical effects (including fracture activation, closure, and propagation) for the solid rock.
Moreover, the empirical parameters n_i are shown in Table 6. The combination of u_{ws} = 420 kJ/kg, C_{pw} = 4.2 kJ/(kg K), and T_s = 373 K was found to provide the best-fitting values for the internal energy calculation. More precisely, compared with the data, the density relative error norms were below 1% in most regions and 2.2% near the critical point. Similarly, the internal energy errors were less than 6%. The coordinates of the fractures of test case 3 are not tabulated, as they are taken from outcrop data [39] and the number of fractures in the model is high. The fractures are defined by two points, A and B, whose x and y coordinates are given in the tables.
B.1 Fracture coordinates for test case 1
The fracture coordinates for test case 1 are listed in Table 7.
B.2 Fracture coordinates for test case 2
The fracture coordinates for test case 2 are listed in Table 8.
Entropy of rigid k-mers on a square lattice
Using the transfer matrix technique, we estimate the entropy of a gas of rods of size k (named k-mers) which cover a square lattice completely. Our calculations were made considering three different constructions, using periodical and helical boundary conditions. One of those constructions, which we call the Profile Method, was based on the calculations performed by Dhar and Rajesh (Phys. Rev. E 103, 042130 (2021)) to obtain a lower limit to the entropy of very large chains placed on the square lattice. This method, as far as we know, was never used before to define the transfer matrix, but turned out to be very useful, since it produces matrices with smaller dimensions than those obtained by other approaches. Our results were obtained for chain sizes ranging from k=2 to k=10 and are compared with other results already available in the literature, as is the case for dimers (k=2), which is the only exactly solvable case, trimers (k=3), recently investigated by Ghosh, Dhar, and Jacobsen (Phys. Rev. E 75, 011115 (2007)), and simulational estimates obtained by Pasinetti et al. (Physical Review E 104, 054136 (2021)). Besides the entropy values themselves, our results are consistent with the asymptotic expression for the behavior of the entropy as a function of the size k proposed by Dhar and Rajesh for very large chains.
I. INTRODUCTION
In this paper we study a system of rigid rods formed by k consecutive monomers placed on the square lattice, which will be called k-mers, calculating the entropy of the system. This is a problem with a long history in statistical mechanics. The particular case k = 2 (dimers) in the full lattice limit, when all sites of the lattice are occupied by endpoints of rods, is one of the few exact solutions of interacting models obtained so far [1]. Another aspect of the thermodynamic behavior of long rod-like molecules was already anticipated by Onsager in the 40's: he argued that at high densities they should show orientational (nematic) order [2], due to the excluded volume interactions. In a seminal paper for the case of rods on the square lattice [3], Ghosh and Dhar found, using simulations, that for k ≥ 7 an isotropic phase appears at low density of rods, but as the density is increased a continuous transition to a nematic phase happens. Evidence was found that close to the full lattice limit the orientational order disappears at a density 1 − ρ_c ∼ k^{−2}. The presence of the nematic phase at intermediate densities of rods was proven rigorously [5]. Because simulations at high densities of rods are difficult, an alternative simulational procedure allowed for more precise results for the transition from the nematic to the high-density isotropic phase [4]. Recent results suggested this transition to be discontinuous [6].
Here we consider the estimation of the entropy of k-mers on the square lattice in the full lattice limit, for k ≥ 2. This has been discussed before in the literature. Baumgärtner [7] generated exact enumerations of rods for 2 ≤ k ≤ 12 on L × L square lattices, but did not attempt to extrapolate his results to the two-dimensional limit L → ∞. His interest was actually more focused on the question of whether the system is isotropic or nematic in this limit. Bawendi and Freed [8] used cluster expansions in the inverse of the coordination number of the lattice to improve on mean-field approximations. For dimers on the square lattice, their result is about 8% lower than the exact result [1], and there are indications that the differences are larger for increasing rod lengths k. A study of trimers (k = 3) on the square lattice using transfer matrix techniques similar to the ones we use here was undertaken by Ghosh, Dhar and Jacobsen [9] and has led to a rather precise estimate for the entropy. Computer simulations have also been useful in this field, and estimates for the entropy of k-mers on the square lattice were obtained by Pasinetti et al. [10] for 2 ≤ k ≤ 11, besides other statistical properties of the high-density phase of the system. Another analytic approximation to this problem may be found in the paper by Rodrigues, Stilck and Oliveira [11], where the solution of the problem of rods on the Bethe lattice for arbitrary density of rods [12] was carried out for a generalization of this lattice called the Husimi tree. These solutions on the central region of treelike lattices may be seen as improvements of mean-field approximations to the problem. Again there is evidence that the quality of the estimates decreases for increasing values of k: while the difference between the estimate for dimers and the exact value is of only 0.03%, it already grows to 3% when compared to the estimate for trimers presented in [9].
The approach we employ here to study the problem is to formulate it in terms of a transfer matrix, as was done for trimers in [9]. It consists in defining the problem on strips of finite width L, with periodic or helical boundary conditions in the finite transverse direction. The leading eigenvalue of the transfer matrix determines the entropy of the system, as will be discussed below. The values of the entropies for growing widths are then extrapolated to the two-dimensional limit L → ∞, generating estimates and confidence intervals for each case. For the case of periodic boundary conditions (pbc), besides using the conventional definition of the transfer matrix, in which L sites are added at each application of it, we used an alternative approach, inspired by the generating function formalism which was developed by Dhar and Rajesh in [13] to obtain a lower bound for the entropy of the system. This alternative procedure turned out to be more efficient for this problem than the conventional one, in the sense that the sizes of the transfer matrices were smaller, thus allowing us to solve the problem for larger widths L. For helical boundary conditions (hbc), only the conventional formulation of the transfer matrix was used.
Finally, we already mentioned that the possible orientational ordering of the rods in the full lattice limit was, for example, a point which motivated the exact enumerations in [7]. For dimers, it is known exactly that no orientational order exists [14], but on the square and hexagonal lattices, which are bipartite, orientational correlations decay with a power law [1], while there is no long-range orientational order on the triangular lattice in the same limit [15]. This point is also investigated numerically for trimers in [9], with compelling evidence that the dense phase in the full lattice limit is not only critical but conformally invariant. As already mentioned, so far all indications are that the high-density phase of the system is isotropic on the square lattice, possibly with orientational correlations decaying with a power law. This paper is organized as follows: the construction of the transfer matrices and the determination of the leading eigenvalues and the entropies are described in Section II. The numerical results for the entropies of the rods on strips, the extrapolation procedure, and the estimates for the entropy of the rods on the square lattice may be found in Section III. Final discussions and the conclusion are found in Section IV.
II. TRANSFER MATRIX, LEADING EIGENVALUES AND ENTROPY
The transfer matrix is determined by the approach used to describe the transverse configurations of the strips at different levels, which define the states associated with the lines and columns of the matrix. The two approaches we used are described below. We consider a lattice in the (x, y) plane, with 1 ≤ x ≤ L and 0 ≤ y ≤ ∞. Periodical or helical boundary conditions are used in the transverse direction, that is, horizontal bonds are placed between sites (L, y) and (1, y) in the first case and between (L, y) and (1, y + 1) in the second case. Fixed boundary conditions are used in the longitudinal direction. For periodic boundary conditions, we use two approaches in order to obtain the transfer matrix, which we call the Usual Approach and the Profile Method. Those two approaches, although defining the states of the matrix in different ways, will of course produce exactly the same results for the entropy per site of chains of length k placed on a lattice of width L. For helical boundary conditions, only the Usual Approach is used. In the following, we shall describe each of those approaches.
A. Periodical Boundary Conditions
The definition of the transfer matrix for this problem will be done in two different ways: in the Usual Approach, at each application of the transfer matrix L new sites are incorporated into the lattice, while in the second approach a variable number of k-mers is added to the system at each step, so that the ensemble is grand-canonical in this case.
Usual Approach
This way of building the transfer matrix is the same used by Ghosh et al. [9] (named the "Second Construction") to study the case of trimers (k = 3). It was also applied in previous works by two of the authors, as is the case in [16,17]. As mentioned in those papers, this method is inspired by the work of Derrida [18], who applied it to the problem of an infinite chain placed on a cylinder.
The states which define the transfer matrix in this formulation are determined by the possible configurations of the set of L vertical lattice edges cut by a horizontal reference line located between two rows of horizontal edges of the lattice, such as the dashed lines R1 and R2 in Fig. 1. These states may be represented by a vector, where each component corresponds to the number of monomers already connected to the corresponding edge, i.e., those located on sites below the reference line. Thus, the components are restricted to the domain [0, k − 1]. From the information given by this vector, we can find all possible configurations for the vertical edges cut by the reference line situated one lattice spacing above, allowing us to define the transfer matrix of the problem. An illustration of possible configurations and their representative vectors for pbc can be observed in Fig. 1, where we have a state defined for the case k = 3 and L = 4. At the reference line R1, separating the levels y_{n−1} and y_n, we have the vector |v_1⟩ = (0, 0, 2, 0), while at R2, linking the levels y_n and y_{n+1}, the configuration is represented by |v_2⟩ = (1, 1, 0, 1). We proceed by developing an algorithm to obtain, exactly, the elements of the transfer matrix for given values of k and L. However, we are limited by the amount of computational memory and/or by the time necessary to compute those elements. For a given value of k, the number of states grows roughly exponentially with L. Even considering rotation symmetry, which makes states such as |v_1⟩ = (0, 1, 0) and |v_2⟩ = (0, 0, 1) equivalent, and reflection symmetry, where |v_1⟩ = (0, 1, 2, 3) and |v_2⟩ = (3, 2, 1, 0) can be treated as the same state, this growth imposes an upper limit on the widths that we are able to study for each rod size k.
In principle, without considering the reduction of the size of the transfer matrix due to symmetries, one would suppose that this size would be equal to k^L, but the transfer matrix is actually block diagonal, each state being associated with one of the blocks. It happens that the leading eigenvalue always belongs to the block generated from the state |v_0⟩ = (0, 0, . . . , 0). So, instead of determining the entire transfer matrix, we proceed using the same strategy developed by Ghosh et al. [9] for trimers, generating the subset of states which starts from the state |v_0⟩ and counting all other states connected to it.
Once we compute the transfer matrix T, the dimensionless entropy per lattice site for a monodisperse gas of rigid chains of size k on a strip of width L is s_L(k) = ln Ω / N, where N = Lℓ is the number of sites and Ω is the number of configurations of the rods of size k placed on the strip. This number is related to the transfer matrix by Ω = Tr(T^ℓ), and if λ_1 is the largest eigenvalue of T, we get, in the thermodynamic limit ℓ → ∞, s_L(k) = (ln λ_1)/L. So, to obtain the entropy for a given width L, we should determine the largest eigenvalue of the transfer matrix. Fortunately, the typical transfer matrix is always very sparse, which allows us to use a method such as the Power Method, so that the determination of this eigenvalue becomes a feasible task for quite large widths.
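The generic pipeline of the Usual Approach, once a transfer matrix is available, is sketched below in Python: obtain the leading eigenvalue λ_1 by the power method and return s_L = ln(λ_1)/L. The small matrix used here is an arbitrary nonnegative example to exercise the code; it is not the actual k-mer transfer matrix of this work.

```python
import numpy as np

def leading_eigenvalue(T, tol=1e-13, max_iter=100000):
    """Power method for the dominant eigenvalue of a nonnegative matrix T."""
    v = np.ones(T.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        w = T @ v
        lam = np.linalg.norm(w)
        v = w / lam
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return v @ (T @ v) / (v @ v)     # Rayleigh-quotient estimate of lambda_1

def entropy_per_site(T, L):
    """s_L = ln(lambda_1) / L for a strip of width L."""
    return np.log(leading_eigenvalue(T)) / L

T_example = np.array([[1.0, 1.0, 0.0],
                      [1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0]])   # toy matrix, dominant eigenvalue 2
print(entropy_per_site(T_example, L=3))   # ln(2)/3
```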
Profile Method
This alternative method of defining the transfer matrix is inspired by the generating function approach used by D. Dhar and R. Rajesh to obtain lower bounds for the entropy of k-mers in the full lattice limit [13]. It is convenient in this case to consider the dual square lattice, whose elementary squares are centered on the sites of the lattice of the previous section, and to represent the k-mers as k × 1 rectangles on this lattice. Unlike the Usual Approach, where L sites are added at each multiplication by the transfer matrix, in this method a variable number of k-mers is added at each step. We consider the profile of the upper end of the stripe at a particular point in filling it up with rods, such as the one shown in Fig. 2, as defining the states used to build the transfer matrix. For a particular profile, we define the baseline as the horizontal line passing through the lowest points of the profile. We then consider the operation of adding rods to all points on the baseline, so that a new baseline is generated at a level at least one lattice parameter higher than the previous one. There may be more than one way to accomplish this, involving different numbers of added rods. We denote by z the fugacity of one rod, so that the contribution to the element of the transfer matrix corresponding to a particular choice of new rods added to the stripe will be z^{n_r}, where n_r is the number of new rods added. Notice that no k-mer will be added which does not have at least one monomer located on the baseline. The profiles, which define the states of the transfer matrix, may be represented by a vector with L integer components, ranging between 0 (the intervals on the baseline) and k − 1. Thus, in general, there will be k^L possible states. However, as mentioned before, this general transfer matrix is block diagonal, and as was done before for the case of trimers [9] we will restrict ourselves to the subset of states which includes the horizontal profile (0, 0, . . . , 0), since in all cases where we were able to consider all profiles the leading eigenvalue of the transfer matrix was found in this block.
FIG. 2: Illustration of one step of the process of filling the stripe of width L = 7 with trimers (k = 3). The initial profile is the thick black line and its baseline is at the level indicated by the black arrow. The height profile in this case is (0, 0, 0, 0, 1, 1, 0); notice that there are two steps (5 and 6, from left to right) which are at the same height in both profiles. One possibility is to aggregate one horizontal rod (red online) and two vertical rods (yellow online). The new baseline is indicated by the blue arrow and the new profile is represented by (0, 0, 2, 2, 0, 0, 0). The contribution of this configuration is z^3.

In the grand-canonical ensemble we are considering, let M be the number of times the transfer matrix is applied. In the thermodynamic limit M → ∞ the partition
function Y_M(z) will be determined by the leading eigenvalue λ_1 of the transfer matrix, Y_M(z) ≈ λ_1^M, so that the grand-canonical potential will be Φ = −k_B T ln Y_M ≈ −k_B T M ln λ_1(z) (3), where z = e^{βμ}, μ being the chemical potential of a rod. The entropy will be given by the state equation S = −(∂Φ/∂T)_{V,μ} (4) and the total number of rods will be N_r = −(∂Φ/∂μ)_{T,V} (5). The dimensionless entropy per lattice site occupied by rods will then be s_L(k) = S/(k_B k N_r) = [M ln λ_1(z) − N_r ln z]/(k N_r) (6). In the grand-canonical ensemble, the remaining extensive variable of the potential is usually the volume. The number of rods is different in the configurations which contribute to the partition function, and by construction they occupy the lower part of the lattice in a compact way. For simplicity, let us consider widths L that are multiples of k. We then see that, for a given value of M, the height H of the region occupied by the rods will be in the range [M, kM], so that the volume should be at least equal to L × kM. Actually, it could be fixed at any value above this one without changing the results. This means that this condensed phase of k-mers actually coexists with the part of the lattice which is empty, and since the grand-canonical potential of the coexisting phases should be equal we conclude that Φ(T, V, μ) = 0, because this is the potential of the phase which corresponds to the empty lattice. In other words, we recall that the grand-canonical potential is proportional to the pressure (force per unit length in the two-dimensional case), which should be the same in the coexisting phases. This condition of coexistence determines the activity of a rod, λ_1(z_c) = 1 (7), and substitution of this restriction into Eq. 6 leads to the final result for the entropy per site occupied by the rods in this formulation of the transfer matrix, s_L(k) = −(ln z_c)/k (8). In summary, in the formulation where the states are determined by the height profile of the k-mers in the stripe, we solve Eq. 7 numerically for the activity z_c which corresponds to a vanishing pressure of the condensed phase of rods and then determine the entropy per site of this phase using Eq. 8. It is then interesting to consider explicitly the simplest non-trivial case, which is L = k. Starting with the horizontal profile, we notice that for L = k there are two possibilities to add a new set of rods and shift the baseline upwards: either a single horizontal rod or k vertical rods are added, and the new profile is again horizontal in both cases. Due to the periodic boundary conditions, in the first case there are L = k different ways to place the horizontal rod. We thus conclude that there is a single profile state in this case and the size of the transfer matrix is 1 × 1, so that λ_1(z) = z^k + k z (9). We see then that z_c is defined by the equation z_c^k + k z_c − 1 = 0 and the entropy per site will be given by Eq. 8.
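The L = k case can be checked numerically in a few lines. The sketch below assumes the relations λ_1(z) = z^k + kz and s_L(k) = −(ln z_c)/k as written above (Eqs. 7–9) and uses scipy's bracketing root finder for the coexistence condition.

```python
import numpy as np
from scipy.optimize import brentq

def entropy_L_equals_k(k):
    """Profile-Method strip entropy for the simplest case L = k (single profile state)."""
    # Coexistence condition lambda_1(z_c) = 1  <=>  z_c**k + k*z_c - 1 = 0, root in (0, 1).
    z_c = brentq(lambda z: z**k + k * z - 1.0, 0.0, 1.0)
    return -np.log(z_c) / k      # entropy per occupied site, Eq. 8 as reconstructed above

for k in range(2, 11):
    print(k, entropy_L_equals_k(k))
```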
We proceeded using both methods described above to calculate the entropy for a set of rod sizes k and growing widths L. To reduce the size of the transfer matrices, we use rotational and reflection symmetries of the states. Since we want to obtain estimates for the entropies per site in the two-dimensional limit L → ∞, it is important to reach the largest possible widths L for each rod size k. It should be noticed that in the Profile Method we have to solve numerically Eq. 7 for z c , so that the leading eigenvalue λ 1 has to be calculated several times, while in the conventional method only one determination of the leading eigenvalue is needed. This seems to indicate that the Usual Approach should allow us to reach the largest widths. However, if we compare the numbers of states (size of the transfer matrices) in both methods, we obtain the results shown in Fig. 3. We notice that the transfer matrices are systematically larger for the Usual Approach, the difference increasing monotonically with the rod size. Therefore, at the end the Profile Method allowed us to reach the largest widths in all cases, which are determined by the limitations in time and memory of the computational resources available to us.
B. Helical boundary conditions
An alternative way to define the boundary conditions of stripes of width L is to make them helical. This was already used by Kramers and Wannier in their seminal paper on the Ising model [19]. To visualize these boundary conditions, if we consider the model on a cylinder with a perimeter of size L, the transverse lattice edges lie on a helix with pitch L, as seen in Fig. 4. The states are defined, as in the Usual Approach, by the number of monomers already incorporated into the rods on the L + 1 edges cut by a line which divides the stripe into two sectors. In the Usual Approach, this line, as may be seen in Fig. 1, is horizontal and cuts L edges, while for helical boundary conditions it runs parallel to the transverse edges for L steps, ending with a vertical step. This is illustrated by the dashed line in Fig. 4: at a given step the line starts at point A, cuts L vertical edges and finally cuts an additional transverse edge. All sites below the curve are occupied by monomers. While for periodic boundary conditions L lattice sites are added to the system as the transfer matrix is applied (the sites between lines R1 and R2 in Fig. 1), for helical boundary conditions a single site is added at each application. An advantage of this construction, as compared to the periodic one, is that only one or two elements of each line of the transfer matrix are equal to 1, all others vanishing, so in general it leads to sparser transfer matrices, which of course is desirable if we use the Power Method to calculate the leading eigenvalue. The drawback is that the reflection and rotation symmetries are not present in this case.
III. NUMERICAL RESULTS
In this section, we discuss the numerical results obtained from the three approaches used to determine the transfer matrix for the case of a monodisperse gas of rigid chains with size k, filling a stripe of width L with periodical and helical boundary conditions.
Besides presenting the values of the entropy for each case, we also discuss the question of the transfer matrix dimension, which turns out to be the major obstacle in obtaining the entropy for a given (k, L) pair. Also, after collecting the entropies we have to deal with the task of extrapolating them, from a set {s_L(k)}, to an estimate s_∞(k) valid in the two-dimensional limit, i.e., where L → ∞. For that, we recall that for critical two-dimensional isotropic statistical systems with only short-range interactions, conformal invariance predicts that, on a cylinder of width L, the entropy per site must obey the relation s_L(k) = s_∞(k) + A/L² (10), where A is related to the central charge. Using the methods previously described to build the transfer matrix, we could determine the entropy of a given size of rods k for different values of L, limited by the amount of computer memory required in each case and/or by the processing time. Once we have obtained the elements of the matrix, the calculation of its dominant eigenvalue was carried out using the Power Method applied to the translated matrix T′ = T + pI, where p is a positive real number and I is the identity matrix. Such a procedure was necessary because the original matrix T usually has a set of dominant eigenvalues which, despite always including at least one eigenvalue on the real axis, contains others with the same modulus elsewhere in the complex plane. This feature prevents the Power Method from working properly. However, using this translation we can shift all the eigenvalues along the real axis, making the positive real one the only dominant eigenvalue, λ′, of the matrix T′. Then, to recover the value we are looking for, λ, it suffices to consider λ = λ′ − p.
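The shift-and-subtract procedure is easy to sketch; the toy matrix below (eigenvalues {2, −2, 1}) has degenerate dominant moduli, so the plain power method would not converge, while the shifted version recovers the positive real eigenvalue. The matrix is only an illustration, not a k-mer transfer matrix.

```python
import numpy as np

def shifted_power_method(T, p, tol=1e-13, max_iter=1000000):
    """Power method applied to T' = T + p*I; returns lambda = lambda' - p."""
    Tp = T + p * np.eye(T.shape[0])
    v = np.random.default_rng(0).random(T.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        w = Tp @ v
        lam = np.linalg.norm(w)
        v = w / lam
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    lam_prime = v @ (Tp @ v) / (v @ v)   # dominant eigenvalue of the shifted matrix
    return lam_prime - p                 # shift back to the eigenvalue of T

T = np.diag([2.0, -2.0, 1.0])            # degenerate dominant moduli |2| = |-2|
print(shifted_power_method(T, p=5.0))    # ~2.0
```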
The choice of the parameter p may be a sensitive issue in order to get the right results for the dominant eigenvalue. We adopted the strategy of fixing this parameter by maximizing the ratio between the real positive eigenvalue and the second largest modulus. However, in the cases we verified here, even spanning the values of p over a large interval, such as [1 : 100], only minor differences among the results (≈ 10^{−14}) appear. In fact, the only noticeable effect caused by changing the size of this translation is observed in the number of steps needed for the Power Method to converge with a given precision (in our case this precision is about 10^{−13}). For growing values of p the number of steps increases roughly linearly.
Just as happens for trimers [9], for every other k-mer the entropy values follow the relation Eq. 10 in separate sets, depending on the remainder, R, of the division L/k. Hence, while for trimers we have three sets (values with remainders 0, 1, and 2), in the other cases there will be k sets of entropy values obeying the 1/L² relation, as we can see in Fig. 5(a) for the case k = 4, calculated with periodical boundary conditions. Such behavior obviously poses an extra difficulty in obtaining from each set a good extrapolation for the entropy in the thermodynamic limit, when L → ∞.
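Within one remainder class, the finite-size data of Eq. 10 can be fitted by a simple least-squares regression in 1/L², as sketched below. The (L, s_L) pairs are synthetic placeholders, not data from this work; with real strip entropies the intercept estimates s_∞ for that class.

```python
import numpy as np

def fit_1_over_L2(Ls, sLs):
    """Least-squares fit of s_L = s_inf + A / L**2; returns (s_inf, A)."""
    Ls = np.asarray(Ls, dtype=float)
    sLs = np.asarray(sLs, dtype=float)
    X = np.column_stack([np.ones_like(Ls), 1.0 / Ls**2])
    coeffs, *_ = np.linalg.lstsq(X, sLs, rcond=None)
    return coeffs[0], coeffs[1]

Ls = np.array([4, 8, 12, 16])        # widths sharing the same remainder R (toy values)
sLs = 0.29 + 0.8 / Ls**2             # synthetic data following Eq. 10
print(fit_1_over_L2(Ls, sLs))        # recovers (0.29, 0.8)
```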
Now we start to discuss the results obtained from each of the approaches presented in the previous section, considering its peculiarities and the limitations of each of them concerning widths which could be reached.
A. Periodic boundary conditions
For these boundary conditions, we applied the Usual Approach and the Profile Method. As already mentioned, the Profile Method turns out to be more effective for larger values of k and L. We will thus restrict ourselves to presenting the results furnished by that method, after remarking that we have verified that for trimers the numbers of states we have obtained using the Usual Approach are equal to the ones in reference [9] obtained from the second construction. Of course, as already mentioned, both approaches lead to the same values for the entropies. The dimension of the transfer matrix, for a given value of k, grows nearly exponentially as a function of L, as we can see in figure 1(b), considering the behavior of each set of values with a given remainder R. Thus, for a high value of k, the number of elements of the set {s_L} cannot be as big as it is when we consider smaller chains. The dimensions reached in our calculations using the Profile Method, for each set of remainder R, are listed in Table I.
Using the entropy values in each set, we can obtain an extrapolated result for s_∞(k). This was done using the approach known as the BST extrapolation method [21]. Since this method can be functional even in situations where the number of entries to extrapolate is not that large, it is convenient to use it in our problem. As described in reference [21], the BST method has a parameter ω, which in our case should be set to ω = 2, due to the relation Eq. 10. Also, because the desired limit, s_∞, is obtained from a table of extrapolants T^{(i)}_m, where m is related to the extrapolant generation, the error of the estimate is defined from the convergence of the extrapolants when m → ∞. In practical terms, this limit is applied considering the difference between the last two approximants obtained. So, using the BST extrapolation method we were able to obtain the values shown in Table II for each set associated with the remainder of the ratio L/k. To finally get a value s̄_∞(k), representing the extrapolation over all sets considered, we calculate an average and a total error weighted by the errors of each value s_i obtained for a given remainder R. Once we consider the values s_i(k) statistically independent of each other, the average and its deviation obey the relations s̄_∞(k) = [Σ_i s_i/σ_i²]/[Σ_i 1/σ_i²] (12) and σ̄ = [Σ_i 1/σ_i²]^{−1/2} (13), where s_i is the extrapolated value of the entropy for the set of a given remainder R, while σ_i is the error related to it, obtained from the BST procedure described above.
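The inverse-variance weighted combination of the per-remainder estimates can be written directly from Eqs. 12–13 as reconstructed above. The numbers below are placeholders, not the values of Tables II or V.

```python
import numpy as np

def weighted_average(s, sigma):
    """Combine independent estimates s_i with errors sigma_i (Eqs. 12-13 as given above)."""
    s = np.asarray(s, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = 1.0 / sigma**2
    s_bar = np.sum(w * s) / np.sum(w)        # weighted mean
    sigma_bar = 1.0 / np.sqrt(np.sum(w))     # combined error
    return s_bar, sigma_bar

s_i = [0.15853, 0.15851, 0.15850]     # toy per-remainder extrapolations
sig_i = [2e-5, 1e-5, 3e-5]            # toy BST error estimates
print(weighted_average(s_i, sig_i))
```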
In figure 6 we can see how the entropy globally behaves as a function of the chain size k. First of all, such values are constrained between two limits. A lower bound, obtained by Dhar and Rajesh [13], considering a lattice with dimensions 2k × ∞ and k ≫ 1. The upper bound was calculated by Gagunashvili and Priezzhev [22], being expressed by the equation, where γ = exp(4G/π)/2, with G being the already mentioned Catalan's constant. Notice that this upper limit coincides with the exact value of the dimer entropy on the square lattice when k = 2.
We can also observe that as k grows, the behavior for s(k) has a tendency to approach that one predicted by Dhar and Rajesh [13], s = ln k k 2 , for the case of very large chains. Actually, beyond k = 5 the difference between our values and the asymptotic prediction differ less than 3%. [13] and [22](Eqs. 14 and 15, respectively). The dashed-dotted line follows the behavior predicted by Dhar and Rajesh [13] when k → ∞, i.e, s = ln k/k 2 .
B. Helical boundary conditions
The transfer matrix obtained through this approach displays a larger number of states than that obtained considering periodical boundary conditions. A comparison between those numbers can be seen in Fig. 7, where, besides noting the exponential dependence of the number of states on the width L for a given value of k, already seen in the pbc case, we also perceive that these numbers can be almost 1000 times bigger when the matrix is calculated considering helical boundary conditions. In part this drawback is compensated by the fact that for helical boundary conditions the transfer matrix is much sparser than in the case of periodic boundary conditions, as already mentioned, but this also has the effect that the number of iterations needed in the Power Method to reach a selected convergence is larger for helical boundary conditions. Evidently, because of that, the largest value of L attained for each chain size is smaller than those reached in the pbc calculations. Then, since the eigenvalues, as also happens in the periodical case, are arranged in sets of sizes sharing the same remainder of the division L/k, the number of elements in each set is smaller when compared with those shown in Table I. For this condition those numbers are presented in Table IV.
Another similarity between the two approaches concerns the disposition of the leading eigenvalues of the transfer matrix. Just as happens for periodical boundary conditions, the largest eigenvalue is degenerate in modulus on the complex plane, with at least one of the eigenvalues located on the real axis. Again, because in this case the transfer matrix is even sparser than the ones obtained for periodical boundary conditions, we have used the Power Method in order to get this leading eigenvalue. As already mentioned in the previous discussion for the pbc case, to circumvent this degeneracy, which puts the Power Method in jeopardy, we diagonalize a transformed matrix T′, translating all the diagonal elements of the original matrix T by a real number p, as illustrated in Fig. 8. Doing so, we produce a leading eigenvalue free from any degeneracy, and we can recover the value we are looking for by simply subtracting p from the largest eigenvalue of T′. However, unlike the pbc case, the choice of p in this situation can be a sensitive issue. Also, we noticed that the estimates for the leading eigenvalue along the iterations show a pattern with oscillations of period about 2kL and slowly decreasing amplitude. This behavior is distinct from what happens for pbc, where the convergence, after a short transient, is usually monotonic. Therefore, great care has to be taken in establishing the condition for numerical convergence.
Comparing the numbers shown in Tables II and V, we see that, in general, the uncertainties for the pbc case are smaller. This is actually an expected result, since the helical boundary conditions offer a less favorable scenario concerning how the number of states grows with L. Because of that, and because our extrapolation must separate the values into sets according to the remainder of the division L/k, the number of elements in each set is smaller than for the pbc calculations. Hence, the extrapolation has a tendency to be less accurate for hbc. On the other hand, if we examine the extrapolated values shown in Table V, we can see that the fluctuations among them are clearly less pronounced than the differences among the values for different sets in the pbc case. The origin of this behavior is the way each set approaches the asymptotic result, s_∞. While for periodical boundary conditions, as we can see from Fig. 5(a), this relaxation can be quite slow, particularly for the values belonging to the set with remainder R = 0, the same does not occur with helical boundary conditions. Thus, even though this condition is somewhat handicapped by the smaller values of L attained, the extrapolation is not completely compromised, since the values are closer to their asymptotic limit. Given the results shown in Table V, we proceed to the final values for the entropy per site, considering rigid chains of size k placed on the sites of a square lattice, using Eq. 13. The final results for these entropies are shown in Table VI. We do not have complete agreement between the results shown in Tables III and VI. Although they are close enough, the smaller range of widths L that we were able to compute for the hbc case makes the extrapolations less accurate in this case, although the estimates still seem to be better than the ones calculated from the simulational analysis [10].
IV. FINAL DISCUSSION AND CONCLUSION
In the present work, we have dealt with the problem of determining the configurational entropy of collinear chains of size k, named k-mers, fully covering a square lattice. To do so, we have employed transfer matrix calculations using three different constructions. Two of them were employed for periodic boundary conditions: the so-called Usual Approach, already used by Ghosh et al. [9] to obtain the entropy of trimers (k = 3), and the Profile Method, based on the calculation developed by Dhar and Rajesh [13] to estimate a lower bound for the entropy as a function of the chain size k, considering k ≫ 1. To our knowledge, this second approach had never been used in the transfer matrix method, and it has been useful for this problem. Since we seek to determine the entropy at full coverage in the thermodynamic limit from the results obtained for k-mers placed on stripes of finite width L, our results tend to be better when we reach large values of L. The Profile Method, in the majority of cases, produces transfer matrices with smaller dimensions than those obtained via the Usual Approach, allowing us to obtain better numerical results for the entropies. We notice that in the Usual Approach the entropy is directly related to the leading eigenvalue of the transfer matrix, while in the Profile Method, which is grand-canonical, it is necessary to find the value of the activity of a k-mer for which the leading eigenvalue has unitary modulus. So, while in the first approach we need to find the leading eigenvalue only once, in the second approach it is necessary to repeat this operation several times to reach the required numerical precision. Nevertheless, the Profile Method allowed us to reach larger widths. Another construction we applied was the Usual Approach with helical boundary conditions. Even though it is less effective in reaching large values of L, this approach tends to generate values closer to the asymptotic limits associated with the thermodynamic limit, although with greater uncertainties.
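The extra work required by the grand-canonical Profile Method can be made explicit with a short sketch: the activity z of a k-mer is adjusted by bisection until the leading eigenvalue of the transfer matrix has unit modulus. The function leading_eigenvalue(z) is a hypothetical stand-in for building the transfer matrix at activity z and running the power method, so every step of the bisection implies one full eigenvalue calculation.

```python
def activity_at_unit_eigenvalue(leading_eigenvalue, z_lo, z_hi, tol=1e-10):
    """Bisection on the activity z so that |lambda_max(z)| = 1.
    Assumes |lambda_max(z)| is increasing in z and brackets 1 on [z_lo, z_hi]."""
    while z_hi - z_lo > tol:
        z_mid = 0.5 * (z_lo + z_hi)
        if abs(leading_eigenvalue(z_mid)) < 1.0:
            z_lo = z_mid        # eigenvalue still too small: raise the activity
        else:
            z_hi = z_mid        # eigenvalue too large: lower the activity
    return 0.5 * (z_lo + z_hi)
```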
Although we have not presented detailed results on the convergence of the entropies on strips of finite width to the two-dimensional values, as was done for trimers in [9], it was clear that the scaling form of Eq. 10 is followed by our results, for both periodic and helical boundary conditions. This is an indication that the phase in the full lattice limit is critical and conformally invariant for periodic boundary conditions. We plan to come back to this point in the future.
Our results agree with previous results available in the literature, such as the case of dimers (k = 2), the only case that has been solved exactly and for which our result agrees up to the 11th decimal place, and also the trimer case, for which the entropy obtained here agrees with that estimated by Ghosh et al. in [9]. Another source of comparison are the simulational results obtained by Pasinetti et al. [10], which are also in complete agreement with our values, although they are less precise. We may also compare our results with recent estimates of the entropies for the same problem provided by a sequence of Husimi lattice closed-form approximations [11], which are numerically exact solutions on treelike lattices that may be considered beyond mean-field approximations. These results, for k in the range 2–6, in a similar way to ours, become less precise for growing values of k. While the relative differences between the present and the former estimates are of the order of 3% for k = 2, 3, they reach about 40% for the higher values of k. It is also noteworthy that the behavior of the entropies s with the size k seemingly obeys the relation predicted by Dhar and Rajesh [13], s ≈ ln k/k², when k → ∞. As mentioned previously, from k = 5 up to k = 10 our results differ from that expression by less than 3%.
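For reference, the Dhar–Rajesh asymptotic scale quoted above is easy to tabulate; the snippet below merely evaluates ln k / k² for chain sizes up to k = 10, as a sanity check against which the extrapolated entropies can be compared.

```python
import math

# asymptotic estimate s ~ ln(k) / k**2 for k-mers, evaluated for small k
for k in range(2, 11):
    print(k, math.log(k) / k ** 2)
```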
V. ACKNOWLEDGMENTS
This work used computational resources of the "Centro Nacional de Processamento de Alto Desempenho" in São Paulo (CENAPAD-SP, FAPESP). We also thank Rogerio Menezes for his aid with some other computational resources used in our calculations.
Phospholipidomic Analysis Reveals Changes in Sphingomyelin and Lysophosphatidylcholine Profiles in Plasma from Patients with Neuroborreliosis
In recent years, the number of patients suffering from Lyme Disease (LD) has significantly increased. The most dangerous manifestation of LD is neuroborreliosis associated with invasion of the central nervous system by Borrelia burgdorferi. Phospholipids (PL) and their metabolites are involved in inflammation, which plays a dominant, but still unclear, role in the pathogenesis of neuroborreliosis. We analyzed the plasma PL profiles of neuroborreliosis patients (n = 8) and healthy volunteers (n = 8) using a lipidomic approach. Significant increases in the lysophosphatidylcholines LysoPtdCho 16:0 and LysoPtdCho 18:2 were observed. The plasma of neuroborreliosis patients appeared to have an increased relative abundance of sphingomyelin CerPCho d18:1/24:1 and a decrease in CerPCho d18:0/18:0. Principal components analysis of the relative abundances of all PL class species distinguished between neuroborreliosis patients and healthy subjects. This is the first report comparing PL classes and their molecular species in neuroborreliosis patients and healthy subjects.
Introduction
Lyme disease is a human infection transmitted by ticks (Ixodidae) and caused by the spirochete Borrelia burgdorferi. A dramatic increase in the number of cases of Lyme disease has been reported in the past two decades in Europe (200,000 new cases/year) and the United States (15,000–20,000 new cases/year), and this number continues to rise. The most dangerous manifestation of Lyme disease is neuroborreliosis, which is associated with infection of the central nervous system. Previously, we showed enhanced phospholipid (PL) peroxidation and decreased phospholipase A2 (PLA2) activity, the main enzyme releasing peroxidation products, during neuroborreliosis [1]. Despite these findings, the pathogenesis of neuroborreliosis has still not been fully determined. However, there are indications that PL and their metabolites participate in the inflammatory response in Lyme disease [2,3]. It is known that some PL species such as lysophosphatidylcholines (LysoPtdCho), phosphatidylethanolamines (PtdEtn), phosphatidylcholines (PtdCho), phosphatidylinositols (PtdIns), and sphingolipids (CerPCho) are involved in the development of inflammatory diseases such as rheumatoid arthritis, pancreatic cancer, and ovarian cancer [4][5][6]. To date there are no published reports focusing on the profiles of the main PL species in plasma from patients with neuroborreliosis. These data might lead to the identification of altered metabolic pathways, and be useful for monitoring pharmacotherapy. Therefore, the aim of this study was to extend our knowledge of PL participation in the development of neuroborreliosis using a lipidomic approach.
Chemicals
All solvents used were of LC-MS grade. All chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA) and had greater than 95% purity. PL internal standards were purchased from Avanti Polar Lipids.
Biological Material
We obtained plasma samples from neuroborreliosis patients and healthy subjects collected in the Department of Infectious Diseases and Neuroinfections, Medical University of Bialystok (Poland). The samples were collected from eight patients with neuroborreliosis (three female and five male) with an average age of 48 years (range 21-83). The control group consisted of eight healthy subjects (three female and five male), with an average age of 47 years (range 22-72).
The diagnosis of neuroborreliosis was confirmed by epidemiological anamnesis. Fifty percent of the neuroborreliosis patients reported previous tick bites, clinical manifestations of Bannwarth's syndrome, lymphocytic meningitis with or without nerve paresis, and serological detection of anti-B. burgdorferi IgM and IgG antibodies in enzyme-linked immunosorbent assays (ELISA; Borrelia recombinant IgG and IgM High Sensitivity, Biomedica, Austria). In all cases, the ELISA results were confirmed by western blotting. Additionally, IgM and IgG immunoblot tests were performed (Virotech, Germany) to estimate intrathecal synthesis of antibodies in cerebrospinal fluid (CSF).
All neuroborreliosis patients had serum anti-B. burgdorferi antibodies, with mean titres for IgM and IgG of 24 ± 19 BBU/ml and 55 ± 23 BBU/ml, respectively. Three patients (37%) had intrathecal synthesis of IgM antibodies, five (62%) had intrathecal synthesis of IgG antibodies, and two patients (25%) had both in their CSF. Based on these criteria, neuroborreliosis was diagnosed as definite (clinical picture, lymphocytic pleocytosis in CSF, and intrathecal immunoglobulin synthesis) or as probable (clinical history and at least one of the following findings: lymphocytic pleocytosis in CSF, erythema migrans >5 cm in diameter, or prompt clinical response to antibiotic treatment). In all patients tick-borne encephalitis was excluded based on serological tests on serum and CSF.
The exclusion criteria for both groups were as follows: pregnancy, lack of written consent, or recent treatment with nonsteroidal anti-inflammatory drugs, steroids, or oral contraceptives. In the control group, there was no history of other diseases which could influence increased PL oxidation, e.g., arthritis of any etiology. Patients and healthy subjects with alcohol abuse and heavy smokers were also excluded from the study. The study had approval from the Local Bioethics Committee at the Medical University of Bialystok, and written informed consent was obtained from all patients.
Blood was collected from all participants into ethylenediaminetetraacetic acid tubes and centrifuged at 2000×g (4 °C) to obtain the plasma.
Lipid Extraction
Total lipids from all plasma samples were extracted using a modified Folch method [7]. In brief, 1.5 ml of ice-cold methanol was added to each 200 µl plasma sample and vortexed thoroughly. Then, 3 ml of chloroform was added, vortexed, and incubated on ice for 60 min. To induce phase separation, 1.25 ml ultra-pure Milli-Q water was added. After 10 min incubation on ice, samples were centrifuged at 2500×g for 10 min at room temperature to separate the aqueous top and organic bottom phases, and lipids were recovered from the organic phase.
PL Quantification and Separation of PL Classes by Thin Layer Chromatography (TLC)
Silica gel TLC plates, 20 × 20 cm (Merck, Darmstadt, Germany), were used to separate the PL classes. First, plates were treated with 2.3% boric acid in ethanol. Then, 20 µl of PL extract (20–30 µg) was spotted on the TLC plate and developed using a mixture of chloroform/ethanol/water/triethylamine 35:30:7:35 (v/v/v/v). PL spots were visualized by spraying with primuline (50 µg/100 ml in acetone:water, 80:20, v/v) and viewing under a UV lamp at λ = 254 nm [9]. Identification of the different PL classes was performed by comparison with PL standards applied to the same plate. The total amount of PL in the lipid extracts and in the spots after TLC separation was estimated according to Bartlett and Lewis [8]. The relative abundance (%) of each PL class was calculated by relating the amount of phosphorus in each spot to the total phosphorus in each plasma lipid extract.

HILIC-LC-MS Analysis

Mobile phase A consisted of 25% water, 50% acetonitrile, 25% (v/v) methanol with 10 mM ammonium acetate. Mobile phase B consisted of 60% acetonitrile, 40% methanol with 10 mM ammonium acetate. Total PL samples (20 µg) were diluted in mobile phase B and 5 µl of the mixture was introduced into an Ascentis Si HPLC Pore column, 15 cm × 1.0 mm, 3 µm (Sigma-Aldrich). The solvent gradient was programmed as follows: the gradient started with 0% A, increased linearly to 100% A over 20 min, was held isocratically for 35 min, and returned to the initial conditions over 5 min. The flow rate through the column was 40 µl/min. ESI Agilent Dual AJS ESI conditions were as follows: electrospray voltage, −3.0 kV; capillary temperature, 250 °C; sheath gas flow, 13 l/min. Parent scan spectra were acquired in the range of m/z 100–1500. Collision energy was fixed at 35 for MS/MS. Data acquisition was carried out with Mass Hunter software version B0.6.0 (Agilent Technologies, Santa Clara, CA, USA). An isolation width of ~1.3 Da was used for the MS/MS experiments. MS/MS was performed for each ion to identify and confirm its structure, according to the typical fragmentation pathways [10]. Internal standards PtdCho 14:0/14:0, PtdIns 16:0/16:0, and PE 14:0/14:0 (Avanti Polar Lipids) were used to confirm the ion variations observed in the MS spectra according to Lipid Maps [11]. The relative abundance of each ion was calculated by normalizing the area of each extracted ion chromatogram peak to the area of an internal standard.
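The two normalizations described above, the phosphorus-based relative abundance of each PL class and the internal-standard normalization of the MS peak areas, amount to the simple calculations sketched below; the dictionary-based data structures and names are hypothetical, chosen only for illustration.

```python
def relative_abundance_percent(spot_phosphorus):
    """Relative abundance (%) of each PL class: phosphorus in the TLC spot
    divided by the total phosphorus of the plasma lipid extract."""
    total = sum(spot_phosphorus.values())
    return {pl_class: 100.0 * amount / total
            for pl_class, amount in spot_phosphorus.items()}

def normalized_ion_abundances(peak_areas, internal_standard_area):
    """Relative abundance of each ion: extracted ion chromatogram peak area
    normalized to the peak area of the internal standard."""
    return {ion: area / internal_standard_area
            for ion, area in peak_areas.items()}
```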
Statistical Analysis
Means ± standard deviations (SD) were calculated for all data. The relative ion abundances obtained by HILIC-LC-MS for the two groups of plasma extracts were analyzed using one-way analysis of variance (ANOVA) with Bonferroni post hoc tests to determine significant differences between samples. Differences were considered significant if p < 0.05. Statistical analysis was performed using GraphPad Prism 5 for Windows version 5.0.1 (GraphPad Software, San Diego, CA, USA). Principal component analysis (PCA) classification of the data was performed using SIMCA-P+ version 12.0.1 software (Umetrics, Umeå, Sweden). PCA was applied to the most abundant PL species in each class, after log transformation to approach a normal distribution of the data, followed by Pareto scaling.
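A minimal sketch of the multivariate step described above (log transformation, Pareto scaling, then PCA), assuming the relative abundances are arranged as a samples × species matrix; it uses NumPy and scikit-learn rather than SIMCA-P+, so it only mirrors the general workflow rather than reproducing the software used here.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_scores(abundance_matrix, n_components=2):
    """abundance_matrix: rows are samples, columns are PL species
    (relative abundances, strictly positive)."""
    x = np.log(abundance_matrix)                      # log transformation
    x = x - x.mean(axis=0)                            # mean centring
    x = x / np.sqrt(x.std(axis=0, ddof=1))            # Pareto scaling
    return PCA(n_components=n_components).fit_transform(x)
```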
Results and Discussion
The pathology of many infectious diseases, including Lyme disease, is associated with altered PL metabolism. The PL composition can be considered an index of the state of the organism in health and disease, as well as an indicator of metabolic responses to pharmacotherapy [12]. TLC analysis of plasma extracts confirmed that the most abundant PL class in all plasma samples was PtdCho, while CerPCho, LysoPtdCho, PtdIns, and PtdEtn were less abundant (Fig. 1), which is the typical PL pattern for human plasma [13]. PL quantification by the phosphorus assay showed a decrease of PtdCho in the plasma of neuroborreliosis patients compared with controls, while CerPCho and LysoPtdCho were significantly more abundant (Fig. 1).
The formation of LysoPtdCho species can occur as a result of oxidative degradation of PtdCho, consistent with the increase in lipid peroxidation in plasma of neuroborreliosis patients demonstrated in our previous work [1]. The decrease in plasma PtdCho may explain the increase in lipid peroxidation found in those patients. The other PL classes (PtdIns and PtdEtn) were not significantly different between healthy subjects and patients. We then analyzed the PL profiles using a HILIC-LC-MS/MS approach, focusing on the main classes: phosphatidylcholine, sphingomyelin, lysophosphatidylcholine, phosphatidylethanolamine, and phosphatidylinositol.
Phosphatidylcholine
The most abundant PtdCho species observed in plasma [13] were identified by HILIC-LC-MS, and the fatty acyl chain compositions annotated. No significant differences in the relative abundance of PtdCho species were found between healthy subjects and neuroborreliosis patients (Fig. 2).
Sphingomyelin

These results suggest that CerPCho may play an important role in the development of pathological changes in the central nervous system of neuroborreliosis patients. It has been shown that B. burgdorferi can induce an autoimmune attack on myelin sheaths, as the glycolipid galactocerebroside, the major component of myelin, has structural similarities to the B. burgdorferi glycolipid antigen BbGL-2 [14]. Inflammatory demyelination of neurons, as well as of peripheral nerves, in Lyme disease has been suggested in previous studies [15,16]. We analyzed the relative abundances of the major CerPCho species using PCA. There were some differences in the clustering of samples from healthy volunteers and neuroborreliosis patients (Fig. S1, Supplementary Material), but these groups did not form distinct clusters.
Lysophosphatidylcholine
All of the most abundant LysoPtdCho species identified showed a tendency to increase in patients, with LysoPtdCho 16:0 and LysoPtdCho 18:2 significantly more abundant in neuroborreliosis patients than in controls (Fig. 4).
It is well known that LysoPtdCho can be generated under physiological conditions by PLA2-mediated hydrolysis of PtdCho [17], or from the hydrolysis of oxidized PtdCho by PAF-acetylhydrolase [18]. Both mechanisms are possible, although previous work demonstrating increased lipid peroxidation in neuroborreliosis patients suggests that the latter mechanism may prevail [1]. LysoPtdCho is probably involved in demyelination [19], although to date no substantiated data supporting LysoPtdCho-induced demyelination in Lyme disease have been published. PCA of the major LysoPtdCho species completely distinguished between neuroborreliosis patients and healthy controls (Fig. S2, Supplementary Material).
Phosphatidylethanolamine and Phosphatidylinositol
PtdIns and PtdEtn (Figs. 5 and 6) were not significantly different between healthy subjects and neuroborreliosis patients.
Finally, we analyzed all the PL classes (PtdCho, LysoPtdCho, CerPCho, PtdEtn, and PtdIns) using PCA. The resulting plot (Fig. S3, Supplementary Material) revealed a good separation between healthy subjects and neuroborreliosis patients. This indicates that the plasma PL profiles of patients are significantly different from those of the controls. In conclusion, to our knowledge, this is the first report of differences in plasma PL classes and their molecular species between neuroborreliosis patients and healthy subjects. Total PL quantification showed that the abundance of PtdCho in plasma is significantly lower in neuroborreliosis patients than in controls, whereas the abundances of CerPCho and LysoPtdCho were significantly higher in these patients. HILIC-LC-MS data showed that the two most abundant lysophosphatidylcholines, LysoPtdCho 16:0 and LysoPtdCho 18:2, were significantly different between neuroborreliosis patients and controls, although the relevance of this finding remains to be determined. Moreover, significant differences in the molecular composition of the sphingomyelin profiles were also observed. The plasma of neuroborreliosis patients had a significantly higher relative abundance of CerPCho d18:1/24:1 and a lower relative abundance of CerPCho d18:0/18:0. These changes could be related to the evolution of the disease, including the occurrence of the demyelination process. PCA revealed a good separation of the relative abundances of all PL class species in healthy controls and neuroborreliosis patients, with the PC axes revealing almost complete distinction between them. Further studies are needed to clarify whether these changes are specific to neuroborreliosis, and whether and how they are related to its pathogenesis. These results may be a useful starting point in defining potential PL biomarkers of neuroborreliosis.

… Portuguese Mass Spectrometry Network. This paper was prepared as a result of the cooperation between partners in the AACLifeSci project, co-funded under the Erasmus+ KA2 programme in 2015.
Compliance with ethical standards
Conflict of interest The authors declare that they have no potential conflicts of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Pricing and ordering strategies of supply chain with selling gift cards
Gift cards are frequently used to replace traditional gift cash and gift products, especially when gift givers do not know the gift receivers' preferences. Based on this phenomenon, we analyze the supplier's and the retailer's strategies with selling gift cards. First, we develop a Stackelberg model without selling gift cards. Next, we develop two models with selling gift cards, in which unredeemed gift card balances become the retailer's property and the state's property, respectively. We present the optimal solutions and examine the impacts of the parameters on the optimal decisions and the supply chain performance. When the retailer sells gift cards, the optimal order quantity is smaller than that without selling gift cards. The optimal wholesale price with selling gift cards depends on the treatment of unredeemed gift card balances. When unredeemed gift card balances become the retailer's property, the wholesale price is lower than that without selling gift cards; when unredeemed gift card balances become the state's property, the wholesale price is lower than that without selling gift cards only under some conditions. With selling gift cards, the optimal expected profits of the retailer and the supply chain improve, but the optimal expected profit of the supplier deteriorates.
1. Introduction. Giving and receiving gifts are long-standing traditions during the holiday season (Principe and Eisenhauer [15]). In the pre-holiday period, customers (referred to as gift givers) buy gift products from the retailer and give them to their family members, friends and so on (gift receivers) during the holiday period. As the demand for gift products is much greater than on normal days, the retailer has to order more gift products from the supplier in advance.
Gift products, gift cash and gift cards are three common gifts during holidays. They have different characteristics and utilities for both gift givers and receivers. Buying gift products is a good choice, but it costs gift givers much time and energy to carefully select appropriate gifts for gift receivers with different preferences. Because gift givers cannot know the preferences of gift receivers well, the selected gifts may not be the ones the gift receivers really like, and may even be ones the receivers already have. These mismatches between gift givers and gift receivers can lead to social risk (Austin and Huang [4]). To avoid wastage of the […]

Although gift cards have great development potential, few studies address the optimal pricing and ordering strategies of gift products with selling gift cards. The existing research on gift cards mainly focuses on the following three aspects. Firstly, most papers study "free" gift cards, which are offered by retailers or suppliers free of charge to customers to attract additional purchases. Khouja et al. [10] developed a model to derive optimal purchase-amount thresholds and gift card values, in which the retailer offers gift cards "free" to consumers who spend above specified thresholds in a single purchase. Khouja, Park and Zhou [12] proposed a newsvendor model to analyze the optimal strategies when the retailer gives "free" gift cards to customers who purchase a regularly priced product at the end of the selling season. Secondly, for "no-free" gift cards, most scholars have done qualitative research to investigate the impacts of gift cards on gift card users' behaviors. Waldfogel [25] designed behavioral experiments showing that, compared with receiving gifts, boundedly rational customers prefer to buy items themselves, because much cost is incurred when people give gifts and gift giving is not an efficient way to allocate social resources. Yao and Chen [30] compared gift cards with gift cash in terms of their effects on customers' information processing and product evaluation. Thirdly, few researchers have done quantitative research on selling gift cards. Khouja, Pan and Zhou [11] studied the optimal ordering and discounting of seasonal products when the retailer sells gift cards within the newsvendor framework, but they did not consider the supply chain problem. Although the literature on pricing and ordering decisions with gift cards is limited, research on pricing and ordering decisions in supply chains without gift cards has received significant attention in the past two decades (Ghoreishi et al. [7], Sadigh, Chaharsooghi and Sheikhmohammady [16]).
In this paper, we analyze "no-free" gift cards in a supply chain environment. When the retailer sells gift cards, gift cards can offset part of the gift product returns and/or bring additional profit from gift card redemption. Meanwhile, the demand for the gift product decreases because some customers buy gift cards and redeem them for non-gift products, so the order quantity of the gift product from the supplier eventually decreases. It is therefore important for the supplier and the retailer to make decisions that maximize their respective profits when gift cards are sold. We answer the following three questions: (1) does the use of gift cards benefit the retailer? (2) what are the optimal pricing and ordering strategies of the supplier and the retailer, respectively, when the retailer sells gift cards? (3) how does the sale of gift cards impact the supply chain's performance?
The rest of this paper is organized as follows. In Section 2, we describe the problem and state some reasonable assumptions. In Section 3, we present the Stackelberg models without and with selling gift cards and give their optimal solutions. We further discuss the optimal strategies when demand is uniformly distributed in Section 4. Section 5 presents the numerical analysis. Finally, we draw conclusions in Section 6.
2. Basic assumptions. This paper analyzes the optimal pricing and ordering strategies for selling gift cards in a supply chain with a Stackelberg model. We suppose that the aggregate demand x for the gift product is random. Let f(x), F(x) and 1 − F(x) be the probability density function, the cumulative distribution function and the complementary cumulative distribution function, respectively. We define the failure rate of x as h(x) = f(x)/(1 − F(x)) and assume that x has an increasing failure rate (IFR). This assumption is not restrictive, as it is satisfied by a large range of probability distributions, including but not limited to the uniform, Weibull, normal, and exponential distributions, and their truncated versions (Chua and Liu [5]). We further define the generalized failure rate of x as g(x) = x·f(x)/(1 − F(x)). Distributions with an increasing failure rate (IFR) clearly have an increasing generalized failure rate (IGFR) (Lariviere and Porteus [13]).
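As a quick numerical illustration of these definitions, the sketch below evaluates h(x) and g(x) for a normal demand distribution and checks that both are non-decreasing over the chosen range; the distribution and its parameters are arbitrary choices made only for this example.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 100.0, 30.0                  # illustrative demand parameters
x = np.linspace(1.0, 200.0, 400)
f = norm.pdf(x, mu, sigma)
F_bar = norm.sf(x, mu, sigma)            # complementary CDF, 1 - F(x)
h = f / F_bar                            # failure rate
g = x * f / F_bar                        # generalized failure rate
print(np.all(np.diff(h) >= 0), np.all(np.diff(g) >= 0))   # IFR implies IGFR here
```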
The supplier, as the Stackelberg leader, provides a single kind of gift product with marginal cost c and wholesale price w. The retailer orders the gift product from the supplier and sells it to customers. The order quantity is q and the sale price is p, and we assume that p is exogenous in all conditions. In the pre-holiday period, gift givers purchase gifts (gift products/gift cards) from the retailer, then send the gifts to gift receivers on the holiday. In the post-holiday period, the unsold gift product is sold at a discount and the salvage value is v. In the event of a stock-out, unmet demand is lost, resulting in the margin being lost, but without any additional stock-out penalty (Lariviere and Porteus [13]). Let p > w > c > v. A summary of the notation is given in Table 1. We use π to denote profit. Superscripts R and S denote the retailer and the supplier, respectively, and subscripts NG, RG and SG denote the three different conditions: without selling gift cards, with selling gift cards when unredeemed gift card balances become the retailer's property, and with selling gift cards when unredeemed gift card balances become the state's property. In our research, we do not consider the time value of money from unredeemed gift cards. Notice that the transaction can only proceed when both the supplier and the retailer can profit from the deal (Lariviere and Porteus [13]).

3.1. Without selling gift cards (NG). The decision sequence without selling gift cards is depicted in Figure 1. First, in the pre-holiday period, the supplier determines the wholesale price of the gift product, then the retailer determines the order quantity of the gift product from the supplier. Next, the gift giver purchases the gift product from the retailer. Finally, in the holiday period, the gift receiver gets the gift product from the gift giver. If the gift receiver does not like the gift product, she returns it to the retailer in the post-holiday period. Suppose the return rate of the gift product is θ (0 ≤ θ ≤ 1) and the gift receiver can return the gift product for a full cash refund. The retailer's profit function is given by Equation (1), π^R_NG = (1 − θ)p·min(q, x) + vθ·min(q, x) + v·(q − x)^+ − wq, in which the first term is the income from selling the gift product, the second term is the salvage value of the returned gift product, the third term is the salvage value of the unsold gift product and the fourth term is the purchasing cost of the gift product. Let A = (1 − θ)p + vθ; then the retailer's expected profit is given by Equation (2), Eπ^R_NG(q) = A·E[min(q, x)] + v·E[(q − x)^+] − wq. Accordingly, the supplier's expected profit is given by Equation (3), Eπ^S_NG(w) = (w − c)q. Solving the Stackelberg game, we obtain Theorem 3.1.
Theorem 3.1. Without selling gift cards, the supplier's optimal wholesale price of the gift product is w*_NG = (c − v)/(1 − g(q*_NG)) + v, and the retailer's optimal order quantity of the gift product is q*_NG, determined jointly with w*_NG by the first-order conditions in the proof.
Proof. Taking the first and second derivatives of Eπ^R_NG in Equation (2) with respect to q gives Equations (4) and (5).
According to Equation (5), Eπ^R_NG is concave in q. Hence, from Equation (4), the retailer's optimal order quantity q*_NG is set such that the first-order condition in Equation (6) holds. According to Equation (6), the wholesale price w and the retailer's optimal order quantity q*_NG have a one-to-one correspondence. Hence, let w(q*_NG) be the unique wholesale price that induces the retailer to order q*_NG, as given in Equation (7). Substituting Equation (7) into Equation (3), the supplier's expected profit can be written as in Equation (8). Thus, the supplier's expected profit function Eπ^S_NG(w(q*_NG)) is unimodal, and the optimal order quantity q*_NG is determined by the unique solution to the first-order condition. According to Equation (8), the first-order condition can be written as Equation (9). Then Equations (9) and (6) yield the optimal solutions in Theorem 3.1. According to Theorem 3.1, when w*_NG = c, the optimal stocking level of the integrated supply chain is q^I_NG.

3.2. Unredeemed gift card balances become the retailer's property (RG). When the retailer sells gift cards, the decision behaviors among the supplier, the retailer, gift givers and gift receivers in the decentralized supply chain are depicted in Figure 2. First, in the pre-holiday period, the supplier determines the wholesale price of the gift product, the retailer determines the order quantity of the gift product, and then the retailer offers the gift product and gift cards to customers. Next, the gift giver purchases the gift product or a gift card from the retailer. If the supply of the gift product is sufficient, a gift giver who finds that the gift product matches the preference of the gift receiver will purchase it; otherwise the gift giver will buy a gift card instead of the gift product. Suppose that the probability of a gift giver buying a gift card is θ (0 ≤ θ ≤ 1). If the supply of the gift product is insufficient, some gift givers who find that the gift product matches the preference of the gift receiver have to buy gift cards. Suppose the probability of a gift product buyer buying a gift card when the gift product is out of stock is β (0 ≤ β ≤ 1). Then, in the holiday period, the gift giver sends the gift product or gift card to the gift receiver. Finally, in the post-holiday period, some gift receivers who get gift cards will redeem them for non-gift products within the required time, while others will not redeem them for various reasons, such as losing the gift card or missing the redemption deadline. Suppose the average redemption rate of gift cards is α (0 ≤ α ≤ 1), and the profit margin of non-gift products is m (0 < m < 1). In this subsection, we suppose that the unredeemed gift card balances become the retailer's property; the retailer's profit function is then given by Equation (10), in which the first term is the income from selling the gift product, the second term is the income from redeemed gift cards, the third term is the income from unredeemed gift cards, the fourth term is the salvage value of the unsold gift product and the fifth term is the purchasing cost of the gift product. Let B = (1 − αθ + mαθ)p and C = (1 − β + αβ − mαβ)p; then the retailer's expected profit is given by Equation (11) and the supplier's expected profit by Equation (12). Solving the Stackelberg game, we obtain Theorem 3.2.
Theorem 3.2. When unredeemed gift card balances become the retailer's property, the supplier's optimal wholesale price of the gift product is w*_RG = (c − v)/(1 − g(q*_RG)) + v, and the retailer's optimal order quantity of the gift product is q*_RG, determined by the first-order conditions in the proof.

Proof. Taking the first and second derivatives of Eπ^R_RG in Equation (11) with respect to q gives Equations (13) and (14).
According to Equation (14), Eπ^R_RG is concave in q. Hence, from Equation (13), the retailer's optimal order quantity q*_RG is set such that the first-order condition in Equation (15) holds. According to Equation (15), the wholesale price w and the retailer's optimal order quantity q*_RG have a one-to-one correspondence. Hence, let w(q*_RG) be the unique wholesale price that induces the retailer to order q*_RG, as given in Equation (16). Substituting Equation (16) into Equation (12), the supplier's expected profit can be written as in Equation (17). Since F(x) is IGFR, the supplier's expected profit function Eπ^S_RG(w(q*_RG)) is unimodal, and the optimal order quantity q*_RG is determined by the unique solution to the first-order condition. According to Equation (17), the first-order condition can be written as Equation (18). From Equation (16), the optimal wholesale price w*_RG is obtained; then Equations (18) and (15) yield the optimal solutions in Theorem 3.2.
3.3. Unredeemed gift card balances become the state's property (SG).
Next, we consider a model in which the unredeemed gift card balances become the state's property. The retailer's profit function is given by Equation (19), in which the first term is the income from selling the gift product, the second term is the income from redeemed gift cards, the third term is the salvage value of the unsold gift product and the fourth term is the purchasing cost of the gift product. Let D = (1 − θ + mαθ)p and K = (1 − mαβ)p; then the retailer's expected profit is given by Equation (20) and the supplier's expected profit by Equation (21). Solving the Stackelberg game, we obtain Theorem 3.3.
Theorem 3.3. When unredeemed gift card balances become the state's property, the supplier's optimal wholesale price of the gift product is w*_SG = (c − v)/(1 − g(q*_SG)) + v, and the retailer's optimal order quantity of the gift product is q*_SG, determined by the first-order conditions in the proof.

Proof. Taking the first and second derivatives of Eπ^R_SG in Equation (20) with respect to q gives Equations (22) and (23). According to Equation (23), Eπ^R_SG is concave in q. Hence, from Equation (22), the retailer's optimal order quantity q*_SG is set such that the first-order condition in Equation (24) holds. According to Equation (24), the wholesale price w and the retailer's optimal order quantity q*_SG have a one-to-one correspondence. Hence, let w(q*_SG) be the unique wholesale price that induces the retailer to order q*_SG, as given in Equation (25). Substituting Equation (25) into Equation (21), the supplier's expected profit can be written as in Equation (26). Since F(x) is IGFR, the supplier's expected profit function Eπ^S_SG(w(q*_SG)) is unimodal, and the optimal order quantity q*_SG is determined by the unique solution to the first-order condition. According to Equation (26), the first-order condition can be written as Equation (27). From Equation (25), the optimal wholesale price w*_SG is obtained; then Equations (27) and (24) yield the optimal solutions in Theorem 3.3.

3.4. Comparative static analysis. Based on the above analysis, we know that the optimal wholesale price and the optimal order quantity depend on many factors. We investigate the effects of some parameters on the optimal wholesale price and the optimal order quantity; the results are shown in Theorem 3.4 and Theorem 3.5.
Theorem 3.4. For the optimal order quantity of the gift product: (i) under condition NG, the optimal order quantity q*_NG is decreasing in the return rate θ; (ii) under condition RG, the optimal order quantity q*_RG is increasing in the average redemption rate of gift cards α, but is decreasing in the probability β of a gift product buyer buying a gift card when the gift product is out of stock and in the profit margin m of non-gift products; and (iii) under condition SG, the optimal order quantity q*_SG is decreasing in the average redemption rate of gift cards α, in the probability β of a gift product buyer buying a gift card when the gift product is out of stock and in the profit margin m of non-gift products.
Theorem 3.5. For the optimal wholesale price of the gift product: (i) under condition NG, the optimal wholesale price w*_NG is decreasing in the return rate θ; (ii) under condition RG, the optimal wholesale price w*_RG is increasing in the average redemption rate of gift cards α, but is decreasing in the probability β of a gift product buyer buying a gift card when the gift product is out of stock and in the profit margin m of non-gift products; and (iii) under condition SG, the optimal wholesale price w*_SG is decreasing in the average redemption rate of gift cards α, in the probability β of a gift product buyer buying a gift card when the gift product is out of stock and in the profit margin m of non-gift products.
Proof of Theorems 3.4 and 3.5. (i) According to Theorem 3.1, define the implicit functions H1 and H2 in Equations (28) and (29), where A = (1 − θ)p + vθ. Taking the first derivatives of H1 and H2 with respect to q*_NG, w*_NG and θ and applying the implicit function theorem gives the signs of ∂q*_NG/∂θ and ∂w*_NG/∂θ. (ii) According to Theorem 3.2, define H3 and H4 in Equations (30) and (31), where C = (1 − β + αβ − mαβ)p and C − v > 0. Taking the first derivatives of H3 and H4 with respect to q*_RG, w*_RG, α, β and m gives the signs of the corresponding comparative statics. (iii) According to Theorem 3.3, define H5 and H6 in Equations (32) and (33), where K = (1 − mαβ)p and K − v > 0. Taking the first derivatives of H5 and H6 with respect to q*_SG, w*_SG, α, β and m gives the remaining results.

Theorems 3.4 and 3.5 show that, without selling gift cards, the optimal order quantity and the optimal wholesale price are decreasing in the return rate of the gift product. This is because a larger return rate increases the retailer's return cost, so the retailer decreases the order quantity, and the supplier then needs to decrease the wholesale price to encourage the retailer to order more gift products. From Theorems 3.4 and 3.5, we also know that as the proportion of unmet gift product buyers who buy gift cards increases, the retailer decreases the optimal order quantity and the supplier decreases the optimal wholesale price. This is because the gift card, as a substitute for the gift product, reduces the demand for the gift product; it also means that selling gift cards diminishes the supplier's power in the supply chain. Similarly, as the profit margin of non-gift products increases, the retailer decreases the optimal order quantity and the supplier decreases the optimal wholesale price. If the profit margin of non-gift products is larger, the retailer's profit from gift card redemption is larger, so the retailer is more willing to sell gift cards and it is reasonable for it to reduce the order quantity of the gift product. This implies that the profit margin of non-gift products further decreases the profitability of the supplier. Under condition RG, since the unredeemed gift card balances become the retailer's property, the retailer obtains more profit from a lower redemption rate. So, if the redemption rate is lower, the retailer has no incentive to order more gift products, and hence the supplier needs to decrease the wholesale price. Conversely, if the redemption rate is larger, the retailer's optimal order quantity is larger, and the supplier can set a higher wholesale price. Under condition SG, since the unredeemed gift card balances become the state's property, the retailer cannot benefit from unredeemed gift cards. So, as the redemption rate decreases, the retailer's profit from non-gift products decreases and the retailer orders more gift products. Consequently, the supplier may increase the optimal wholesale price.

4. Uniformly distributed demand. According to Theorems 3.1–3.3, we can obtain the optimal solutions for uniformly distributed demand x ∼ U(0, b), listed in Table 2.
Table 2. Optimal wholesale price and optimal order quantity under each condition.

Comparing the expressions in Table 2 gives the results of Theorem 4.1. From Table 2 and Theorem 3.4, q*_SG is decreasing in β, so q*_NG ≥ q*_SG for all 0 ≤ β ≤ 1. Thus, for the optimal order quantity, we have q*_NG ≥ q*_SG ≥ q*_RG. Theorem 4.1 shows that the optimal wholesale price in SG is always larger than that in RG. Note that θ represents the return rate without selling gift cards and the proportion of gift card sales with selling gift cards, and that the optimal wholesale price in NG is decreasing in θ by Theorem 3.5. Theorem 4.1 also shows that, as θ increases, the optimal wholesale price in NG changes from being above the optimal wholesale price in SG to being below the optimal wholesale price in RG.
As for the optimal order quantity, the optimal order quantity in NG is always larger than those in RG and SG. This means that gift cards can effectively reduce the retailer's loss due to uncertain demand. Moreover, when unredeemed gift card balances stay with the retailer, the retailer prefers to reduce the order quantity of the gift product even further.
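To make the uniform-demand case concrete, the following sketch works out the no-gift-card (NG) Stackelberg game numerically for x ~ U(0, b): the retailer's newsvendor best response follows from the expected profit with A = (1 − θ)p + vθ, and the supplier's wholesale price is found by a one-dimensional search. The closed forms printed for comparison, w* = (A + c)/2 and q* = b(A − c)/(2(A − v)), follow from this standard derivation under the stated assumptions; they are given only as a cross-check, and the parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# illustrative parameters satisfying p > w > c > v
p, c, v, theta, b = 10.0, 3.0, 1.5, 0.2, 200.0
A = (1 - theta) * p + v * theta

def retailer_best_response(w):
    # newsvendor first-order condition for x ~ U(0, b): A - (A - v) * q / b = w
    return b * (A - w) / (A - v)

def supplier_profit(w):
    return (w - c) * retailer_best_response(w)

res = minimize_scalar(lambda w: -supplier_profit(w), bounds=(c, A), method="bounded")
w_star = res.x
q_star = retailer_best_response(w_star)
print(w_star, (A + c) / 2)                       # numerical vs closed-form wholesale price
print(q_star, b * (A - c) / (2 * (A - v)))       # numerical vs closed-form order quantity
```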
5. Numerical analysis. In this section, we perform numerical analyses to examine the influence of different parameters on the optimal strategies and the supply chain performance with normally distributed demand. We also consider the integrated supply chain without selling gift cards (denoted by ING). Suppose that the mean value is µ = 100, the sale price of the gift product is p = 10 and the salvage value of the gift product is v = 0.5c.

5.1. Coefficient of variation. The impacts of the CV (coefficient of variation, σ/µ) on the optimal wholesale price, the optimal order quantity and the expected profits are shown in Figure 3. In Figure 3(a), the optimal wholesale prices in NG, RG and SG are decreasing in CV. As CV increases, demand becomes more and more unstable, and the supplier has to reduce the wholesale price to encourage the retailer to order more gift products. The optimal wholesale price without selling gift cards is higher than that with selling gift cards, and the optimal wholesale price in SG is higher than that in RG. Figure 3(b) shows that the optimal order quantities with and without selling gift cards decrease for small CV but increase for large CV. The optimal order quantity without selling gift cards is larger than that with selling gift cards, and the optimal order quantity in SG is larger than that in RG. As for the retailer's performance, Figure 3(c) shows that the retailer has the largest optimal expected profit in RG, whereas the retailer's optimal expected profit in NG is the lowest; all of these profits are increasing in CV. Figure 3(d) shows that the supplier's optimal expected profit without selling gift cards is larger than that with selling gift cards, and the supplier's optimal expected profit in SG is larger than that in RG; the supplier's optimal expected profits in the three conditions decrease as CV increases. Figure 3(e) illustrates that the performance of the supply chain in the three conditions is also decreasing in CV. According to Figures 3(c), 3(d) and 3(e), selling gift cards is beneficial to the retailer but not to the supplier, while the supply chain's optimal expected profit with gift cards is larger than that of the integrated supply chain without gift cards. Hence, selling gift cards substantially increases the supply chain's optimal expected profit.
5.2. Unit cost of gift product/sale price of gift product. The impacts of c/p (unit cost of the gift product/sale price of the gift product) on the optimal wholesale price, the optimal order quantity and the optimal expected profit are shown in Figure 4. Figure 4(a) shows that the optimal wholesale price in RG is lower than those in NG and SG, but the optimal wholesale price in SG is lower than that in NG only for larger c/p. The optimal wholesale prices in the three conditions are all increasing in c/p. Figure 4(b) shows that the optimal order quantity without selling gift cards is larger than that with selling gift cards, and the optimal order quantity in SG is larger than that in RG. The optimal order quantities in the three conditions are all decreasing in c/p. From Figures 4(c), 4(d) and 4(e), we see that the optimal expected profits of the retailer, the supplier and the supply chain are all decreasing in c/p because of the larger manufacturing cost of the gift product. Selling gift cards is beneficial to the retailer but not to the supplier. Although the supply chain's optimal expected profit with selling gift cards is not always larger than the integrated supply chain's optimal expected profit without selling gift cards, selling gift cards is still beneficial to the supply chain.
5.3. Return rate of gift products (purchase rate of gift cards). The impacts of the return rate of the gift product (or the purchase rate of gift cards, θ) on the optimal wholesale price, the optimal order quantity and the optimal expected profit are shown in Figure 5. Figure 5(a) shows that the optimal wholesale price of the gift product without selling gift cards is decreasing in θ, whereas the optimal wholesale price with selling gift cards is constant in θ. The optimal wholesale price in NG is higher than the optimal wholesale prices in SG and RG for small θ. Figure 5(b) shows that the optimal order quantities in the three conditions are all decreasing in θ. From Figures 5(c), 5(d) and 5(e), we see that gift product returns are harmful to the supplier, the retailer and the supply chain when gift cards are not sold; however, selling gift cards can improve their profits because gift cards can offset the return loss to a certain extent.
5.4. Average redemption rate of gift cards. The impacts of the average redemption rate of gift cards (α) on the optimal wholesale price, the optimal order quantity and the optimal expected profit are shown in Figure 6. Figures 6(a) and 6(b) show that the optimal wholesale price and the optimal order quantity in SG are larger than those in RG. When α increases, the optimal wholesale price and the optimal order quantity in RG increase, while the optimal wholesale price and the optimal order quantity in SG decrease. When α is small, the optimal wholesale price in SG is higher than that in NG, whereas the optimal wholesale price in SG is smaller than that in NG for larger α. The more gift cards are redeemed, the less of the unredeemed gift card balances belongs to the retailer. Figure 6(c) shows that the retailer's optimal expected profit in RG is decreasing in α, and the retailer's optimal expected profit in SG is increasing in α. Figure 6(d) shows that the supplier's optimal expected profit in RG is increasing in α, and the supplier's optimal expected profit in SG is decreasing in α. From Figure 6(e), we see that selling gift cards increases the supply chain's optimal expected profit significantly.
5.5. Proportion of gift card sales from unmet gift product buyers. The impacts of the probability of a gift product buyer buying a gift card when the gift product is out of stock (β) on the optimal wholesale price, the optimal order quantity and the optimal expected profit are shown in Figure 7. Figure 7(a) shows that the optimal wholesale price with selling gift cards is lower than that without selling gift cards for larger β, and the optimal wholesale price with selling gift cards is decreasing in β. This means that as the proportion of gift card sales from unmet gift product buyers increases, the supplier should reduce the wholesale price of the gift product. Figure 7(b) shows that the optimal order quantity with selling gift cards is decreasing in β, meaning that as β increases, the retailer will decrease the order quantity of the gift product. The impacts of β on profits are shown in Figures 7(c), 7(d) and 7(e). The retailer's and the supply chain's optimal expected profits with selling gift cards are increasing in β, while the supplier's optimal expected profit with selling gift cards is decreasing in β. For smaller β, selling gift cards is beneficial to the supplier.

5.6. Profit margin of non-gift products. The impacts of the profit margin of non-gift products (m) on the optimal wholesale price, the optimal order quantity and the optimal expected profit are shown in Figure 8. If the profit margin of non-gift products is higher, the retailer prefers to sell gift cards. Figure 8(a) shows that the optimal wholesale prices in RG and SG are decreasing in m, meaning that as the profit margin of non-gift products increases, the optimal wholesale price with selling gift cards becomes lower and lower. Figure 8(b) shows that the optimal order quantity of the gift product with selling gift cards is decreasing in m. Figure 8(c) …

6. Conclusion. In this paper, we develop Stackelberg models under three conditions to analyze the optimal strategies of the supply chain. Based on our theoretical and numerical analysis, we conclude our work as follows.
For the retailer, selling gift cards can improve her profit dramatically, especially when unredeemed gift card balances become her property. So the retailer has an incentive to encourage gift card sales, but no incentive to encourage gift card redemption when she keeps the unredeemed balances. Meanwhile, a higher proportion of gift card sales from unmet gift product buyers and a higher profit margin of non-gift products increase the profitability of the retailer.
For the supplier, his expected profit with selling gift cards is lower than that without gift cards in most conditions, so selling gift cards is generally harmful to the supplier. The reason is that the gift card, as a substitute for the gift product, makes the retailer order fewer gift products from the supplier; hence, the supplier has to reduce the wholesale price to encourage the retailer to order more gift products. In particular, when the unredeemed gift card balances belong to the retailer, the supplier sets the wholesale price at a lower level. Therefore, the supplier prefers the state government taking the unredeemed gift card balances, or a higher redemption rate of gift cards.
For the supply chain, selling gift cards increases the total expected profit of the supply chain. The additional profit mainly comes from gift card redemption for non-gift products. Meanwhile, selling gift cards allows the supply chain to reduce both the shortage cost and the overstock cost. We also find that the decentralized supply chain's expected profit with selling gift cards may be larger than the integrated supply chain's expected profit without selling gift cards. This provides the possibility of developing cooperation between the supplier and the retailer to make the supply chain better off.
There are several limitations in this paper. Firstly, we did not consider the consumer behavior of buying gift cards; we only consider the benefits from reduced product returns, while in fact many factors affect the gift giver's buying decision. Secondly, the price of the gift product was exogenous in our model; an extension of this paper may consider the price as a decision variable. Thirdly, we did not consider the supplier's participation constraint and the coordination of the supply chain.
Structures of Streptococcus pneumoniae PiaA and Its Complex with Ferrichrome Reveal Insights into the Substrate Binding and Release of High Affinity Iron Transporters
Iron scarcity is one of the nutritional limitations that the Gram-positive pathogen Streptococcus pneumoniae encounters in the human host. To guarantee a sufficient iron supply, the ATP binding cassette (ABC) transporter Pia is employed to take up iron chelated by hydroxamate siderophores, via the membrane-anchored substrate-binding protein PiaA. The high affinity towards ferrichrome enables PiaA to capture iron at very low concentrations in the host. We present here the crystal structures of PiaA in both apo and ferrichrome-complexed forms at 2.7 and 2.1 Å resolution, respectively. Similar to other class III substrate-binding proteins, PiaA is composed of an N-terminal and a C-terminal domain bridged by an α-helix. At the inter-domain cleft, a molecule of ferrichrome is stabilized by a number of highly conserved residues. Upon ferrichrome binding, two highly flexible segments at the entrance of the cleft undergo significant conformational changes, indicating their contribution to the binding and/or release of ferrichrome. Superposition onto the structure of the Escherichia coli ABC transporter BtuF enabled us to define two conserved residues, Glu119 and Glu262, which were proposed to form salt bridges with two arginines of the permease subunits. Further structure-based sequence alignment revealed that the ferrichrome-binding pattern is highly conserved in a series of PiaA homologs encoded by both Gram-positive and Gram-negative bacteria, which were predicted to be sensitive to albomycin, a sideromycin antibiotic derived from ferrichrome.
Introduction
Iron is an essential component of many biological systems and plays important roles in most living organisms. Although iron is abundant in nature, free iron ions are scarce in most local environments. Under an aerobic aqueous environment at neutral pH, the concentration of free iron is only about 10^−18 M, which is extremely low compared to the micromolar level of iron concentration required by bacteria [1]. To overcome this nutritional limitation, pathogenic and commensal bacteria have evolved alternative strategies to take up iron from the host. In humans, iron is abundant and usually exists in protein-bound forms in transferrin, ferritin, hemoglobin and cytochromes [2]. The iron in these proteins can be deprived by a wide variety of microorganisms via low-molecular-weight (500–1000 Da) iron chelators, termed siderophores, which can bind iron with an association constant as high as 10^24 M^−1 [1,3].
In Gram-negative bacteria, highly specialized siderophore-binding outer membrane receptors, such as Escherichia coli FepA binding enterobactin, are first employed to transport ferric-siderophores into the periplasmic space, driven by the energy-transducing TonB-ExbB-ExbD system [4,5,6]. Afterwards, the ferric-siderophores are forwarded across the inner membrane to the cytosol by ATP binding cassette (ABC) transporters. In contrast, Gram-positive bacteria can take up ferric-siderophores directly from the environment using the substrate-binding proteins of ABC transporters.
The cyclic hexapeptide (Gly)3-(N-δ-acetyl-N-δ-hydroxy-L-ornithine)3, a well-defined fungal ferrichrome, can be synthesized by a group of human parasitic fungi, such as Aspergillus fumigatus, Coccidioides immitis and Histoplasma capsulatum [11,12]. Previous reports indicated that the ferrichrome secreted by fungi might be shared by a variety of bacteria such as E. coli, Staphylococcus aureus and Streptococcus pneumoniae [13,14,15].
During invasion of the human host, S. pneumoniae has to escape from the host immune response and acquire sufficient nutrients to survive and proliferate. Iron scarcity is one of the key nutrient limitations encountered by the bacterium in the human host. To obtain sufficient iron, S. pneumoniae has evolved two major, highly conserved iron ABC transporters, termed the Piu (pneumococcal iron uptake) and Pia (pneumococcal iron acquisition) systems [16,17,18]. The Pia system is responsible for the transport of hydroxamate siderophores such as ferrichrome and ferrioxamine B, whereas the Piu system transports heme from hemoglobin [15,19]. Similar to E. coli FhuD, PiaA was demonstrated to bind the antibiotic albomycin (a derivative of ferrichrome). In addition, PiaA can bind another antibiotic, salmycin, a derivative of ferrioxamine B [15].
Compared to traditional antibiotics such as gentamicin and amoxicillin, albomycin is a more effective antibiotic against S. pneumoniae [20]. However, the rise of drug-resistant bacteria makes it extremely urgent to develop novel antibiotics. For instance, conjugates of siderophores and antibiotics have been applied to arrest the growth of certain bacteria [21]. Although the structure of E. coli FhuD in complex with gallichrome has been determined [10], the structural basis of siderophore binding and transport remains largely unknown.
To gain more insight into the molecular details of ferrichrome binding to PiaA, we solved the crystal structures of the periplasmic portion of PiaA before and after binding to ferrichrome at 2.7 Å and 2.1 Å, respectively. Although PiaA shares an overall structure similar to E. coli FhuD, it adopts a quite different ferrichrome-binding cleft, which enables PiaA to capture ferrichrome directly from the environment. We found that two highly flexible segments at the entrance of the cleft adopt variable conformations, indicating their contributions to the binding and/or release of ferrichrome. Moreover, structure-based multiple-sequence alignment indicated that a series of PiaA homologs have the capacity to bind ferrichrome, suggesting that the corresponding bacteria might be sensitive to albomycin.
Cloning, Expression and Purification of PiaA
Recombinant PiaA/Sp_1032 (expressing residues Asn23-Lys341) was produced by cloning piaA from the genomic DNA of S. pneumoniae TIGR4 into a pET28a-derived expression vector with an N-terminal 6×His-tag. The N-terminal secretion signal and lipidation site were deleted from the recombinant protein.
The construct was transformed into E. coli strain BL21-RIL (DE3) (Novagen), grown at 37 °C in 2×YT culture medium (5 g of NaCl, 16 g of Bacto-Tryptone, and 10 g of yeast extract per liter) containing 30 µg/ml kanamycin and 34 µg/ml chloramphenicol. When the OD600 reached about 1.0, the culture temperature was shifted to 16 °C, and protein expression was induced with 0.2 mM isopropyl β-D-1-thiogalactopyranoside for an additional 20 hr. Cells were collected and resuspended in 40 ml lysis buffer (20 mM Tris-Cl, pH 7.5, 100 mM NaCl). After sonication for 20 min followed by centrifugation at 12,000×g for 30 min, the supernatant containing the soluble protein was collected and loaded onto a Ni-NTA column (GE Healthcare) equilibrated with the binding buffer (20 mM Tris-Cl, pH 7.5, 100 mM NaCl). The target protein was eluted with 400 mM imidazole, and further loaded onto a Superdex 75 column (GE Healthcare) pre-equilibrated with 50 mM NaAc, pH 5.2. Fractions containing the target protein were pooled and concentrated to 10 mg/ml for crystallization. The selenium-methionine (Se-Met)-labeled PiaA protein was expressed in E. coli strain B834 (DE3) (Novagen). Transformed cells were grown at 37 °C in Se-Met medium (M9 medium with 25 µg/ml Se-Met and the other essential amino acids at 50 µg/ml) containing 30 µg/ml kanamycin until the OD600 reached about 1.0 and were then induced with 0.2 mM isopropyl β-D-1-thiogalactopyranoside for another 20 hr at 16 °C. Se-Met substituted His6-PiaA was purified in the same manner as described above for the native His6-PiaA.
Crystallization, Data Collection and Processing
The ferrichrome (iron free) was purchased from Sigma, and its ferric form was made by mixing with FeCl3 at a 1:1 molar ratio. For the crystal of PiaA in complex with ferrichrome, the ligand was added to 10 mg/ml PiaA at a 1:1 molar ratio in 50 mM NaAc, pH 5.2. Both crystals of Se-Met substituted and ferrichrome-binding PiaA were grown at 289 K using the hanging drop vapor-diffusion method, with the initial condition of mixing 1 µl protein solution with an equal volume of the reservoir solution (30% polyethylene glycol 400, 0.1 M NaAc, pH 4.6, 0.1 M CdCl2). The crystals were transferred to cryoprotectant (reservoir solution supplemented with 25% glycerol) and flash-cooled with liquid nitrogen. The Se-Met derivative data for a single crystal were collected at 100 K in a liquid nitrogen stream using the beamline at the Shanghai Synchrotron Radiation Facility (SSRF). The datasets were integrated and scaled with the program HKL2000 [22]. The subsequent processing of the Se-Met substituted data by PHENIX showed a severe pseudo-translation [23]. Thus another crystal, obtained at 289 K with the initial condition of mixing 1 µl protein solution with an equal volume of the reservoir solution (2.4 M sodium malonate, 0.1 M Bis-tris propane, pH 7.0), was used to solve the phase problem using heavy atom methods. The crystal with iodine was obtained by quick cryo-soaking in a solution containing 300 mM KI for about 30 sec, and mounted in a rayon loop for data collection.
Structure Solution and Refinement
The structure of PiaA was determined using the single-wavelength anomalous dispersion phasing method [24] with the iodine anomalous signal using the program phenix.solve implemented in PHENIX [25]. The initial model was built automatically with the program AutoBuild in PHENIX [25]. The resultant model was subsequently used as a search model against the 2.0 Å data of PiaA in complex with Bis-tris propane. Using the PiaA structures as the search model, the structures of Se-Met substituted and ferrichrome-binding PiaA were determined by molecular replacement with the program MOLREP [26] implemented in CCP4i [27]. The Se-Met substituted and ferrichrome-binding PiaA structures were further refined using the maximum likelihood method implemented in REFMAC5 [28], as part of the CCP4 program suite, and rebuilt interactively using the σA-weighted electron density maps with coefficients 2mFo-DFc and mFo-DFc in the program COOT [29]. The structure of PiaA in complex with Bis-tris propane was further refined with the refinement program from PHENIX and rebuilt in COOT. The final models were evaluated with the programs MOLPROBITY [30] and PROCHECK [31]. The data collection, processing and structure refinement statistics are listed in Table 1. All structure figures were prepared with the program PyMol [32].
Isothermal Titration Calorimetry Assays
Microcalorimetric titrations were carried out at 25 °C employing a MicroCal ITC 200 instrument (GE Healthcare). Both protein and ferrichrome samples were dissolved in a buffer of 50 mM sodium acetate, pH 5.2, vacuum degassed before use, and injections were carried out at 2-min intervals. The heats of dilution were determined by carrying out suitable reference titrations. The titration data were analyzed using a single-site model and evaluated with the Origin software provided by MicroCal. The affinity of PiaA and the W63A mutant towards ferrichrome is expressed as the dissociation constant (Kd).
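The single-site analysis can be illustrated with a short numerical sketch. The Python fragment below (not part of the original analysis, which was performed in Origin) fits integrated injection heats to a 1:1 binding model; the cell volume, concentrations, synthetic heats and the neglect of displaced-volume corrections are illustrative assumptions only.

```python
# Minimal 1:1 (single-site) binding fit to integrated ITC heats.
# Simplification: dilution and displaced-volume corrections are ignored.
import numpy as np
from scipy.optimize import curve_fit

V0 = 200e-6       # cell volume (L); illustrative
P_cell = 30e-6    # protein concentration in the cell (M); illustrative
L_syr = 300e-6    # ligand concentration in the syringe (M); illustrative
inj_vol = 2e-6    # injection volume (L)
n_inj = 19

# Total ligand/protein concentrations in the cell after each injection
Lt = np.array([(i + 1) * inj_vol * L_syr / (V0 + (i + 1) * inj_vol)
               for i in range(n_inj)])
Pt = np.array([P_cell * V0 / (V0 + (i + 1) * inj_vol) for i in range(n_inj)])

def bound(Pt, Lt, Kd):
    """Exact 1:1 complex concentration from the quadratic binding equation."""
    b = Pt + Lt + Kd
    return 0.5 * (b - np.sqrt(b * b - 4.0 * Pt * Lt))

def heats(_, pKd, dH):
    """Integrated heat (J) of each injection for a 1:1 model; Kd = 10**(-pKd)."""
    pl = bound(Pt, Lt, 10.0 ** (-pKd))
    dpl = np.diff(np.concatenate(([0.0], pl)))   # newly formed complex
    return dH * V0 * dpl

# Synthetic "measured" heats generated with Kd = 100 nM, dH = -40 kJ/mol
rng = np.random.default_rng(0)
q_obs = heats(None, 7.0, -40e3) + rng.normal(0, 2e-8, n_inj)

popt, _ = curve_fit(heats, np.arange(n_inj), q_obs, p0=(6.0, -30e3))
print(f"fitted Kd = {10 ** (-popt[0]) * 1e9:.1f} nM, "
      f"dH = {popt[1] / 1e3:.1f} kJ/mol")
```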
Binding Affinity of PiaA Towards Ferrichrome
A previous report has shown that the Pia ABC transporter is responsible for the acquisition of hydroxamate siderophores in S. pneumoniae [15]. To determine the affinity of the substrate-binding protein PiaA towards ferrichrome, the changes of heat were plotted against the molar ratio of ferrichrome to PiaA (Fig. 1). The fitted dissociation constant indicates an affinity considerably higher than that previously reported for E. coli FhuD towards its substrates [33]. This much higher affinity enables the Gram-positive bacterium S. pneumoniae to bind the substrates at rather low concentrations in the environment, whereas FhuD does not require such a high affinity because the substrates are already enriched in the periplasmic space of E. coli [34].
Overall Structure of PiaA
The crystal of PiaA in complex with ferrichrome was obtained by adding the ligand to the protein at a molar ratio of 1:1. To solve the phase problem, Se-Met substituted PiaA was crystallized, but the diffraction data at the selenium edge showed the existence of pseudo-translation in the crystal packing. A crystal from 2.4 M sodium malonate and 0.1 M Bis-tris propane (B3P), pH 7.0 was subjected to soaking with 300 mM KI and eventually enabled us to solve the structure (termed PiaA-B3P) at 2.0 Å using the phases obtained from a SAD experiment at the iodine edge. The structures of Se-Met substituted apo-PiaA and the ferrichrome-binding complex were subsequently solved at 2.7 Å and 2.1 Å, respectively, by molecular replacement.
In the structure of PiaA-B3P, a molecule of B3P with two conformations binds to a cleft of PiaA. Structural alignment of our PiaA-B3P with the recently released structure of B3P-binding PiaA (PDB: 4H59) from S. pneumoniae Canada MDR_19A yields a root mean square deviation (RMSD) of 0.24 Å over 268 Cα atoms. They share an almost identical overall structure and active site, except that the B3P molecules adopt different conformations.
Each asymmetric unit of the apo-PiaA structure contains two molecules, which are quite similar to each other with an RMSD of 0.24 Å over 278 Cα atoms. The overall structure of PiaA is composed of an individual N-terminal and a C-terminal domain linked by an α-helix (Fig. 2a). Both the N- and C-terminal domains consist of a central β-sheet sandwiched by α-helices on both sides. Similar to E. coli FhuD [10], this pattern of two separated domains connected by an α-helix is a common feature of the class III substrate-binding proteins [35]. The N- and C-terminal domains face each other to form a hydrophobic and plastic cleft. Segments 1 (residues Asn84-Lys90) and 2 (residues Glu248-Glu252) at the entrance of this cleft are missing from the electron density map due to their high flexibility (Fig. 2a and Fig. S1a). Notably, Segments 1 and 2 are located at the N- and C-terminal domains, respectively, indicating the cooperativity between the two domains. Different from E. coli FhuD, the most N-terminal part of the apo-PiaA structure has an extended β-hairpin, which may function as a hinge connecting the folded domain to the N-terminal membrane anchor.
Conformational Changes upon Ferrichrome Binding
In the complex structure, a molecule of ferrichrome is bound at the center of the inter-domain cleft, with the iron moiety sitting against the bottom of the cleft and the backbone pointing outwards (Fig. 2b). Similar to most class III substrate-binding proteins with a relatively rigid α-helix linker, binding of ferrichrome does not trigger significant conformational changes. Superposition of apo-PiaA and PiaA-B3P against the ferrichrome-complexed form gives an RMSD of 0.31 Å for 262 Cα atoms and 0.45 Å for 283 Cα atoms, respectively.
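For readers who wish to reproduce such comparisons, the quoted RMSD values correspond to a least-squares superposition of matched Cα coordinates. The sketch below implements the standard Kabsch procedure in Python; the random coordinates merely stand in for atoms parsed from the deposited models.

```python
# Least-squares superposition of two Calpha coordinate sets (Kabsch algorithm)
# and the resulting RMSD, as used for the apo/holo comparisons above.
import numpy as np

def kabsch_rmsd(P, Q):
    """P, Q: (N, 3) arrays of matched Calpha coordinates."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                  # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation: Q ~ P @ R.T
    diff = Q - P @ R.T
    return np.sqrt((diff ** 2).sum() / len(P))

# Toy example: a structure and a slightly perturbed, rotated copy.
rng = np.random.default_rng(1)
P = rng.normal(size=(278, 3)) * 10.0             # "apo" Calpha positions
angle = np.deg2rad(25.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + rng.normal(scale=0.3, size=P.shape)   # "holo" positions
print(f"RMSD = {kabsch_rmsd(P, Q):.2f} Angstrom")
```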
For class III binding proteins, this phenomenon of subtle conformational changes upon ligand binding poses a long-standing puzzle: how do the transmembrane subunits discriminate the apo from the ligand-bound state? Recently, the complex structure of the vitamin B12 ABC transporter (BtuCD-F) demonstrated that the substrate-binding protein BtuF binds to the transmembrane BtuC dimer via two pairs of salt bridges between Glu74/Glu202 from BtuF and two Arg56 residues from the BtuC dimer, respectively [36]. Structural comparison of the apo and holo-BtuF revealed that the segment containing Glu202 adopts a different conformation for discrimination [37]. Activity assays combined with site-directed mutagenesis of the class III substrate-binding proteins E. coli FecB [38], S. aureus FhuD2 [39] and Bacillus subtilis FeuA [40] indicated that the corresponding Glu-Arg salt bridges are indispensable for properly docking the substrate-binding protein to the permease subunits. Thus, even subtle conformational changes at either of the two Glu regions will be transferred to the salt bridges. Superposition of PiaA-ferrichrome against the structure of BtuF (PDB code: 2QI9), combined with the sequence alignment, enabled us to find two highly conserved glutamate residues, Glu119 and Glu262, which might form salt bridges with the corresponding arginine residues of PiaB and PiaC, respectively (Fig. 3a and 3b).
Upon ferrichrome binding, Segments 1 and 2, which are missing in the apo-form structure, undergo significant conformational changes (Fig. 2a and 2b). As a result of induced fit, residues corresponding to the two missing segments are folded into a loop and a γ-helix, respectively (Fig. 2b and Fig. S1b). However, the relatively higher B-factor values of these two segments indicate that they are still somewhat flexible in the complex structure (Fig. 2c and 2d).
Superposition of the two molecules in each asymmetric unit (Fig. S1c) further indicates the flexibility of these two segments, which can also be seen from their relatively higher RMSD values (1.41 Å for Segment 1 and 0.39 Å for Segment 2) compared to the overall RMSD of 0.24 Å. In detail, residues Ser87, Ala88 and Asp89 in Segment 1 of one molecule shift towards ferrichrome relative to the corresponding residues of the other molecule in the asymmetric unit, by a distance of about 3 to 4 Å. Altogether, we propose that these two flexible segments, in addition to residues Glu119 and Glu262, contribute to the majority of a tunable interface with the permease subunits PiaB and PiaC.
The Ferrichrome-binding Site
As shown in the electron density map of the complex structure, the inter-domain cleft captures a molecule of ferrichrome in the Λ-cis configuration (Fig. 4a). The ferrichrome is stabilized by a number of residues via both hydrophilic and hydrophobic interactions (Fig. 4b). The iron moiety of ferrichrome is pulled towards the C-terminal domain via three hydrogen bonds with residues Arg231 and Tyr225. The two side-chain amino groups of Arg231 form two hydrogen bonds with the carbonyl oxygen atoms of two hydroxamic acid moieties, respectively, whereas the hydroxyl group of Tyr225 makes a hydrogen bond with the carbonyl oxygen of the third hydroxamic acid moiety. In addition, the backbone of ferrichrome is stabilized at the opening side of the inter-domain cleft via three hydrogen bonds with the side chains of Trp158 and Asn83. These six hydrogen bonds keep the ferrichrome in an orientation with the iron moiety pointing inward into the binding cleft. In addition, the methylene carbon atoms of the hydroxyornithine moieties are further packed by a hydrophobic barrel (Fig. 4b). This barrel is mainly composed of four residues (Met213, Trp223, Tyr225 and Phe255) from the C-terminal domain and three residues (Trp63, Tyr84 and Trp158) from the N-terminal domain. The combination of intensive hydrophilic and hydrophobic interactions gives PiaA a very high affinity towards ferrichrome. Superposition of the complex against the apo-PiaA structure revealed that most residues in the binding cleft undergo only subtle conformational changes, except for residues Asn83 and Tyr84, which are located in the invisible Segment 1 of the apo-form. To further validate the contributions of the ferrichrome-binding residues, we mutated one of the conserved residues, Trp63, to Ala. The W63A mutant showed a Kd towards ferrichrome of about 32.8 ± 12.1 nM (Fig. S2), which is about 6-fold that of the wild-type.
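A simple way to enumerate contacts of the kind described above is a distance screen between ligand and protein polar atoms. The Python sketch below uses an illustrative 3.5 Å cutoff and placeholder atom names and coordinates (in practice they would be read from the refined model); it is not the analysis used for Fig. 4.

```python
# Crude polar-contact screen: flag ligand/protein atom pairs within a
# hydrogen-bond-like distance (3.5 Angstrom here, an illustrative cutoff).
import numpy as np

# (name, xyz) lists would normally be parsed from the PDB file; these
# coordinates are placeholders, not the deposited ones.
protein_atoms = [("ARG231:NH1", (10.2, 4.1, 7.8)),
                 ("ARG231:NH2", (11.0, 5.9, 8.3)),
                 ("TYR225:OH",  (8.7,  3.2, 5.1))]
ligand_atoms  = [("FCH:O1", (10.9, 4.8, 8.9)),
                 ("FCH:O2", (9.1,  3.5, 5.9)),
                 ("FCH:C5", (14.0, 9.0, 2.0))]

cutoff = 3.5
for pname, pxyz in protein_atoms:
    for lname, lxyz in ligand_atoms:
        d = np.linalg.norm(np.array(pxyz) - np.array(lxyz))
        if d <= cutoff:
            print(f"{pname:12s} -- {lname:8s} {d:.2f} A")
```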
Superposition of PiaA-ferrichrome against E. coli gallichrome-complexed FhuD yielded an RMSD of 7.3 Å over 195 Cα atoms, with the ligands adopting different orientations (Fig. 5). Nevertheless, the ligand binding pattern of PiaA is quite similar to that of E. coli FhuD [10]. Although the ligands in the two structures are stabilized in a similar hydrophobic pocket by residues from both the N- and C-terminal domains, the metal moieties of the two ligands adopt different orientations. The two C-terminal residues Arg231 and Tyr225 that stabilize the iron moiety of ferrichrome in PiaA correspond to the N-terminal residues Arg84 and Tyr106 of FhuD, respectively. Therefore, the metal moiety of ferrichrome is pulled towards the C-terminal domain of PiaA, whereas that of gallichrome points towards the N-terminal domain of FhuD.
A Universal Pattern to Recognize Ferrichrome by PiaA and Homologs
Sequence homology search of PiaA against the NCBI non-redundant protein database gave 74 homologs with a sequence identity of 30% or higher, from both Gram-positive and Gram-negative bacteria. Based on the phylogenetic analysis, eight representative sequences were selected for the subsequent multiple-sequence alignment (Fig. 6). Most ferrichrome-binding residues are highly conserved in all species. Except for one case in Citricoccus sp. CH26A with a variation of Tyr to Phe, residues Arg231 and Tyr225, which form hydrogen bonds with the iron moiety, are strictly conserved, indicating that the iron moiety of ferrichrome also points towards the C-terminal domain of these proteins. The other residues composing the ferrichrome-binding cleft of PiaA, except for Met213 and Trp223, are also conserved. Taken together, these homologs should bind ferrichrome and its derivatives in a pattern similar to that of PiaA. Moreover, all homologs employ the two highly conserved glutamate residues corresponding to Glu119 and Glu262 of PiaA to form the salt bridges. It was reported that the antibiotic albomycin, a derivative of ferrichrome, can be captured at a very low concentration via PiaA and subsequently kill S. pneumoniae [15]. Thus we propose that all bacterial species possessing a PiaA homolog should be sensitive to albomycin to a certain extent.
Figure 6. Multiple-sequence alignment of PiaA and homologs. The multiple-sequence alignment was performed with the programs Multialign [41] and Espript [42]. The secondary-structure elements of PiaA are displayed above the sequences. Residues forming hydrogen bonds with the three carbonyl oxygen atoms of the hydroxamic acid moieties are marked by red triangles. Residues participating in hydrophobic interactions with ferrichrome and in hydrogen bonds with the backbone of the siderophore are labeled with black triangles. Glu119 and Glu262 are marked with blue asterisks. All sequences were downloaded from the NCBI database (www.ncbi.nlm.nih.gov). The sequences (NCBI accession codes in parentheses) are S. pneumoniae PiaA (NP_345507.1), Rhodococcus pyridinivorans AK37 iron-siderophore ABC transporter substrate-binding protein (ZP_09309942.1), Microbacterium testaceum StLB037 iron hydroxamate ABC transporter periplasmic protein (YP_004226481.1), Cellvibrio gilvus ATCC 13127 periplasmic binding protein (YP_004599979.1), Citricoccus sp. CH26A hypothetical protein CCH26_07037 (ZP_09825038.1), marine actinobacterium PHSC20C1 substrate binding protein (ZP_01129479.1), Pelagibacterium halotolerans B2 periplasmic iron-siderophore binding protein (YP_004898951.1), and Kribbella flavida DSM 17836 periplasmic binding protein (YP_003378920.1). doi:10.1371/journal.pone.0071451.g006
Figure S1. Induced fit of the two highly flexible segments. a) The density map of the two missing segments shown as stereoviews in the apo structure. The two molecules in the asymmetric unit are colored in blue and orange, respectively. The terminal residues of the two missing segments are shown as yellow sticks, and the omit map is contoured at 1.0 σ. b) The density map of the two segments shown as stereoviews in the PiaA-ferrichrome structure. The two molecules in the asymmetric unit are shown in red and green, respectively. The residues of the two segments are shown as yellow sticks, and the omit map is contoured at 1.0 σ. c) Superposition of the two independent molecules in the PiaA-ferrichrome structure. The two segments are boxed with dotted rectangles.
(TIF) Figure S2. Representative raw and fitted ITC isotherms for ferrichrome titrated into the W63A mutant. Calorimetric titrations were performed at 25 °C by stepwise adding 19 drops of 2 µl ferrichrome at 300 µM, dissolved in 50 mM sodium acetate, pH 5.2, to 200 µl PiaA W63A mutant at 30 µM. (TIF) | v2
2015-05-12T20:03:21.000Z | 2015-05-12T00:00:00.000Z | 17572729 | s2orc/train | Release Early, Release Often: Predicting Change in Versioned Knowledge Organization Systems on the Web
The Semantic Web is built on top of Knowledge Organization Systems (KOS) (vocabularies, ontologies, concept schemes) that provide a structured, interoperable and distributed access to Linked Data on the Web. The maintenance of these KOS over time has produced a number of KOS version chains: subsequent unique version identifiers to unique states of a KOS. However, the release of new KOS versions pose challenges to both KOS publishers and users. For publishers, updating a KOS is a knowledge intensive task that requires a lot of manual effort, often implying deep deliberation on the set of changes to introduce. For users that link their datasets to these KOS, a new version compromises the validity of their links, often creating ramifications. In this paper we describe a method to automatically detect which parts of a Web KOS are likely to change in a next version, using supervised learning on past versions in the KOS version chain. We use a set of ontology change features to model and predict change in arbitrary Web KOS. We apply our method on 139 varied datasets systematically retrieved from the Semantic Web, obtaining robust results at correctly predicting change. To illustrate the accuracy, genericity and domain independence of the method, we study the relationship between its effectiveness and several characterizations of the evaluated datasets, finding that predictors like the number of versions in a chain and their release frequency have a fundamental impact in predictability of change in Web KOS. Consequently, we argue for adopting a release early, release often philosophy in Web KOS development cycles.
Introduction
Motivation. Knowledge Organization Systems (KOS), such as SKOS taxonomies and OWL ontologies, play a crucial role in the Semantic Web. They are at the core of any Linked Data vocabulary and provide structured access to data, formalize the semantics of multiple domains, and extend interoperability across the Web. Concepts are central entities in KOS and represent objects with common characteristics. This work addresses the following research questions:
- RQ1. Can past knowledge be used to predict concept change in Web KOS? Can this be done by extending a class-enrichment prediction method into a concept change prediction method?
- RQ2. What features encoding past knowledge have a greater influence on future changes? What classifier performs best to predict these changes?
- RQ3. Can this new method predict change in KOS independently of the domain of application? What features characterize the Web KOS where this method works best?
Findings. We run our pipeline in 139 different KOS version chains in RDF, including the Dutch historical censuses, the DBpedia ontology, in-use ontologies in the SPARQL endpoints of the LOD cloud, and Linked Open Vocabularies used all over the Web. We obtain solid evaluation performances, with f-measures of 0.84, 0.93 and 0.79 on predicting test data with learnt models. We characterize the datasets in which our approach works best. We find that features such as dataset size, the number of versions in the chain, the time gap between each version, the complexity of their schemas or the nature of the edits between versions have a strong influence in the quality of the predictive models of change.
The rest of the paper is structured as follows. In Sections 2 and 3 we survey previous efforts to address change in KOS, and define our target problem and formalism. Section 4 describes our approach, pipeline and feature set. In Section 5 we perform an experimental evaluation in 139 Web KOS version chains, describing the input data, process, results and dataset characterization. In Section 6 we discuss these results with respect to our research questions, before we conclude.
Related Work
In Machine Learning, changes in the domain are related to the phenomenon of concept drift. It is difficult to learn in real-world domains when "the concept of interest may depend on some hidden context, not given explicitly in the form of predictive features. (...) Changes in the hidden context can induce more or less radical changes in the target concept, which is generally known as concept drift" [19]. Hence, drift occurs in a concept when the statistical properties of a target variable (the concept) change over time in unforeseen ways. Multiple concept drift detection methods exist [6].
With the advent of the Semantic Web, changes in concepts have been investigated by formally studying the differences between ontologies in Description Logics [7]. [4] propose a method based on clustering similar instances to detect concept change. [20] define the semantics of concept change and drift, and how to identify them, in a Semantic Web setting. The related field of ontology evolution deals with "the timely adaptation of an ontology and consistent propagation of changes to dependent artifacts" [1]. As stated by [18], the first step for any evolution process consists in identifying the need for change; change capturing can then be studied as structure-driven, data-driven or usage-driven. Accordingly, change is only a step in the evolution process, although the definition of the goal of ontology change ("deciding the modifications to perform upon an ontology in response to a certain need for change as well as the implementation of these modifications and the management of their effects in depending data, services, applications, agents or other elements" [5,11,8]) suggests that the overlap between the two fields is considerable. [16] propose a method based on supervised learning on past ontology versions to predict enrichment of classes of biomedical ontologies, using guidelines of [18] to design good predictors of change. The need of tracing changes in KOS in application areas of the Semantic Web has been stressed, particularly in the Digital Humanities [14] and Linked Statistical Data, where concept comparability [3,15] is key.
Problem Definition
We base our definition of change in Web KOS on the framework proposed by [20]. Definition 1. The meaning of a concept C is a triple (label(C),int(C),ext(C)), where label(C) is a string, int(C) a set of properties (the intension of C), and ext(C) a subset of the universe (the extension of C).
All the elements of the meaning of a concept can change. To address concept identity over time, the authors in [20] assume that the intension of a concept C is the disjoint union of a rigid and a non-rigid set of properties (i.e. int_r(C) ∪ int_nr(C)). Then, a concept is uniquely identified by some essential properties that do not change. The notion of identity allows the comparison of two variants of a concept at different points in time, even if a change in its meaning occurs. Definition 2. Two concepts C_1 and C_2 are considered identical if and only if their rigid intensions are equivalent, i.e., int_r(C_1) = int_r(C_2).
If two variants of a concept at two different times have the same meaning, there is no concept change. We define intensional, extensional, and label similarity functions sim_int, sim_ext, sim_label in order to quantify meaning similarity. These functions have range [0, 1], and a similarity value of 1 indicates equality. We implement this framework as our definition of concept change between two KOS versions in a version chain.
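As an illustration (not the exact functions used in our implementation), the sketch below realizes sim_int and sim_ext as Jaccard similarities, sim_label as a normalized string ratio, and combines them into a boolean change predicate; the 0.9 thresholds are arbitrary assumptions.

```python
# One possible realization of sim_int, sim_ext and sim_label, and of the
# resulting change predicate. Thresholds are illustrative choices.
from difflib import SequenceMatcher

def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def sim_label(l1, l2):
    return SequenceMatcher(None, l1, l2).ratio()

def concept_changed(c1, c2, t_int=0.9, t_ext=0.9, t_label=0.9):
    """c1, c2: dicts with 'label', 'intension' (set of properties) and
    'extension' (set of members) for the same concept in two versions."""
    return (jaccard(c1["intension"], c2["intension"]) < t_int or
            jaccard(c1["extension"], c2["extension"]) < t_ext or
            sim_label(c1["label"], c2["label"]) < t_label)

old = {"label": "College coach", "intension": {"subClassOf:Person"},
       "extension": {"a1", "a2", "a3"}}
new = {"label": "College coach", "intension": {"subClassOf:Person"},
       "extension": {"a1", "a2", "a3", "a4", "a5"}}
print(concept_changed(old, new))   # True: the extension grew noticeably
```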
Approach
The basic assumption of our proposed approach is that the knowledge encoded in past versions of a Linked Dataset can be used to faithfully predict which parts of it will suffer changes in a forthcoming version. Features that have an influence in changing an ontology have been previously studied and classified [18] as: structure-driven, derived from the structure of the ontology (e.g. if a class has a single subclass, both should be merged); data-driven, derived from the instances that belong to the ontology (e.g. if a class has many instances, the class should be split); and usage-driven, derived from the usage patterns of the ontology in the system it feeds (e.g. remove a class that has not been accessed in a long time). [16] have successfully proven the use of these features (i) to predict class enrichment, that is, to estimate if a class will be extended (e.g. with new children or properties) in the future; (ii) in (OBO/OWL) ontologies; and (iii) in the biomedical domain. However, it remains unclear if supervised learning and features of [18] can be generally applied (I) to predict general change, that is, to estimate if a concept will experience change in its meaning; (II) in any Linked Dataset (i.e. generic RDF graphs); and (III) in a domain-independent manner.
In order to investigate these, we present a pipeline that includes: (a) an abstraction of the input parameters required for the learning process; (b) an abstraction of features that apply not only to OBO/OWL ontologies, but to any Linked Dataset; and (c) a pre-learning optimization technique to merge features of identical versioned concepts into single training/test individuals. Figure 1 shows the pipeline of our proposed approach. Taking input {Feature generation parameters, change definition, version chain, learning parameters}, the system returns output {Feature selection, classifier performance}.
Pipeline
First, the Feature Generator (FG) generates k training datasets and one test dataset, according to the following input set elements: (a) version chain containing N versions of a KOS, in any RDF serialization, where the change prediction is to be performed; (b) several user-set feature generation parameters that control the feature generation process (the ∆FC parameter, setting the version to be used to decide if a concept of the training dataset has changed; and the ∆TT parameter, setting the version to be used to decide if a concept of the test dataset has changed); and (c) a customizable definition of change that determines the value of the target variable. The last element of the input set, learning parameters, is passed further to be used in a later stage. Once all set, k training datasets and the test dataset are built by the FG as shown in Figure 2. The parameters N , ∆F C and ∆T T are used to determine which versions will play the role of {V t }, V r and V e . {V t } is the set of training versions, which are used to build the training dataset. V r is the reference version, against which all versions in {V t } are compared, using the definition of change provided as input, to determine whether there is concept change or not. V e is the evaluation version and is used to build the test dataset, following a similar procedure as with {V t } and V r , this time comparing V r with V e . V e is set by default to the most recent version. While extracting features, each concept is labeled depending on whether change happened between one version of the concept and the next, using definitions of Section 3. Since versions can only be compared pairwise, the FG produces k training datasets. In order to preserve identity of learning instances, the Identity Aggregator (IA) matches concepts in the k training datasets and merges their features into one individual, modifying the dataset dimensionality accordingly. The training and test datasets are then ingested by the Normalizer (Norm), which adjusts value ranges, recodes feature names and types, and discards outliers. Finally, the training and test datasets are used by the Machine Learning Interface (MLI) as an input for the feature selection and classification tasks. These are done in a generic and customizable way, building on top of the implementation of state-of-the-art machine learning algorithms contained in the WEKA API [9]. The last element of the pipeline's input set, learning parameters, is used here to achieve this and contains: (a) a feature selection algorithm to rank features according to their influence on conceptual change; (b) a relevance threshold t to filter these selected features; and (c) the list of classifiers to be trained. First, the MLI runs the chosen feature selection algorithm. Second, it trains the chosen subset of WEKA classifiers (all by default). Last, it evaluates the trained models and stores results.
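A stripped-down sketch of the FG and IA steps is given below: per-version feature dictionaries are paired into labeled training rows and then merged per concept. The flat data layout, the function names and the OR-aggregation of per-pair labels are simplifying assumptions, not the actual implementation.

```python
# Sketch of training-set construction from a version chain.
# `versions` is a list of {concept_uri: feature_dict}, oldest first;
# `changed(uri, v_a, v_b)` is the user-supplied change definition.

def build_training_rows(versions, changed, delta_fc=1):
    """Pair each training version V_t with its reference version V_r
    (delta_fc steps later) and label every concept."""
    rows = []
    for t in range(len(versions) - delta_fc):
        v_t, v_r = versions[t], versions[t + delta_fc]
        for uri, feats in v_t.items():
            if uri in v_r:                      # concept identity preserved
                rows.append((uri, t, feats, changed(uri, v_t, v_r)))
    return rows

def aggregate_identity(rows):
    """Identity Aggregator: merge the k feature vectors of one concept into
    a single training individual (features get a version suffix)."""
    merged = {}
    for uri, t, feats, label in rows:
        ind = merged.setdefault(uri, {"features": {}, "label": False})
        for name, value in feats.items():
            ind["features"][f"{name}_v{t}"] = value
        ind["label"] = ind["label"] or label    # one possible label merge
    return merged

# Tiny example with two versions and one feature.
versions = [{"ex:Coach": {"dirChildren": 0, "dirArticles": 2787}},
            {"ex:Coach": {"dirChildren": 0, "dirArticles": 3520}}]
changed = lambda uri, a, b: a[uri]["dirArticles"] != b[uri]["dirArticles"]
print(aggregate_identity(build_training_rows(versions, changed)))
```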
Feature Set
We propose sets of concept structural features and membership features. Structural features measure the location and the surrounding context of a concept in the dataset schema, such as children concepts, sibling concepts, height of a concept (i.e. distance to the leaves), etc. Since classification schemas are graphs in general and may contain cycles, these properties are defined with a maxDepth threshold that indicates the maximum level at which the property will be calculated (e.g. direct children, children at depth one, two, etc.). A concept is considered to be a child of another if they are connected by a user-specified property (e.g. skos:broader, skos:narrower or rdfs:subClassOf). We use direct children (descendants at distance 1) [dirChildren], children at depth ≤ maxDepth [dirChildrenD], direct parents (concepts this concept descends from) [parents], and siblings (concepts that share parents with this concept). Membership features measure to what extent a concept in the classification is used in the data. A data item in a Linked Dataset is considered to be using a concept of the classification if there is a user-defined membership property linking the data item with the concept (e.g. dc:subject or rdf:type). We use members of this concept [dirArticles] and total members considering all children at depth ≤ maxDepth [dirArticlesChildrenD] as membership features. Finally, we define a set of hybrid features that combine the previous into a single one (e.g. ratio of members per number of direct children) [ratioArticlesChildren, ratioArticlesChildrenD]. These sets of features map conveniently to the different types of change discovery described by [18]: structural features implement structure-driven change discovery; and membership features can be seen both as data-driven (since they describe instances belonging to the ontology) and usage-driven (since users querying these are indirectly using their classes).
These features are computed for each concept in all versions as indicated by the training and test dataset building parameters (see FG module, Section 4.1). However, not all of them may be used for predicting change. [16] show that similar features based on [18] are good candidates for modelling class enrichment. We only select those that prove to be good predictors of concept change in arbitrary domains, as chosen by the feature selection (see MLI module, Section 4.1).
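The sketch below shows one schematic way to compute such features from two relations extracted from a version: a child-parent map (e.g. from skos:broader or rdfs:subClassOf triples) and a member-concept map (e.g. from dc:subject or rdf:type triples). It is a re-implementation for illustration only, and the example URIs are made up.

```python
# Schematic computation of structural and membership features for a concept,
# given a parent relation and a membership relation extracted from the KOS.
from collections import defaultdict, deque

def build_maps(parent_edges, membership_edges):
    children, parents, members = defaultdict(set), defaultdict(set), defaultdict(set)
    for child, parent in parent_edges:          # e.g. skos:broader triples
        children[parent].add(child)
        parents[child].add(parent)
    for item, concept in membership_edges:      # e.g. dc:subject triples
        members[concept].add(item)
    return children, parents, members

def descendants(children, c, max_depth):
    seen, queue = set(), deque([(c, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for child in children[node]:
            if child not in seen:
                seen.add(child)
                queue.append((child, depth + 1))
    return seen

def features(c, children, parents, members, max_depth=2):
    desc = descendants(children, c, max_depth)
    sibs = {s for p in parents[c] for s in children[p]} - {c}
    member_union = set(members[c])
    for d in desc:
        member_union |= members[d]
    return {"dirChildren": len(children[c]),
            "dirChildrenD": len(desc),
            "parents": len(parents[c]),
            "siblings": len(sibs),
            "dirArticles": len(members[c]),
            "dirArticlesChildrenD": len(member_union)}

parent_edges = [("ex:CollegeCoach", "ex:Coach"), ("ex:Coach", "ex:Person")]
membership_edges = [("ex:article1", "ex:CollegeCoach"), ("ex:article2", "ex:Coach")]
ch, pa, me = build_maps(parent_edges, membership_edges)
print(features("ex:Coach", ch, pa, me))
```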
Evaluation
We apply our proposed approach to 139 KOS version chains retrieved from the Web. We describe the properties of such version chains, the experiment setup and the evaluation criteria. We report on our results, providing evidence to RQ2 and RQ3, evaluating: (a) the performance of the feature set as a generic predictor of change in KOS version chains (see Section 4.2); (b) the performance of the classifiers at the predicting task; and (c) characteristics of the KOS version chains where our approach works best.
Input Data
In order to study the genericity of our approach and its applicability in a domain-independent setting, we use a set of 139 multi- and interdisciplinary KOS version chains represented as Linked Data. We classify these 139 version chains into four groups: (1) the Dutch historical censuses (CEDAR); (2) the DBpedia ontology; (3) in-use ontologies retrieved from SPARQL endpoints of the LOD cloud; and (4) Linked Open Vocabularies.
Experimental Setup
Our evaluation process is two-fold. First, we assess the quality of our features as concept change predictors, and we choose the best-performing ones. We do this via feature selection (see Section 4.1). Second, we use these selected features for learning, and we evaluate the quality of the resulting classifiers on predicting concept change. To evaluate classifiers we follow a simple approach: we compare the predictions made by the classifiers with the actual concept change observed in the held-out evaluation version. In addition, we execute several learning tasks, adding more past versions to {V_t} incrementally. We study how this impacts prediction of change in V_i. We also run a learning task considering all versions, and we use the trained classifiers to predict change in the most current version.

Table 1. Top features selected by the Relief algorithm.
Rank  CEDAR feature           DBpedia feature
1     siblings                dirChildren
2     dirArticlesChildrenD2   siblings
3     ratioArticlesChildren   dirChildrenD2
For assessing model quality, we use standard performance measures: precision, recall, f-measure, and area under the ROC curve. We perform a two-fold evaluation. On the one hand, we evaluate the quality of the models produced, without making any predictions, using 10-fold cross-validation on the training data. On the other hand, we use the same indicators to evaluate the classifiers' prediction performance on the unseen test datasets V_e/V_i. We compare our results to a random prediction baseline. Table 1 shows the top features selected by the Relief algorithm [10], included in the WEKA API. The features are ordered according to their selection frequency. We observe that membership features (dirArticles, dirArticlesChildren) are systematically selected in the CEDAR data instead of structural properties (siblings, dirChildren). Conversely, we observe a clear preference for structural properties (dirChildren, dirChildrenD, siblings) in the DBpedia data. We execute our approach six times in the Dutch historical censuses (1) and the DBpedia (2) version chains, adding one Linked Dataset version to {V_t} and shifting V_i forward once each time. We identify each experiment with the year/timestamp of the version to be refined. Figure 5 shows the results. We also predict the most recent version of the DBpedia ontology, using all available versions as training set {V_t} and leaving the last for testing (V_e). Table 3 reports the corresponding results.
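The train-on-past/predict-next protocol can be expressed compactly with scikit-learn, used here only as an illustration in place of the WEKA API (GaussianNB standing in for NaiveBayes); the feature matrices below are randomly generated placeholders rather than real version-chain data.

```python
# Train-on-past / predict-next evaluation with standard metrics, using
# scikit-learn in place of the WEKA classifiers (GaussianNB ~ NaiveBayes).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 6))            # per-concept feature rows
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] + rng.normal(0, 0.5, 2000)) > 0.8
X_test = rng.normal(size=(500, 6))              # concepts of the unseen version V_e
y_test = (X_test[:, 0] + 0.5 * X_test[:, 3] + rng.normal(0, 0.5, 500)) > 0.8

clf = GaussianNB().fit(X_train, y_train)
pred = clf.predict(X_test)
prob = clf.predict_proba(X_test)[:, 1]
baseline = rng.integers(0, 2, size=len(y_test)).astype(bool)   # random predictor

for name, p in [("GaussianNB", pred), ("random", baseline)]:
    print(f"{name:10s} P={precision_score(y_test, p):.2f} "
          f"R={recall_score(y_test, p):.2f} F1={f1_score(y_test, p):.2f}")
print(f"GaussianNB ROC AUC = {roc_auc_score(y_test, prob):.2f}")
```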
Characterization of Version Chains
The last part of our evaluation consists of studying which specific characteristics of the input version chains are related to the quality of the learnt models and their predictive power (RQ3). To investigate this, we compute, for each version chain, a set of version chain characteristics that includes: size of the chain (totalSize) in number of triples; number of versions in the chain (nSnapshots); average time gap (in days) between the release dates of the versions (avgGap); average size of each version (avgSize); number of inserted new statements between versions (nInserts, measured, like the deletions, with the standard UNIX diff tool); number of deletes (nDeletes); number of common statements (nComm); whether the KOS is a tree or a graph (isTree); maximum tree depth among versions (maxTreeDepth); average tree depth (avgTreeDepth); number of instances (totalInstances); ratio of instances over all statements (ratioInstances); number of structural relationships (totalStructural); and ratio of structural relationships over all statements (ratioStructural). First, we use regression to analyse which dataset characteristics are good predictors of the performance of the best selected classifier in our approach, using the area under the ROC curve as the response variable. The best model is shown in Figure 3 (additional model details at http://bit.ly/kos-change). In these models we find that, under the null hypotheses of normality and non-dependence, the predictors nSnapshots, avgTreeDepth, ratioStructural, ratioInserts and ratioComm are good explanatory variables with respect to the performance of change detection in KOS version chains. The model in Figure 3, which includes ratioInserts while discarding ratioDeletes and ratioComm due to multicollinearity, shows the best model fit with respect to the data. Secondly, we use multinomial logistic regression to analyse which dataset characteristics are good predictors of the classifier type selected as best in our approach. A simulation with the best model is shown in Figure 4 (additional model details at http://bit.ly/kos-change). In this model we find that avgGap is influential in selecting a tree-based classifier instead of a bayes one. We also find that totalSize is influential in selecting functions- and rules-based classifiers instead of bayes ones. In Figure 4 we show a simulation of how these predictors influence the choice of the different classifier families. Observe that all classifier families except the tree-based ones become more likely to be chosen when the time gap between KOS versions decreases; in other words, more frequent releases favour most models predicting change. Interestingly, the ratios of instance and schema data influence the best classifier type in an inverse way: more instance data favours tree-based and rules classifiers, while more schema data favours bayes classifiers. We discuss the consequences of these results in the next section.
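The two characterization models can be sketched with off-the-shelf tools: ordinary least squares (statsmodels) for the AUC model and a multinomial logistic regression (scikit-learn) for the best-classifier-family model. The data frame below only mimics the predictor names used in the text; the values and the induced effects are placeholders, not our measurements.

```python
# Schematic version of the two characterization models: OLS for the area
# under the ROC curve, multinomial logistic regression for the winning
# classifier family. Data are placeholders with the predictors named above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 139
df = pd.DataFrame({
    "nSnapshots": rng.integers(2, 15, n),
    "avgGap": rng.uniform(5, 400, n),          # days between releases
    "avgTreeDepth": rng.uniform(1, 8, n),
    "ratioStructural": rng.uniform(0, 1, n),
    "ratioInserts": rng.uniform(0, 1, n),
})
df["auc"] = (0.6 + 0.02 * df["nSnapshots"] + 0.1 * df["ratioStructural"]
             + rng.normal(0, 0.05, n)).clip(0, 1)
df["bestFamily"] = rng.choice(["bayes", "trees", "functions", "rules"], n)

# (a) which chain characteristics explain prediction performance (AUC)?
X = sm.add_constant(df[["nSnapshots", "avgTreeDepth",
                        "ratioStructural", "ratioInserts"]])
print(sm.OLS(df["auc"], X).fit().summary().tables[1])

# (b) which characteristics favour which classifier family?
logit = LogisticRegression(max_iter=1000)      # multinomial softmax for >2 classes
logit.fit(df[["avgGap", "ratioStructural"]], df["bestFamily"])
print(dict(zip(logit.classes_, logit.coef_[:, 0].round(3))))  # avgGap coefficients
```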
Discussion and Lessons Learned
In this section we discuss our findings by (1) examining specific correctly predicted changing concepts; (2) discussing the different classifier performances; and (3) arguing that the relationship found between some predictors of KOS version chains and their predictability empirically supports the release early, release often philosophy in KOS development.
Fig. 4: Simulation of how predictors influence the best classifier chosen, using multinomial logistic regression. E.g., avgGap shows that smaller time gaps between releases favour almost all classifier types, except the tree-based ones.
We first explore some particular concepts predicted to change. For instance, http://cedar.example.org/ns#hisco-06, the class of "medical, dental, veterinary and related workers", is an example concept of [CEDAR] that was predicted to change and in fact did. Most of its features are highly stable across the versions, except those related to its instances: these vary from 841 sets of observations to 68, 143, 662 and 110, while structural properties like the number of children (4) or siblings (9) remain relatively stable. In the [DBpedia] version chain we find that http://dbpedia.org/ontology/CollegeCoach is a concept also expected to change. The number of Wikipedia articles pointing to it increases linearly (2787, 3520, 4036, 4870, ...); it always remains a leaf with a unique parent, so its children subhierarchy does not change either. Interestingly, its number of siblings remains stable (21, 21, 23, 23) until it gets a new parent and its siblings suddenly explode (23, 344). Therefore, it is easy to see why membership and structural features are influential in modelling changes in [CEDAR] and [DBpedia].
More generally, we discuss the performance of classification and the selection of classifiers. Although the Logistic, the MultilayerPerceptron and the tree-based algorithms perform well in specific situations, the NaiveBayes classifier shows consistent results in all change prediction experiments. Similar behavior and results have been described in [16]. Interestingly, we observe that the non-overfitting tendency of NaiveBayes is an advantage if the classifier is trained with more past versions (nSnapshots): MultilayerPerceptron, for instance, tends to overfit in this setting. Some predictions, particularly for the [CEDAR] chain, remain difficult for several reasons. First, the historical census classifications contain known coherence issues in the data (see [12]), making their changes harder to predict. Second, corner cases of conceptual change might not be captured with the feature set. Third, these [CEDAR] versions contain scarce member data that might insufficiently describe uncommon changes. Still, our refinement approach proves useful for detecting these coherence data-issues. Figure 5 shows that classification, in general, outperforms the random baseline. Having observed that past knowledge allows building predictive models for change in KOS, a meaningful question to discuss is: what characteristics of KOS version chains make changes in these chains more predictable?
[Figure: (a) 10-fold CV scores on the DBpedia ontology training dataset. (b) Prediction scores on the DBpedia ontology test dataset.]
In Section 5.4 we build regression models to understand the genericity of our approach, by observing what characteristics of our evaluated 139 KOS version chains have an influence on (a) the performance of the change prediction; and (b) the selection of one or another classifier (RQ3). According to our findings (see Figure 3), the predictors nSnapshots, avgTreeDepth, ratioStructural, ratioInserts and ratioComm are good explanatory variables of the performance of change prediction in KOS version chains. This leads to three important observations: (1) a longer version history in a KOS makes its changes more predictable; (2) schema information is more important than instance information for change modelling; and (3) inserting new statements and leaving the existing ones in a new release helps more in preserving change consistency than removing old statements. Good practices in the maintenance life cycle of Web ontologies, schemas and vocabularies can be built on top of these observations. For instance, it is important to stimulate the design of vocabularies and practices for dataset versioning, explicitly describing and linking the change history of KOS versions as Linked Data. Guidelines should encourage the inclusion of as many structural and schema triples in datasets as possible, by making their count explicit (e.g. extending the VoID vocabulary [2] to include ratios of schema and instance data) and rewarding such datasets with more visibility. In addition, the behaviour of the predictor avgGap (see Figure 4) suggests that a majority of classifiers will predict change better if the time between KOS releases is short. Hence the evidence that supports this paper's title: we encourage KOS publishers to release early, release often [17]. As in the software development philosophy, we emphasize the importance of early and frequent KOS releases. Besides the empirical evidence shown in this paper, we believe this will create a tighter feedback loop between KOS publishers and KOS users, allowing ontologies and vocabularies to progress faster, and enabling users to help define the KOS to better conform to their requirements and avoid their disuse. An early, frequent, and consistent KOS update cycle will lead, under the assumptions of this paper, to a more consistent and meaningful Web towards change.
Conclusions and Future Work
Changes in KOS pose challenges to Linked Data publishers and users. Releasing new KOS versions is a knowledge-based and labor-intensive task for publishers, and compromises the validity of links from users' datasets. We automatically detect which parts of a Linked Dataset will undergo change in a forthcoming version using supervised learning and leveraging change knowledge contained in past versions. Recalling back our research questions, our approach tackles RQ1 by providing generic and customizable change definition functions; generic and customizable features, including free choice of predicates to use in their generation; customizable learning algorithms (feature selection and classification); and fully automated executions -from input Linked Data KOS version chains to output feature/classifier performances. The assumption that change in KOS version chains can be predicted using past knowledge is acceptable considering intensional, extensional and label changes. We predict change accurately (f-measures of 0.84, 0.93 and 0.79 in test data) in 139 different KOS by generalizing the state of the art methods and features in a Machine Learning pipeline for Linked Data (RQ1). We study the variance in relevant features from our feature set, and how classifiers behave using these features to predict change (RQ2). With respect to its domain-independent applicability and the features that characterize Web KOS where our method works best (RQ3), we study the characteristics of these KOS version chains, and we find that specific features such as the number of snapshots, the time gap between versions, the complexity and amount of schema statements and the number of inter-version insertions characterize KOS with good change predictability, and we suggest research lines to foster a more meaningful and consistent Web towards change. Multiple challenges are open for the future. First, we will study how different definitions of concept change affect the predictive models. Second, we plan to apply our approach to additional domains for the sake of genericity. Finally, we plan to scale up our approach in a distributed environment to cope with larger datasets and detect change in real time. | v2 |
2018-12-20T20:36:38.928Z | 2013-05-22T00:00:00.000Z | 59392259 | s2orc/train | Self-consistent calculation of spin transport and magnetization dynamics
A spin-polarized current transfers its spin-angular momentum to a local magnetization, exciting current-induced magnetization dynamics. So far, most studies in this field have focused on the direct effect of spin transport on magnetization dynamics, but ignored the feedback from the magnetization dynamics to the spin transport and back to the magnetization dynamics. Although the feedback is usually weak, there are situations when it can play an important role in the dynamics. In such situations, self-consistent calculations of the magnetization dynamics and the spin transport can accurately describe the feedback. This review describes in detail the feedback mechanisms, and presents recent progress in self-consistent calculations of the coupled dynamics. We pay special attention to three representative examples, where the feedback generates non-local effective interactions for the magnetization. Possibly the most dramatic feedback example is the dynamic instability in magnetic nanopillars with a single magnetic layer. This instability does not occur without non-local feedback. We demonstrate that full self-consistent calculations generate simulation results in much better agreement with experiments than previous calculations that addressed the feedback effect approximately. The next example is for more typical spin valve nanopillars. Although the effect of feedback is less dramatic because even without feedback the current can induce magnetization oscillation, the feedback can still have important consequences. For instance, we show that the feedback can reduce the linewidth of oscillations, in agreement with experimental observations. Finally, we consider nonadiabatic electron transport in narrow domain walls. The non-local feedback in these systems leads to a significant renormalization of the effective nonadiabatic spin transfer torque.
Abstract
A spin-polarized current transfers its spin-angular momentum to a local magnetization, exciting various types of current-induced magnetization dynamics. So far, most studies in this field have focused on the direct effect of spin transport on magnetization dynamics, but ignored the feedback from the magnetization dynamics to the spin transport and back to the magnetization dynamics. Although the feedback is usually weak, there are situations when it can play an important role in the dynamics. In such situations, simultaneous, self-consistent calculations of the magnetization dynamics and the spin transport can accurately describe the feedback. This review describes in detail the feedback mechanisms, and presents recent progress in self-consistent calculations of the coupled dynamics. We pay special attention to three representative examples, where the feedback generates non-local effective interactions for the magnetization after the spin accumulation has been integrated out. Possibly the most dramatic feedback example is the dynamic instability in magnetic nanopillars with a single magnetic layer. This instability does not occur without non-local feedback. We demonstrate that full self-consistent calculations generate simulation results in much better agreement with experiments than previous calculations that addressed the feedback effect approximately.
The next example is for more typical spin valve nanopillars. Although the effect of feedback is less dramatic because even without feedback the current can make stationary states unstable and induce magnetization oscillation, the feedback can still have important consequences. For instance, we show that the feedback can reduce the linewidth of oscillations, in agreement with experimental observations. A key aspect of this reduction is the suppression of the excitation of short wave length spin waves by the non-local feedback.
Finally, we consider nonadiabatic electron transport in narrow domain walls. The non-local feedback in these systems leads to a significant renormalization of the effective nonadiabatic spin transfer torque. These examples show that the self-consistent treatment of spin transport and magnetization dynamics is important for understanding the physics of the coupled dynamics and for providing a bridge between the ongoing research fields of current-induced magnetization dynamics and the newly emerging fields of magnetization-dynamics-induced generation of charge and spin currents.
Introduction
When electrons flow through systems that include a ferromagnetic region, the flowing electrons become partially spin polarized due to the exchange interaction between conduction electron spins and local magnetizations. Spin transfer torques [1][2][3][4] then occur when the spin polarized current passes through another region with a magnetization non-collinear to that in the first region. The spin-polarized current exerts a torque on the non-collinear magnetization by transferring its transverse spin-angular momentum. Spin transfer torques generate a wide variety of magnetization dynamics such as full reversal of magnetization [5,6], steady-state precession [7][8][9][10], domain wall motion [11,12], and modification of spin waves [13,14].
In order to investigate current-induced magnetic excitation, it is essential to formulate the spin transfer torque N_i^ST that enters the Landau-Lifshitz-Gilbert equation of motion for the magnetization (Eq. (1)). When a spin-polarized current flows perpendicular to the layers of a magnetic multilayer, N_i^ST is commonly decomposed into an in-plane (Slonczewski-type) term and an out-of-plane (field-like) term,

N_i^ST = -a_J m × (m × p) - b_J m × p, (2)

where a_J and b_J are the coefficients of the in-plane and out-of-plane spin transfer torques, respectively, where the plane is defined to contain the two vectors m and p, and p is the direction vector of the pinned-layer magnetization, which is usually assumed to be fixed. On the other hand, when the current flows within a magnetic layer (or nanowire) with a continuously varying magnetization, e.g. domain walls and spin waves, N_i^ST for a one-dimensional system is taken as [23][24][25]

N_i^ST = -u ∂m/∂x + β u m × ∂m/∂x, (3)

where u (= g μ_B P j_e / (2 e M_S)) is the spin current velocity corresponding to the adiabatic spin transfer torque, P is the spin polarization, j_e is the charge current density, M_S is the saturation magnetization, and β is the ratio of the nonadiabatic spin transfer torque to the adiabatic one [24,25].
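To make explicit how a purely local torque of the form of Eq. (2) enters the dynamics, the sketch below integrates a single-spin (macrospin) LLG equation with an in-plane torque term in reduced units. The parameter values, sign conventions and the explicit Landau-Lifshitz form are illustrative choices, and the non-local feedback discussed in this review is deliberately absent.

```python
# Macrospin LLG integration with a local Slonczewski-type spin transfer torque
# (Eq. (2)); reduced units, illustrative parameters, no non-local feedback.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.01                       # Gilbert damping
h_eff = np.array([0.0, 0.0, 1.0])  # effective field along +z (reduced units)
p = np.array([0.0, 0.0, 1.0])      # pinned-layer direction
a_j, b_j = -0.02, 0.0              # a_j < 0 acts as anti-damping in this geometry

def llg(t, m):
    m = m / np.linalg.norm(m)
    precession = -np.cross(m, h_eff)
    damping = -alpha * np.cross(m, np.cross(m, h_eff))
    stt = -a_j * np.cross(m, np.cross(m, p)) - b_j * np.cross(m, p)
    return (precession + damping + stt) / (1.0 + alpha ** 2)

m0 = np.array([np.sin(0.1), 0.0, np.cos(0.1)])   # small initial tilt from +z
sol = solve_ivp(llg, (0.0, 2000.0), m0, max_step=0.5)
mz = sol.y[2] / np.linalg.norm(sol.y, axis=0)
print("final m_z =", round(float(mz[-1]), 3))    # anti-damping torque reverses m
```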
Equations (2)-(3) for the spin transfer torque are based on the assumptions that the spin transfer torque depends on the magnetization only instantaneously and locally. Using the instantaneity assumption, N_i^ST is derived by solving the spin transport equation for relevant systems with fixed (= time-independent) magnetization profiles and is then applied to the magnetization dynamics. This instantaneity assumption depends on the ability to decouple the spin transport dynamics from the magnetization dynamics. The decoupling is justified based on the difference in time scales [21,26]. The characteristic time scales for the spin transport, set by the relaxation and dephasing of the conduction electron spins, are much shorter than the characteristic time scale of the magnetization dynamics, so the spin transport can be treated as responding instantaneously to the magnetization.
However, the local approximation is not always valid. For example, consider a system consisting of a single ferromagnet (FM) layer sandwiched by two normal metal (NM) layers, where the charge current flows perpendicular to the FM|NM interfaces. The current through the layers generates a spin accumulation, which in turn can generate a spin transfer torque whenever it is not collinear with the magnetization at an interface. Although the direction and magnitude of the spin transfer torque at a point on an interface depends locally on the spin accumulation at the same point, the spin accumulation has an inherently non-local dependence on the magnetization due to spin diffusion. Strictly speaking the spin transfer torque remains local even in this case, but a local interaction between the spin accumulation and the magnetization leads to non-local effective interactions for the magnetization after spin accumulation has been integrated out. In this paper, we call this feedback non-local spin transfer torque because there is a non-local effective relation between the spin torque and the magnetization profile. For the torque acting on the single FM layer, lateral spin diffusion in the two neighboring NM layers [27,28] is an important source for non-locality of the torque.
Even when net charge flow is perpendicular to the layers, spin diffusion occurs not only along the perpendicular direction but also along the lateral direction (or in-plane direction).
Due to this lateral spin diffusion, the spin accumulation at a point on the FM|NM interface depends on the magnetization at other points on the interface within the reach of the spin diffusion. Whenever the magnetization is inhomogeneous in the film plane, the non-local torques will be non-zero. Even if the magnetization is initially in a single-domain state, the conventional local spin transfer torques or thermal fluctuations make the magnetization inhomogeneous [29][30][31][32][33][34] and the non-local torques then become non-zero. This non-local spin transfer torque acts as a source of feedback from the magnetization to the spin transport, which, in turn, further affects the magnetization dynamics.
A complete understanding of current-induced magnetic excitations requires a careful treatment of this non-local feedback. In this review, we do so by self-consistently solving the two dynamic equations simultaneously, one for magnetization and the other for spin accumulation. In Secs. 2 and 3, we present examples where the self-consistent calculation is essential to capture properties of the coupled dynamics. Section 2 presents the effect of lateral spin diffusion on the magnetization dynamics in layered structures. We first analyze in detail current-induced excitation of a single FM and then the current-driven magnetization oscillation in spin valves that contain two FM layers. Section 3 presents current-induced motion of a narrow domain wall. Here, we use a semiclassical approach to calculate spin transfer torques in the ballistic limit. We end the paper by remarking on the prospects for future work on self-consistent calculation of spin transport and magnetization dynamics.
Non-local spin transfer torque in layered structures
We consider two types of non-local spin transfer torques in layered structures. One is caused by lateral spin diffusion along the interface of FM|NM. The other is related to the coupling of local spin accumulation along the vertical (thickness) direction of the layers, which is effective when there are more than three ferromagnetic layers. In this section, we focus on the former and briefly discuss the latter in Section 2.4.
Basic concept of non-local spin transfer torque due to lateral spin diffusion
Spin transfer torques caused by lateral spin diffusion, which we will refer to as "lateral spin transfer torque", were proposed by Polianski and Brouwer [27]. The geometry of the system under consideration is shown in Fig. 1: a conduction electron spin s_1, scattered at a point of the NM_1|FM interface with local magnetization m_1, accumulates on the NM_1 side with its moment parallel to m_1. This s_1 laterally diffuses along the interface, hits the interface at another point with magnetization m_3, and then scatters from the interface, transmitting with some probability and reflecting with some probability. The moment of the reflected part of s_1 is anti-parallel to m_3 and that of the transmitted part is parallel to m_3. Since the spin angular momentum of s_1 changes in this scattering process, the change must be transferred to m_3 to conserve spin angular momentum. As a result, m_3 experiences a spin transfer torque τ_1 that pushes m_3 to align with m_1; i.e., the spin transfer effect on the side of the interface NM_1|FM, where the majority spins accumulate, tends to suppress any inhomogeneity in the ferromagnetic magnetization. On the other hand, on the side of the interface FM|NM_2 (bottom right panel), where minority spins tend to accumulate, the conduction electron spin s_2 scattered by a local magnetization m_1 initially has its moment anti-parallel to m_1. Through the lateral diffusion and the backscattering process by m_3, the moment of s_2 becomes anti-parallel to m_3. This backscattering process generates a spin transfer torque τ_2 whose direction is opposite to that of τ_1; i.e., the spin transfer effect on the side of the interface FM|NM_2, where the spin accumulation is negative, tends to enhance inhomogeneity in the magnetization. Note that the lateral spin transfer torque is inherently non-local because the magnetization everywhere couples together through lateral spin diffusion.
In symmetric systems, τ_1 and τ_2 cancel each other and the lateral spin transfer torque has no net effect. Here we assume that the FM layer is sufficiently thin that the magnetization is uniform along the thickness direction. Making the thicknesses of NM_1 and NM_2 different, i.e., L_1 ≠ L_2, breaks the symmetry and removes this cancellation. The spin accumulation at the interfaces NM_1|FM and FM|NM_2 can be found by solving the two second-order differential equations proposed by Valet and Fert [35]. It is straightforward to use Valet-Fert theory in one dimension to show that asymmetric devices give asymmetric spin accumulation.
Here l_sf is the spin diffusion length, μ_e is the electrochemical potential for the electron density, and μ_S is the spin chemical potential (proportional to the spin accumulation n_S through the Einstein relation, where σ is the electrical conductivity, D is the diffusion constant, and n is the number density corresponding to the spin accumulation). Figure 2 shows the profiles of μ_S along the z-axis for symmetric (L_1 = L_2, Fig. 2(a)) and asymmetric (L_1 < L_2, Fig. 2(b)) structures. We use the boundary condition μ_S = 0 at both interfaces between the non-magnetic layers and the reservoirs. This choice is motivated by the idea that the reservoirs have an infinite density of states, which drives the spin accumulation to zero. Alternatively, placing NM layers with large spin-orbit coupling, such as Pt or a Pt alloy, at these interfaces induces rapid spin-flip scattering, which also drives the spin accumulation to zero. For a symmetric structure (Fig. 2(a)), the spin accumulations at the left and right interfaces of the FM are of the same magnitude but opposite sign, whereas for an asymmetric structure (Fig. 2(b)), they are of different sign and magnitude.
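For reference, the Valet-Fert equations mentioned above are commonly written, in terms of the spin-resolved electrochemical potentials μ_↑ and μ_↓, as (a sketch; normalization conventions for μ_e and μ_S vary between references):

∇²(μ_↑ − μ_↓) = (μ_↑ − μ_↓)/l_sf²,    ∇²(σ_↑ μ_↑ + σ_↓ μ_↓) = 0,

with μ_S built from the difference μ_↑ − μ_↓ and μ_e from the conductivity-weighted average of μ_↑ and μ_↓.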
Note that Fig. 2(b) describes the case of charge current flowing from NM_1 to NM_2, where the sum of the spin accumulations at the FM|NM interfaces is negative. In this case, τ_2 dominates over τ_1; i.e., the lateral spin transfer torque tends to increase any inhomogeneities in the magnetization. Reversing the current polarity reverses the spin accumulation so that τ_1 dominates over τ_2; i.e., the lateral spin transfer torques suppress inhomogeneities.
Previous studies on non-local spin transfer torque due to lateral spin diffusion
Besides Ref. [27], several experimental [36][37][38] and theoretical [28,[39][40][41] studies have been performed to understand the lateral spin diffusion effect. Özyilmaz et al. [36] experimentally observed current-induced excitation of a single ferromagnetic layer. For an asymmetric Cu/Co/Cu nanopillar structure, current-induced excitations were observed for only one polarity of the current, where, according to the prediction [27], the lateral spin transfer torque should increase the magnetization inhomogeneity. In addition, they did not observe such excitations in a symmetric structure, as expected from the discussion above.
Özyilmaz et al. [37] also reported experimental results indicating that strong asymmetries in the spin accumulation cause spin wave instabilities in spin valve structures at high current densities, similar to those observed for single magnetic layer junctions.
One of us [28] theoretically extended the initial calculation [27] of lateral spin transfer torque to general situations to allow for variation of the magnetization in the direction of the current-flow. Such variation can give instabilities at a single interface, a possible explanation for spin transfer effects seen in point contact experiments [38]. Brataas et al. [39] reported a theoretical study on the mode dependence of current-induced magnetic excitations in spin valves, and found agreement with the experimental results of Ref. [37]. These calculations [27,28,39] are limited to the linear regime. Even though they identify the onset of instabilities, they do not address the behavior of instabilities after the initial nucleation. Adam et al. [40] performed finite-amplitude self-consistent calculations of spin transport and magnetization dynamics for current-induced magnetic excitations of a thin ferromagnetic layer with asymmetric non-magnetic layers. Their work provided an important proof-of-principle for lateral spin transfer torque, but lacked the spatial resolution and sophistication of full-scale micromagnetic simulations. Hoefer et al. [41] performed a numerical study based on semiclassical spin diffusion theory for a single-layer nanocontact using a convolution approach to calculate the steady-state spin accumulation. They found that directionally controllable collimated spin wave beams can be excited by the interplay of the Oersted field and the orientation of an applied field. These self-consistent calculations [40,41] computed the spin accumulation with either one-dimensional or two-dimensional steady-state solutions of the spin accumulation.
In this section, we show numerical results based on the three-dimensional dynamic solutions of the spin accumulation self-consistently coupled with the magnetization dynamics.
Such self-consistent treatments are essential to correctly describe the finite amplitude evolution of the spin wave modes excited by lateral spin transfer torque.
Modeling scheme
We self-consistently solve the equations of motion of the local magnetization (Eq. (6)) and of the spin accumulation n_S (Eq. (7)) [27,28,39], where m is the unit vector of local magnetization, γ is the gyromagnetic ratio, and μ_0 H_eff is the effective field (including magnetostatic, crystalline anisotropy, exchange, and current-induced Oersted fields). The charge and spin currents across each interface are described by Eqs. (8)-(9), in which the electric potential and the potential drop over the interface enter together with the spin-dependent interface conductances G_s (s = ↑ or ↓); the last term of Eq. (9), proportional to ∂m/∂t, gives the spin-pumping contribution [44], which couples the magnetization dynamics and the spin current. It is characterized by the mixing conductance G_↑↓. Generally, the mixing conductance has a real and an imaginary part, which couple to the in-plane and out-of-plane terms in the dynamics, respectively. Although the out-of-plane spin transfer torque is important in magnetic tunnel junctions [45-53], it is negligible in fully metallic multilayers [54,55]. Thus we neglect Im(G_↑↓) and the associated out-of-plane spin transfer torque. At the FM|NM interface, the charge current J_e and the longitudinal spin current J_S·m are continuous, and the spin accumulation inside the ferromagnet is taken to be collinear with m. μ_S and m are related through Eqs. (7)-(9) and the spin version of Ohm's law, with boundary conditions μ_e = −eV (μ_e = 0) and μ_S = 0 at the far-right (far-left) end of the non-magnetic electrodes. We note that Eq. (9) is valid for a ferromagnet thinner than the exchange length but thicker than the transverse penetration length.
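Schematically, and up to convention-dependent prefactors, the coupled bulk equations have the generic drift-diffusion/LLG structure sketched below; this is an illustration of the structure only, not a verbatim transcription of Eqs. (6)-(7):

∂m/∂t = −γ m × μ_0 H_eff + α m × ∂m/∂t + N^ST[n_S],
∂n_S/∂t = −∇·J_S − n_S/τ_sf − (J_ex/ħ) n_S × m,

where J_S is the spin-current tensor, τ_sf a spin-flip relaxation time, and J_ex the exchange coupling between the accumulation and the local magnetization.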
Since the spin accumulation in the NM layers must be taken into account, the patterned part of the Cu leads or spacer is also included in the simulation. Thus, an additional boundary condition for the spin accumulation is required at the side wall of the nano-pillar. We assume that no spin current flows out of the system, i.e., ∂n_S/∂r_n = 0, where r_n is the surface normal at the side wall. All simulations repeat two alternating steps: (i) solve Eq. (6) with all boundary conditions to obtain a converged magnetization configuration, and then (ii) solve Eq. (7) to obtain the equilibrium spin accumulation configuration. These steps are repeated.
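The alternating two-step procedure just described can be illustrated with a toy script. The two "solvers" below are schematic stand-ins (a single damped-precession update and an exponential relaxation of n_S), not the actual finite-difference solvers of Eqs. (6)-(7); all numerical values are arbitrary and chosen only for illustration.

import numpy as np

# Toy illustration of the alternating scheme: (i) one magnetization update,
# (ii) one spin-accumulation update, repeated.
GAMMA = 1.76e11          # gyromagnetic ratio (1/(T s))
ALPHA = 0.01             # Gilbert damping

def llg_step(m, n_s, h_eff, dt):
    """One explicit damped-precession step with a schematic torque ~ m x (m x n_s)."""
    precession = -GAMMA * np.cross(m, h_eff)
    damping = -GAMMA * ALPHA * np.cross(m, np.cross(m, h_eff))
    torque = np.cross(m, np.cross(m, n_s))        # schematic spin transfer term
    m_new = m + dt * (precession + damping + torque)
    return m_new / np.linalg.norm(m_new)          # keep |m| = 1

def spin_accumulation_step(n_s, m, dt, tau_sf=1e-12, tau_ex=1e-13):
    """Relax n_s toward a toy injected value and dephase its transverse part."""
    injected = 1.0e9 * np.array([1.0, 0.0, 0.0])  # toy current-injected spins
    n_s = n_s + (injected - n_s) * dt / tau_sf
    n_par = np.dot(n_s, m) * m                    # component along m survives
    n_perp = n_s - n_par                          # transverse part dephases quickly
    return n_par + n_perp * np.exp(-dt / tau_ex)

m = np.array([0.1, 0.0, 1.0]); m /= np.linalg.norm(m)
n_s = np.zeros(3)
h_eff = np.array([0.0, 0.0, 1.0])                 # 1 T field along z
dt = 1e-14
for _ in range(10000):                            # alternate steps (i) and (ii)
    m = llg_step(m, n_s, h_eff, dt)
    n_s = spin_accumulation_step(n_s, m, dt)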
The choice of boundary conditions at the side wall of the nanopillar gives different results than the convolution method used in Ref. [41]. We show that this difference is not important and discuss other differences between the two approaches in Appendix A.
Single ferromagnet
In this section, we show the main features of current-induced single ferromagnetic layer excitations, obtained from self-consistent calculations. The layer structure is Cu_1 (10 nm) | Co (t_Co) | Cu_2 (52 nm − t_Co), where t_Co varies from 2 nm to 8 nm. As explained above, the asymmetric Cu leads provide an asymmetric spin chemical potential μ_S on each side of the Co layer.
The average spin chemical potential at the interfaces, μ (= μ_S(Cu_1|Co) + μ_S(Co|Cu_2)), is negative when the electrons flow from the thick to the thin Cu layer, corresponding to a negative current. This negative μ provides negative lateral spin transfer torques.
We use the following geometric and magnetic parameters for the single ferromagnetic Co layer. We consider a nanopillar with an elliptical cross-section of 60 nm × 30 nm, M_S is 1420 kA/m, the exchange stiffness constant is 2×10^-11 J/m, the gyromagnetic ratios of the ferromagnet and non-magnet are 1.9×10^11 T^-1 s^-1 and 1.76×10^11 T^-1 s^-1, respectively, we assume no anisotropy field, the Gilbert damping constant is 0.01, the unit cell size is 3 nm, and the discretization thickness of the Cu layers varies depending on the total Cu thickness and is not larger than 5 nm. Our results with these cell sizes are converged, based on test calculations for a few configurations using smaller cells. The transport parameters for Cu, Co, Pt, and their interfaces are summarized in Table 1. The non-local self-consistent calculation of the dynamics takes approximately 300 times longer than a local calculation.
We calculate magnetic excitations as a function of the out-of-plane field (μ_0 H = 0 T to 4.6 T) and current (I = -15 mA to +15 mA) at 0 K. Initial magnetic configurations are obtained by applying the out-of-plane field for each case at zero current and zero temperature, and then a current is applied. Figure 3 shows the results. The magnetization does not saturate at large negative currents even though H is larger than H_d, consistent with the data in Ref. [36]. The normalized modulus of the magnetic moment (= |M|/M_S) is smaller than 1 for those bias conditions (Fig. 3(b)), indicating that the magnetic state deviates considerably from the single-domain state. The instability is characterized by a wave vector k describing the spatial variation of the magnetization [28].
In Fig. 5, we compare the critical currents obtained from the self-consistent calculation with those derived theoretically in the linear limit, given by Eq. (10) [27,28,39]. Here J_ex is the spin stiffness, S is the area of the free layer, μ_0 H_d is the out-of-plane demagnetization field, and the renormalized Gilbert damping constant depends on the wave number q of the spin wave through a function G(q), in which ± refers to the left and right (or top and bottom) Cu layers. S_1 is the magnitude of the lateral spin transfer torque in dimensionless units and involves a function g_m. In Fig. 5(b), we compare the calculated slope (= dI_C/dμ_0H) with that obtained from Eq. (10) for various q values. Here, we use the same spin transport parameters as in the self-consistent calculations to obtain the theoretical slopes. The simulation results are in reasonable agreement with the analytic ones for q = π/(60 nm). This good agreement is obtained only when the spin pumping term in Eq. (9) is included. Note that 60 nm is the length of the device along the in-plane easy axis, suggesting that the wavelength of the lowest-energy spin wave mode is twice the device length, owing to the geometry and the Oersted field. However, the slopes from the simulations and the analytic results do not agree well with those observed in the experiment (black solid symbols in Fig. 5(b)). This discrepancy may be due to differences between the spin transport parameters used here and the true experimental values.
One aspect of the comparison between theory and experiment that improves going from the analytic model to the full solution is the intercept of the extrapolated boundary at I = 0.
From Eq. (10), the theoretical intercept at I = 0 is the out-of-plane demagnetization field μ_0 H_d. The value of μ_0 H_d slightly decreases from 1.6 T to 1.4 T as t_Co increases from 2 nm to 8 nm, caused by the change in the demagnetizing factors with the geometry of the FM junction. In the experiment of Ref. [36], however, the intercept is found to be much smaller than μ_0 H_d. The simulated intercepts are also considerably smaller than μ_0 H_d, and the intercept decreases with increasing t_Co, as shown in the inset of Fig. 5(b). Thus, the intercepts obtained from the self-consistent calculation are in better agreement with the experimental observations than the purely theoretical ones. We attribute this better agreement to the fact that the self-consistent model more realistically takes into account the influence of the shape and finite size of the nano-pillar on the spin wave mode, as we discuss below. The Oersted field and the finite size make the magnetization laterally inhomogeneous, and the interplay between this laterally inhomogeneous magnetization and the negative lateral spin transfer torque excites spin waves, resulting in a rapid decrease of ⟨M_z⟩ within a few nanoseconds.
To understand spin wave mode excitation by lateral spin transfer torques, we perform an eigenmode analysis of the magnetization dynamics (Fig. 6(b)-(d)). To calculate eigenmodes, we choose the bias condition I = -11 mA and μ_0 H = 2.5 T, which shows a periodic oscillation of ⟨M_z⟩. We note that such periodic oscillations are observed only for some bias conditions and that the magnetic excitation is highly nonlinear in general. The spectral density of ⟨M_z⟩ shows two peaks, at frequencies f_L (≈ 75 GHz) and 2f_L (≈ 150 GHz) (Fig. 6(b)), where f_L satisfies f_L = γ_Co μ_0 H/(2π). On the other hand, for a single-domain state we expect the precession frequency to be γ_Co μ_0 (H − H_d⟨M_z⟩/M_S)/(2π), because the effective magnetic field experienced by the magnetization is the sum of the external field and the internal demagnetization field. At I = -11 mA, ⟨M_z⟩/M_S is about 0.6 (Fig. 6(a)); in this approximation, the precession frequency would be 46 GHz, which is much smaller than the obtained precession frequency f_L. This disagreement indicates that the precession frequency is mostly determined by the external out-of-plane field μ_0 H, and that contributions from μ_0 H_d are negligible. An eigenmode analysis of the spatial patterns (Fig. 6(c) and 6(d)) at the two peak frequencies gives some insight into this peculiar field dependence. The eigenmode images are obtained from the local power spectrum S_z(r, f) [57].
The precession region with higher power is localized at the edges. These eigenmodes are unique features originating from the lateral spin transfer torque and are not expected for field-driven excitation [57].
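The eigenmode images referred to above are constructed from the local power spectrum S_z(r, f). As an illustration of how such maps can be obtained from simulation output, a minimal sketch is given below; the array layout (mz[t, y, x]) and the sampling interval are assumptions, and this is not the analysis code used for Fig. 6.

import numpy as np

def local_power_spectrum(mz, dt):
    """Local power spectrum S_z(r, f) from a time series mz[t, y, x].

    Returns the frequency axis and the power at every cell; a 2D map of the
    power at a chosen frequency gives the spatial image of that mode.
    """
    mz = mz - mz.mean(axis=0)                   # remove the static component
    spec = np.fft.rfft(mz, axis=0)              # FFT along the time axis
    power = np.abs(spec) ** 2                   # power per frequency and cell
    freqs = np.fft.rfftfreq(mz.shape[0], d=dt)
    return freqs, power

# e.g. mode map near 75 GHz for data sampled every 1 ps (illustrative numbers):
# freqs, power = local_power_spectrum(mz, dt=1e-12)
# mode_map = power[np.argmin(np.abs(freqs - 75e9))]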
Spin valves
In this section, we apply the self-consistent non-local model to a spin-valve structure with two ferromagnetic layers experimentally studied by Sankey et al. [42]; the layer stack, beginning with a Cu (80 nm) lead, is that of Ref. [42]. We use the same parameters for Cu as in the previous section and replace the parameters for Co by parameters for Py provided by the Cornell group. The pillar has an elliptical cross-section of 120 nm × 60 nm, M_S is 645 kA/m [42], the exchange stiffness constant is 1.3×10^-11 J/m, the Gilbert damping is 0.025 [10], and the unit cell size is 5 nm. The transport parameters for Py and Py|Cu are summarized in Table 1. We assume that the magnetization of the pinned layer (Py, 20 nm) is fixed along the in-plane easy axis and that it produces no stray field. While the pinned layer is likely not fixed in reality, we keep it fixed in order to focus on the effect of lateral diffusion. For finite-temperature simulations, we add a Gaussian-distributed random fluctuation field [58] (mean = 0, standard deviation set by 2k_B T/(M_S V Δt), where Δt is the integration time step and V is the volume of a unit cell) to the effective field for the magnetization. We test the convergence of the stochastic calculations and find that the results are converged for Δt below 50 fs, based on the average magnetization along the easy axis. For stochastic simulations, one may require temperature- and cell-size-dependent renormalization of parameters in order to take into account the effect of magnons with wavelengths shorter than the unit cell size employed in the simulations. Several ways to renormalize the exchange constant and the saturation magnetization have been proposed [59,60]. However, we are not aware of any way to renormalize the damping constant and the spin transfer torques, and these parameters are of critical importance for the calculation of current-induced magnetic excitations. In this work, we therefore do not consider temperature- and cell-size-dependent renormalization of parameters. We also neglect any temperature dependence of the transport parameters.
To investigate whether or not the reduced linewidth originates from lateral spin transfer torques, we perform numerical simulations based on three different approaches: (i) a macrospin model (MACRO), (ii) a conventional micromagnetic model that does not include the lateral spin transfer torque (CONV), and (iii) a non-local, self-consistent model (SELF). Figure 7 shows contours of the spectral density of ⟨M_x⟩ as a function of the current at a temperature T = 4 K, with a field of 50 mT applied along the in-plane easy axis (// x). Positive current corresponds to electron flow from Cu (2) to Cu (6), and thus to positive lateral spin transfer torque. The macrospin simulations show the familiar red- and blue-shift depending on the bias current I (Fig. 7(a)). The conventional simulations show only a red-shift up to a critical current (I_C^CONV ≈ 2 mA, Fig. 7(b)); above this current the magnetization dynamics becomes complicated owing to the excitation of incoherent spin waves. As indicated by an arrow, secondary peaks are observed at about half the frequency of the main peaks, corresponding to the precession of end domains [31]. In the non-local, self-consistent simulations, similar secondary peaks are observed, indicating deviations from a single-domain state, but the peak structures are much clearer than in the conventional simulations up to about 2.4 mA, which is larger than I_C^CONV (Fig. 7(c)). A blue-shift followed by a transition region is also observed, indicating that positive lateral spin transfer torques improve the coherence of the magnetization dynamics. At low temperature, it is evident that the non-local, self-consistent simulations give the narrowest linewidth. We calculate the temperature (T) dependence of the linewidth from Lorentzian fits (Fig. 8(b)). At low temperatures (T < 50 K), the non-local, self-consistent simulations give narrower linewidths than the other two approaches, consistent with experimental observations [42]. However, we observe that the linewidths computed from the macrospin simulation are wider than those computed from the conventional micromagnetic simulation. This counterintuitive result may be due to the fact that the linewidth is affected by the precession angle [8]. By estimating the precession angle of the micromagnetic results from the spatial average of the magnetization components, we find that the macrospin and micromagnetic models give different precession angles, whereas the two micromagnetic models give similar angles at the bias current. Because of these limitations, direct comparisons of the linewidths between the macrospin and micromagnetic simulations may be of limited value. Below, we discuss the effect of the self-consistent feedback on the linewidth by comparing the two micromagnetic modeling approaches; this comparison is relatively free from the above-mentioned limitations and shows that the feedback reduces the linewidth.
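The linewidths discussed here are extracted by fitting Lorentzian line shapes to the computed power spectra. A minimal, self-contained sketch of such a fit is shown below; the function and variable names are illustrative, and this is not the actual fitting script of this work.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, amp, offset):
    """Single Lorentzian line with center f0 and full width at half maximum fwhm."""
    return amp * (fwhm / 2.0) ** 2 / ((f - f0) ** 2 + (fwhm / 2.0) ** 2) + offset

def fit_linewidth(freqs, power, f0_guess, fwhm_guess):
    """Return (peak frequency, linewidth) from a Lorentzian fit to a spectrum."""
    p0 = [f0_guess, fwhm_guess, power.max() - power.min(), power.min()]
    popt, _ = curve_fit(lorentzian, freqs, power, p0=p0)
    return popt[0], abs(popt[1])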
From Fig. 8(b), we find that the non-local, self-consistent model gives a narrower linewidth than the conventional micromagnetic model for T < 50 K. This suggests that the coupling among local magnetizations induced by the positive lateral spin transfer torque indeed results in a substantial improvement of the precession coherence time at low temperature.
For T > 50 K, however, the linewidth in the non-local self-consistent simulation increases more rapidly than in the conventional micromagnetic simulation. We note that this does not mean that the positive lateral spin torque makes the linewidth very broad at high temperatures.
As shown in Fig. 8(c), the more rapid increase in the linewidth in the non-local self-consistent simulations originates from a mode splitting. We find that power spectra calculated from the non-local self-consistent simulations consist of two peaks: a narrow main peak at a higher frequency, indicated by up-arrows, and a secondary broad peak at a lower frequency.
The frequency of the secondary peak does not change much with temperature, whereas that of the main peak increases slightly with temperature. This kind of mode splitting has been observed in experiments [61] and numerical studies based on a conventional micromagnetic model with no lateral spin torque [62,63]. Because of this mode splitting, the linewidth obtained from the fit using a single Lorentzian function increases rapidly with temperature.
In the low-temperature limit, two nonlinear effects of the positive lateral spin transfer torques may cause the narrower linewidths in the non-local, self-consistent simulations: an increase of the effective exchange stiffness at short range and an increase of the damping of incoherent spin waves at long range. As a result, positive lateral spin transfer torques provide an additional nonlinearity to the spin-wave damping. For spin-torque nano-oscillators, the linewidth in the low-temperature limit (i.e., T < 10 K in our case) is given by the expression of Refs. [64,65], whose ingredients are the positive damping of the oscillator, the equilibrium linewidth in the passive region, a phenomenological coefficient Q characterizing the nonlinearity of the positive damping, the normalized power P, the thermal energy k_BT, the average energy of the stable auto-oscillation, the ferromagnetic resonance frequency, the critical current I_C for the magnetic excitation, the nonlinear frequency shift coefficient, the effective nonlinear damping, the bias current I, the spin-polarization efficiency, and the volume V_0. The additional nonlinearity supplied by the lateral spin transfer torque is a form of nonlinear feedback [66,67], which has been widely observed in various fields such as optics [68], mechanics [69], and biology [70]. While this nonlinear feedback typically requires an external feedback element, in spin valves it is inherent. It should be noted that in Refs. [65,71] the large Q is purely phenomenological; our non-local self-consistent treatment suggests that the large Q may be caused by the lateral spin diffusion.
Thus, the nonlinear spin-wave damping due to lateral spin transfer torque is probably responsible for narrower linewidths in the non-local, self-consistent simulations at low temperatures. For the opposite polarity of the current (i.e. negative lateral spin transfer torque), we observe an increase of the linewidth (not shown) as would be expected for the case when lateral spin diffusion enhances inhomogeneity.
Summary
To summarize this section, we report non-local, self-consistent calculations for current-induced excitation of a single ferromagnetic layer and of spin valves. The former are in good agreement with previous theoretical [27,39] and experimental studies [36]. They provide an improved understanding of the coupled dynamics between magnetization and spin transport, and of the excitation of spin wave modes for negative lateral spin transfer torques. In the case of a single ferromagnetic layer, only negative net lateral spin transfer torques lead to spin wave instabilities, while positive ones do not. In spin valve structures, self-consistent calculations are crucial for a correct evaluation of the oscillation linewidth.
Whereas the conventional spin transfer torque and its interplay with the Oersted field tend to cause a large amplitude incoherent spin wave excitation [29][30][31][32], the positive lateral spin transfer torque effect captured by the self-consistent calculation tends to reduce spatial inhomogeneities (suppress spin waves) and leads to more coherent magnetization dynamics at low temperatures. This effect would be beneficial for microwave oscillators utilizing spin transfer torque, where a narrow linewidth is a key requirement.
Lateral spin transfer torques are non-zero only when three conditions on the layer structure and the magnetization are simultaneously satisfied. A second type of non-locality arises in multilayer structures with a synthetic free layer, which are of considerable interest for MRAM applications [72-77] and spin-transfer-torque oscillators [78,79]. In this structure, not only are there spin transfer torques at the NM_1|FM_2 interface, but also at the FM_2|NM_2 and NM_2|FM_3 interfaces (Fig. 9). Furthermore, the spin transfer torques at each interface depend on the orientation of both of the other magnetizations (Fig. 9(c)), because the local spin accumulations at each interface are vertically coupled through the whole layer structure. In this kind of structure, the spin transfer torque is non-local even without lateral spin diffusion, and a self-consistent calculation is required to investigate current-induced magnetic excitation [80-82].
Non-local spin transfer torque for a narrow domain wall
One of the central issues for current-induced domain wall motion is how to reduce the threshold current density needed to move the domain wall, for two reasons. A typical threshold current density for a metallic ferromagnet is about 10^12 A/m^2 [11,88]. Such high current densities cause significant Joule heating, making it difficult to distinguish spin transfer effects from heating effects [89-93]. From an application point of view, devices need to operate with current densities lower than this threshold to minimize electromigration. For these reasons, there has been substantial research directed toward reducing the threshold current density, and several solutions have been proposed. One approach is to exploit the resonant dynamics of domain wall motion by controlling current pulse widths [94] or by injecting consecutive current pulses [95]. Another approach is to reduce the hard-axis anisotropy that the spin transfer torque must overcome to move a domain wall. Such reductions can be achieved by shaping the nanowire geometry appropriately, since the hard-axis anisotropy is caused by geometry-dependent demagnetizing effects, as predicted theoretically [96] and verified experimentally [97].
Yet another approach is to increase the nonadiabatic spin transfer torque, which controls the wall motion for small currents in ideal nanowires. When electrons flow through a spatially slowly varying magnetization configuration, their moments tend to stay aligned with the magnetization. Since this requires the moments to rotate, there must be a reaction torque on whatever causes them to rotate, i.e., the magnetization. This reaction torque has the form of the first term in Eq. (3) [1,2] and is referred to as the adiabatic spin transfer torque because it comes from the spins "adiabatically" following the magnetization. The other term in Eq. (3) is perpendicular to the adiabatic spin transfer torque and is referred to as the nonadiabatic spin transfer torque, even though some contributions to it occur in the adiabatic limit. Without the nonadiabatic torque, the adiabatic torque in combination with the other terms in the LLG equation leads to intrinsic pinning for currents below a threshold [23]. Intrinsic pinning happens because the wall distorts as it moves and the distortion leads to torques that oppose the motion. The nonadiabatic spin transfer torque acts like a magnetic field for domain walls and thus makes the threshold current density vanish for an ideal nanowire; the larger the nonadiabatic torque, the faster the domain wall motion for small currents. Spin relaxation of the conduction electrons is one mechanism that gives rise to such a nonadiabatic torque [24]. Similarly, band structures with spin-orbit coupling and electron scattering give both damping [113] and nonadiabatic torques [103], both of which can be calculated from first principles [104,105].
The nonadiabatic torque due to these mechanisms does not depend on the domain wall width.
For domain walls much wider than the characteristic length scales of spin transport, these mechanisms are the only ones that make the spin current deviate from the magnetization direction and give a non-adiabatic spin transfer torque.
Additional mechanisms become more significant as domain walls get narrower. For moderately narrow domain walls (width ≈ 5 nm to 10 nm), spin diffusion can increase the effective nonadiabaticity [114,115]. The nonadiabaticity of narrow domain walls has also been addressed in Refs. [110,112]. For narrow domain walls, the role of the non-local spin transfer torque in the domain wall motion may be important.
Previous studies on self-consistent calculation for current-induced domain wall motion
Manchon et al. [114] theoretically predicted that spin diffusion generates an additional spin transfer torque that effectively enhances the nonadiabatic torque. This prediction was examined with a self-consistent calculation coupling the drift-diffusion model and the LLG equation [115], which found an increase in the effective nonadiabaticity for a vortex wall but only minimal changes for a transverse wall, consistent with the theoretical prediction of Ref. [114].
For Bloch or Néel walls formed in perpendicularly magnetized nanowires, this spin diffusion torque does not enhance the effective nonadiabaticity, because the wall is a simple one-dimensional domain wall, in contrast to vortex walls. Then, when the domain wall is extremely narrow, ballistic spin-mistracking becomes the important mechanism for changing the nonadiabatic torque. Ohe et al. [116] performed self-consistent calculations to investigate this effect based on a lattice model [117] in which the conduction electrons are treated quantum mechanically, so that spin mixing in the states of the conduction electrons is fully taken into account. They found that when the Fermi energy of the electrons is larger than the exchange energy (i.e., a typical situation for transition metals), spin precession induces spin-wave excitations in the local magnetization. This spin-wave excitation contributes to the domain wall displacement at low current densities but reduces the domain wall velocity at large current densities as compared to the adiabatic limit.
Here, we present self-consistent calculations of the non-local spin transfer torque based on a semiclassical, free-electron approach. Our approach differs from the previous self-consistent calculation [116] in two respects. One difference is the determination of which electron states are occupied. In a Landauer picture, the Fermi levels of the leads are fixed and different. The Fermi level of the material between the leads adjusts in response to the applied voltage to create local charge neutrality. This adjustment leads to a current flow that is half excess electrons moving forward and half a deficit of electrons moving backwards. Ref. [116] introduced extra right-propagating electrons in the energy range E_F < E < E_F + eV, where V is the voltage drop across the nanowire (Fig. 10(a)). Since electrons were added to the equilibrium Fermi sea, charge neutrality was violated in their calculation. In contrast, we introduce extra right-propagating electrons in the energy range E_F < E < E_F + eV/2 and remove left-propagating electrons in the energy range E_F − eV/2 < E < E_F (Fig. 10(b)), so that charge neutrality is preserved. This difference in occupancy results in an important difference in the spatial distribution of the non-local spin transfer torque between Ref. [116] and our work: in Ref. [116] the oscillatory non-local spin transfer torque appears on only one side of the domain wall, whereas in our work it appears on both sides (see Fig. 10(c) and (d)). An additional difference between the calculations is that Ref. [116] assumes one-dimensional mesoscopic transport by considering only a single electron channel (k-normal, k // x), whereas we treat the non-equilibrium spins over the full three-dimensional Fermi surface. Treating the full Fermi surface generates spin dephasing because of the variation of the precession length over the Fermi surface. Figure 10(c) shows that for a spin transfer torque calculation with a single electron channel, as in Ref. [116], the non-local oscillation of the spin transfer torque is very significant and does not decay even far from the domain wall. In contrast, in our approach the oscillation is suppressed at large distances from the wall, owing to the strong spin dephasing (Fig. 10(d)).
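In terms of zero-temperature occupation functions for right- and left-moving states, the two occupancy choices just described can be summarized schematically as

Ref. [116]:  f_→(E) = Θ(E_F + eV − E),   f_←(E) = Θ(E_F − E);
this work:   f_→(E) = Θ(E_F + eV/2 − E),  f_←(E) = Θ(E_F − eV/2 − E),

so that the added right-movers and the removed left-movers compensate and local charge neutrality is preserved.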
Semiclassical approach
Here we use a semiclassical approach proposed by one of us [100], which is based on two main approximations: ballistic transport and a parabolic band structure. With these approximations, we show that mistracking torques can make important contributions to domain wall dynamics. For all but extreme cases, these contributions can be captured through effective values of local parameters. This simple model maximizes the importance of the non-local effects, but since the effects can largely be accounted for by a local approximation, our use of this "best case" is appropriate. We expect a local parameterization to be even more appropriate when scattering and realistic band structures are taken into account.
Before explaining the model details, let us discuss the relevance of this simple model. We expect the ballistic limit to be appropriate for materials with very short domain wall widths (about 1 nm), which are shorter than the mean free path. A ballistic transport picture becomes less appropriate when domain wall widths exceed the mean free path and the precession length; in that case, we expect scattering to reduce the non-local effects obtained from a ballistic transport model. We also expect non-local effects to be weaker for realistic band structures than for parabolic band structures, because dephasing is stronger for realistic band structures. Thus, we expect the results for a parabolic band structure to set an upper limit on the importance of non-local effects. We show below that, in most cases we consider, the non-local effects can be accounted for by suitably renormalized local parameters; we expect that conclusion to be even stronger for more realistic band structures and for domain walls in which scattering is important. The Hamiltonian contains a kinetic term and an s-d exchange term proportional to σ·m̂(x), where σ is the vector of the three Pauli matrices; the exchange field is aligned along the local magnetization everywhere and describes the magnetic field experienced by a conduction electron spin through the s-d exchange coupling. Its magnitude (Eq. (19)) is parametrized by the majority and minority Fermi wave vectors, with the Fermi energy setting the overall scale. The spatial evolution of the single-particle spin for a given energy E is obtained from Eq. (20), and the spin-dependent Fermi-Dirac distribution function implies that the distribution of electrons outside the region of inhomogeneous magnetization is characteristic of the zero-temperature bulk [100]. From the resulting non-equilibrium spin density one obtains the spin current density, and the spin transfer torque is given by the spatial change of the spin current. We insert this semiclassical calculation of the spin transfer torque into the LLG equation, Eq. (1). At every integration time step, we perform the semiclassical calculation of the non-local spin transfer torque for the given magnetization profile, and then update the magnetization dynamics using this spin transfer torque for the next time step. This procedure is repeated, so that the effect of the spin transfer torques on the magnetization dynamics and the subsequent feedback are taken into account self-consistently.
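As a sketch, free-electron s-d models of this type are commonly written as follows (the sign convention and the parametrization of the exchange splitting Δ are assumptions here and may differ from the definitions used in Eq. (19)):

H = −(ħ²/2m_e) ∇² − Δ σ·m̂(x),    Δ ≈ ħ²(k_F↑² − k_F↓²)/(4 m_e),

with the torque density obtained from the spatial change of the transverse spin current, N^ST(x) ≈ −∂_x Q_⊥(x).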
Several remarks on the computation are in order. First, the length of the nanowire treated in the calculation should be much longer than the domain wall width; if not, an unphysical non-equilibrium spin density can arise from discontinuities at the edges. Second, multiscale modeling is important to reduce the computation time. In this work, the unit cell size for solving the LLG equation is more than 10 times larger than that for solving the semiclassical spin transport equation. The smaller cell size for the spin transport calculation is essential to ensure convergence when solving Eq. (20).
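As an illustration of the multiscale step mentioned above, the sketch below interpolates the magnetization from a coarse micromagnetic grid onto a finer transport grid and averages the resulting torque back onto the coarse cells. The one-dimensional grids, the linear interpolation, and the array shapes are assumptions made for this example only.

import numpy as np

def coarse_to_fine(m_coarse, x_coarse, x_fine):
    """Interpolate each magnetization component onto the fine transport grid."""
    m_fine = np.stack([np.interp(x_fine, x_coarse, m_coarse[:, i])
                       for i in range(3)], axis=1)
    # re-normalize to unit length after interpolation
    return m_fine / np.linalg.norm(m_fine, axis=1, keepdims=True)

def fine_to_coarse(torque_fine, n_sub):
    """Average the torque computed on the fine grid back onto the coarse cells."""
    return torque_fine.reshape(-1, n_sub, 3).mean(axis=1)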
Current-induced domain wall motion by non-local spin transfer torque
A qualitative explanation for the origin of the non-local spin transfer torque is as follows.
When the domain wall is sufficiently wide compared to the precession period of the spin density, set by the spin-split Fermi wave vectors, the precession amplitude of the spin density is small and averages out when integrated over the Fermi surface. As a result, the spin direction of the spin current is almost perfectly aligned along the local magnetization direction, so that the spin transfer torque can be defined locally by the gradient of the local magnetization. In contrast, when the domain wall is narrow and its width is comparable to the precession period, the precession amplitude is considerable even at points far from the domain wall, and the spin transfer torque becomes non-local.
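The relevant length scale is thus the spin precession length set by the spin-split Fermi wave vectors; schematically,

λ_prec ∼ 2π/(k_F↑ − k_F↓),

so that walls much wider than λ_prec recover the local limit, while walls of width comparable to λ_prec develop the oscillatory non-local torques of Figs. 10 and 11.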
In this work, we carry out micromagnetic simulations for a quasi-one-dimensional nanowire (i.e., the nanowire is discretized along the length direction, but not along the width or the thickness direction), self-consistently coupled with the semiclassical spin transport calculation. We assume a perpendicularly magnetized nanowire; the computed spin transfer torques are indeed non-local (Fig. 11(c)). Figure 12 shows the domain wall velocity v_DW as a function of the spin current velocity u_0, obtained from the self-consistent calculation. We did not observe any significant spin wave excitations, in contrast to Ref. [116]. We attribute this difference to the fact that the non-local spin transfer torque is not as significant as in Ref. [116], owing to the strong spin dephasing (see Fig. 10). The overall trends of v_DW are similar to those expected from the local approximation with a nonzero local nonadiabaticity [24,25]. When the spin current velocity u_0 (proportional to the current density) is small, v_DW is linearly proportional to u_0, and the slope v_DW/u_0 in the linear range increases with decreasing domain wall width λ_DW. When u_0 exceeds a certain threshold (u_WB, indicated by down arrows in Fig. 12), v_DW deviates from the linear dependence. The threshold u_WB corresponds to the Walker breakdown [83,118], above which the domain wall undergoes precessional motion. These overall trends of v_DW indicate that the non-local spin transfer torque indeed acts like an additional local nonadiabatic spin transfer torque.
We can understand this similarity by using a collective-coordinate approach to analyze the results obtained from the self-consistent model.
Following Thiele's work [119], we assume the domain wall structure m = (sin θ cos φ, sin θ sin φ, cos θ), where sin θ = sech[(x − X(t))/λ_DW], cos θ = tanh[(x − X(t))/λ_DW], and φ = φ(t). Here, X is the domain wall position, φ is the domain wall tilt angle, λ_DW is the domain wall width, and t is time. After some algebra, one obtains the equations of motion of the collective coordinates (X, φ) in the rigid-domain-wall limit (i.e., ∂λ_DW/∂t = 0), where K_d is the hard-axis anisotropy of the domain wall. In the local approximation, the relevant torque integrals can be evaluated analytically. In our case, however, J_b and J_c must be obtained by integrating Eqs. (27) and (28) numerically, because of the non-local nature of both the adiabatic and nonadiabatic contributions. From these one obtains two parameters that effectively describe the average adiabaticity (≈ effective spin polarization) and the average nonadiabaticity of the non-local spin transfer torque, respectively. The dependence of these effective parameters on λ_DW is summarized in Fig. 13. The effective adiabaticity is close to 1 for large λ_DW and decreases with decreasing λ_DW. In contrast, the effective nonadiabaticity is close to 0 for large λ_DW and increases with decreasing λ_DW. The changes in the effective nonadiabaticity are much more significant than those in the effective adiabaticity. Given the uncertainty in the proper parameters to describe these systems, it is likely that changes in the effective adiabaticity will be much more difficult to observe than those in the effective nonadiabaticity.
Based on Eqs. (25) to (28), one can define several important physical quantities of domain wall dynamics (see Appendix B for details): the threshold spin current velocity u_WB for the Walker breakdown, the domain wall velocity v_steady for u_0 < u_WB, and the average domain wall velocity ⟨v⟩ for u_0 >> u_WB, given by Eq. (29) and the expressions that follow it. In Fig. 14, we show how well the local approximation, with the effective parameters of Fig. 13, describes the self-consistent calculation results of Fig. 12. When they agree, there is no need for the full self-consistent solution; instead, one can calculate the effective adiabaticity and nonadiabaticity from the semiclassical expressions in Eqs. (27) and (28) and use them in the LLG equation with the local approximation for the spin transfer torque. When valid, this procedure significantly reduces the computation time. The plots of v_DW versus u_0 are mostly similar in the two approaches (Fig. 14(a)-(e)), but there are some discrepancies. An important discrepancy concerns the Walker breakdown threshold u_WB. For instance, when K_u is 3×10^6 J/m^3 (equilibrium λ_DW ≈ 2.03 nm) (Fig. 14(b)), u_WB for the self-consistent calculation is about 310 m/s, whereas u_WB for the local approximation is about 220 m/s. This difference in u_WB is caused by the fact that λ_DW changes in the simulation but is treated as a constant in deriving the local approximation. As the current increases, the domain wall tilt angle also increases.
This change in φ causes a change in K_d and, in turn, a change in λ_DW (Fig. 14). Evaluating Eq. (29) with an effective adiabaticity close to 1 and a value of 0.03 for the remaining parameter, one finds that u_WB indeed changes substantially due to this nonlinear effect, as shown in Fig. 14(b). We conclude that the local approximation, with the effective adiabaticity and nonadiabaticity calculated from the spin transport equations, captures the core effect of the non-local spin transfer torques qualitatively, but it cannot reproduce the results obtained from the self-consistent calculation quantitatively unless the parameters are artificially adjusted.
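For comparison, the corresponding results of the purely local one-dimensional model are commonly quoted as follows (a sketch with convention-dependent prefactors; in the present context the effective adiabaticity and nonadiabaticity of Fig. 13 play the roles of P and β):

v_steady ≈ (β/α) u_0   (u_0 < u_WB),    ⟨v⟩ ≈ u_0 (1 + αβ)/(1 + α²)   (u_0 ≫ u_WB),
u_WB ∝ [α/|β − α|] γ λ_DW K_d / M_S.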
Summary
To summarize this section, we have shown self-consistent calculations for the current-induced dynamics of narrow domain walls. We find that for narrow domain walls the self-consistent calculations predict the spin transfer torque to be non-local and spatially oscillatory, owing to the ballistic spin-mistracking mechanism. The non-local spin transfer torque generates domain wall motion, and its effect is thus similar to that of a local nonadiabatic spin transfer torque.
However, some of its effects, such as the Walker breakdown threshold value, cannot be fully captured by the local nonadiabatic spin transfer torque approximation. Therefore, when λ_DW is close to 1 nm, it is necessary to adopt the self-consistent calculations for a quantitative description of current-driven domain wall motion. It is worth comparing our result with available experimental ones. Thomas et al. [94], Heyne et al. [109], and Eltschka et al. [111] have found that vortex cores exhibit a much larger nonadiabaticity than transverse domain walls. According to our result, this large nonadiabaticity of vortex cores is unlikely to be caused by ballistic spin-mistracking, since a typical width of a vortex core is about 10 nm. The large reported values of the nonadiabaticity in these systems are more likely to be related to the spin diffusion effect [114,115] and/or the anomalous Hall effect [120]. On the other hand, Burrowes et al. [112] have studied very narrow Bloch-type domain walls of about 1 nm in FePt nanowires and found that such narrow domain walls do not cause a significant increase in the nonadiabaticity. This experimental result is inconsistent with our self-consistent calculation. Assuming that λ_DW in the experiment is indeed around 1 nm, there are a few possible reasons for this discrepancy. Our model assumes a spherical Fermi surface within the free-electron approximation, whereas the shape of a realistic Fermi surface usually deviates substantially from a sphere. If a realistic Fermi surface were considered, the contribution from the non-local spin transfer torque would likely be reduced because of the additional spin dephasing due to the complicated Fermi surface, as mentioned earlier in Sec. 3.4. Another possible reason for the inconsistency is that the experiment of Ref. [112] used thermally activated depinning from a point defect to estimate the nonadiabaticity. Since the width of the FePt nanowires in the experiment is about 200 nm, it is reasonable to assume that a domain wall could bend when escaping from a point defect. If this is the case, our one-dimensional model calculation should not be compared to this experiment, since a two-dimensional domain wall structure may cause additional spin dephasing. Therefore, we believe that better-defined measurements are needed to experimentally test the role of the non-local spin transfer torque due to ballistic spin-mistracking for narrow domain walls.
Although there are some ambiguities in directly comparing our model calculation to experiments, our result indicates that it may be important to perform self-consistent calculations to understand current-induced dynamics of narrow domain walls in detail. Since many recent experiments have utilized materials systems with high perpendicular magnetic anisotropy, combining experimental measurements and self-consistent calculations would be essential to understand the underlying physics and to design efficient domain wall devices.
Conclusion and outlook
In this review, we have presented self-consistent calculations of spin transport and magnetization dynamics. Before ending this review, we remark that the examples discussed so far are not the only cases for which a self-consistent treatment is required. In the following, we briefly comment on other examples where the feedback mechanism is non-trivial.
Giant magnetoresistance is often considered as an inverse effect of spin transfer torque.
Just as spin transfer torques in multilayers and nanowires are similar processes in different geometries, so are spin pumping and the spin motive force. Spin pumping occurs in bilayer structures where a ferromagnetic layer is attached to a non-magnetic layer [44,121,122]. A precessing magnetization in the ferromagnet pumps a spin current into the non-magnet, transferring energy and angular momentum from the ferromagnet to the conduction electrons of the non-magnet. This transfer increases the magnetic damping rate in the ferromagnet. However, the pumped spin current generates a spin accumulation in the non-magnet, and this spin accumulation in turn generates a back-flow current into the ferromagnet through diffusion processes. A quantitative description of the enhancement of the Gilbert damping [44] and of the voltage drop across the interface [126] requires a proper treatment of the balance between the pumping and back-flow currents. One approach for such calculations is the magnetoelectronic circuit theory used in Section 2.
The spin motive force, on the other hand, arises in systems with a single ferromagnet [123-125], such as a magnetic nanowire. When the magnetization varies in both space and time, conduction electrons experience a spin-dependent electric field that generates spin and charge currents. Early calculations of the spin motive force [123-125,128-130] and of the consequent enhancement of the Gilbert damping [136-141] did not consider other processes that might be important: spin accumulation, spin diffusion, and spin-flip scattering. However, just as it is necessary to properly consider the back-flow current for a description of spin pumping, so it is necessary to consider these processes for a calculation of the spin motive force. Several of us have investigated these effects theoretically and found that spin relaxation processes significantly modify the spin motive force [142]. For example, charge currents are perfectly canceled by diffusion currents in one-dimensional systems. Spin currents become non-local and are reduced, depending on the characteristic length of the spatial variation of the magnetization and the spin diffusion length. For such one-dimensional systems, we provided an analytical expression for the spin motive force including spin relaxation processes [142]. For two- or three-dimensional systems, however, such analytical solutions are not available, so that self-consistent calculations would be necessary to describe the coupled dynamics.
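For reference, the spin motive force field of a one-dimensional texture is commonly written, before spin-relaxation corrections, as (a standard form; sign conventions vary and the nonadiabatic correction term is omitted):

E_x = (ħ/2e) m · (∂_t m × ∂_x m).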
Self-consistent calculations would also be very important for descriptions of spin transfer torques and spin motive forces in ferromagnetic systems with strong spin-orbit coupling, for example, ferromagnets with Rashba interactions. Obata and Tatara [143], and Manchon and Zhang [144] independently predicted the existence of field-like spin transfer torque induced by in-plane current in Rashba ferromagnets. A number of experimental [145][146][147][148][149][150] and theoretical [151][152][153][154][155][156][157][158][159] studies have followed this work. Miron et al. reported that an in-plane current-induced field-like spin torque is present for Pt|Co|AlO x structures where the inversion symmetry is broken [145]. Miron et al. also reported that a domain wall in such structures moves against the electron-flow direction with high speed [146]. This reversed domain wall motion with high speed cannot be explained by conventional adiabatic and nonadiabatic spin transfer torques, but may be explained by a damping-like spin transfer torque in addition to all other spin transfer torques (i.e., adiabatic, nonadiabatic, and the field-like torques) [156] and the Dzyaloshinskii-Moriya interaction [159]. The damping-like spin transfer torque may originate from a spin Hall effect in a heavy metal layer like Pt [159][160][161][162][163][164][165] and/or a nonadiabatic correction to the field-like torque [155][156][157][158]. This damping-like torque also allows switching the magnetization by in-plane currents [149,164,166].
At present, the appropriate description of this unconventional current-induced magnetization dynamics is still controversial. It is not clear whether an explanation based on the spin Hall effect, Rashba spin-orbit coupling, both, or something else, is appropriate for all experiments or individual experiments. To resolve this controversy, it may be important to develop a model that takes into account both types of spin-orbit effects and computes the properties of spin transfer torques accurately. For instance, we have developed a Boltzmann transport model considering the two sources of spin transfer torques (i.e., the spin Hall effect and Rashba spin-orbit coupling) and found that both sources can generate not only field-like torques but also damping-like torques for thin ferromagnets [165]. In a different approach, we have found [167] that for two-dimensional electron gases and under the assumption that the spin-orbit potential is comparable to the exchange interaction, the field-like spin torque has a complicated dependence on the angle between the current direction and the magnetization direction. In this case, self-consistent calculations are needed to properly take into account the effect of complicated angle-dependent spin transfer torque on current-induced magnetization dynamics. Furthermore, since spin transfer torques and spin motive forces are closely related, a sizable spin transfer torque due to Rashba spin-orbit coupling suggests that the magnetization dynamics in Rashba ferromagnets can generate a large spin motive force [168][169][170]. In this case, the spin motive force may require self-consistent calculations to accurately account for the spin relaxation process since the Rashba spin-orbit coupling correlates the spin directions with the wave vectors.
Recently, the existence of thermal spin transfer torques was experimentally demonstrated in metallic spin valves [175] and theoretically predicted in magnetic tunnel junctions [178]. This type of torque, mediated by magnon and/or spin-wave spin currents, may find use in moving domain walls [179-185]. It is closely related to spin-dependent thermoelectric effects, such as the spin-dependent Seebeck, Peltier, and Nernst effects [186-189]. These heat- and spin-dependent phenomena are largely unexplored at the moment and would therefore require various self-consistent calculations that couple heat, spin, and magnetization dynamics together.
Appendix A. Comparison of the convolution method to a full solution of the spin accumulation profiles in the lateral spin diffusion problem
Ref. [41] introduced a convolution method that speeds up the calculation of the lateral diffusion. Since the speed gain is substantial, it is important to test the validity of the underlying approximations. Here we do so by examining our full solutions of the drift-diffusion equation.
In the convolution method, the spin chemical potential μ_S at a point r is written as a convolution of the magnetization with a kernel K, where K is a 3 × 3 tensor that relates μ_S at r to the magnetization m at a different point r'. Its explicit form is given in Ref. [41]. In the convolution method, the kernel K is assumed to depend on (r − r') but not explicitly on r itself. This assumption leads to a substantial speed-up in computing time because the kernel can be precomputed and the convolution can be evaluated with fast numerical techniques.
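A minimal numerical sketch of such a precomputed-kernel evaluation is given below, using an FFT-based convolution on a two-dimensional grid. The kernel array here is a generic placeholder and does not implement the explicit tensor of Ref. [41].

import numpy as np
from scipy.signal import fftconvolve

def spin_potential_convolution(m, kernel):
    """mu_S[a](r) = sum_b (K[a, b] * m[b])(r), evaluated as 2D FFT convolutions.

    m:      array of shape (3, Ny, Nx), a magnetization map
    kernel: array of shape (3, 3, Ky, Kx), a precomputed response tensor
    """
    mu_s = np.zeros_like(m)
    for a in range(3):
        for b in range(3):
            mu_s[a] += fftconvolve(m[b], kernel[a, b], mode="same")
    return mu_s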
Several approximations underlie this approach. It assumes that the kernel does not change near boundaries in the structure and assumes that the magnetization only has small deviations from the average magnetization.
Here we test the errors introduced by the convolution method in nanopillars in which all of the layers have been patterned (Fig. A1). Figure A1(b) shows the z-component of the spin chemical potential, μ_z, calculated with our approach, in the NM region at the bottom FM|NM interface for two cases: x_0 = 0 (center of the nanopillar) and x_0 = 18 nm (close to an edge). For x_0 = 0, the spin chemical potential profile is symmetric along the lateral direction (i.e., the x-axis), whereas for x_0 = 18 nm it is slightly asymmetric due to the boundary effect, as indicated by the arrows in Fig. A1(b). However, the two approaches agree surprisingly well. In part, this arises because the spin accumulation is much more local than would be expected from the long spin diffusion length: the interface with the ferromagnet and the interface with the reservoir act as effective spin-flip scattering sites. Unless the lead is very thick, the spin diffusion length therefore becomes largely irrelevant compared to the spin-flip scattering at the interfaces.
The convolution approach will break down when the magnetization varies significantly compared to its average value. We illustrate this point in a spin-valve structure with domain walls in both layers. The problem with the convolution method used in Ref. [41] for this situation is that the kernel is for the transverse magnetization based on a solution for the longitudinal transport that is uniform across the device. This assumption is clearly violated in the structure considered here with domain walls (see Fig. A1(c)).
Overall, the convolution method is a convenient approximation to calculate the spin accumulation profiles in some cases because it uses significantly less computation time.
[Displaced figure caption fragment: domain wall motion driven by the local adiabatic spin transfer torque [23,190]; (c) charge-neutrality-broken calculation [116] and (d) charge-neutrality-preserved calculation (our work); only the k-normal channel is considered in (c), whereas the integration over the Fermi surface is performed in (d); K_u is assumed to be 4.5×10^6 J/m^3; the upper panels of (c) and (d) show the domain wall profile.] | v2
2019-05-29T04:28:02.000Z | 2019-05-29T00:00:00.000Z | 168169839 | s2orc/train | Tuning Dirac nodes with correlated d-electrons in BaCo_{1-x}Ni_{x}S_{2}
Dirac fermions play a central role in the study of topological phases, for they can generate a variety of exotic states, such as Weyl semimetals and topological insulators. The control and manipulation of Dirac fermions constitute a fundamental step towards the realization of novel concepts of electronic devices and quantum computation. By means of ARPES experiments and ab initio simulations, here we show that Dirac states can be effectively tuned by doping a transition metal sulfide, BaNiS2, through Co/Ni substitution. The symmetry and chemical characteristics of this material, combined with the modification of the charge transfer gap of BaCo_{1-x}Ni_{x}S_{2} across its phase diagram, lead to the formation of Dirac lines whose position in k-space can be displaced along the Gamma M symmetry direction, and their form reshaped. Not only does the doping x tailor the location and shape of the Dirac bands, but it also controls the metal-insulator transition in the same compound, making BaCo_{1-x}Ni_{x}S_{2} a model system to functionalize Dirac materials by varying the strength of electron correlations.
In the vast domain of topological Dirac and Weyl materials (1-9), the study of the various underlying mechanisms (10-15) leading to the formation of non-trivial band structures is key to discovering new topological electronic states (16-23). A highly desirable feature of these materials is the tunability of the topological properties by an external parameter, which would make them suitable for technological applications, such as topological field effect transistors (24). While a thorough control of band topology can be achieved in principle in optical lattices (25) and photonic crystals (26) through the wandering, merging and reshaping of nodal points and lines in k-space (27,28), in solid state systems such a control is much harder to achieve. Proposals have been made by using optical cavities (29), twisted van der Waals heterostructures (30), intercalation (31), chemical deposition (32,33), impurities (34), and applied magnetic and electric fields (35), both static (36) and time-periodic (17,37). Here, we prove that it is possible to move and reshape Dirac nodal lines in reciprocal space by chemical substitution. Namely, by means of Angle Resolved Photo-Emission Spectroscopy (ARPES) experiments and ab initio simulations, we observe a sizable shift of robust massive Dirac nodes towards Γ in BaCo1−xNixS2 as a function of doping x, obtained by replacing Ni with Co. At variance with previous attempts to control Dirac states by doping (19,38), we report both a reshaping and a significant displacement in k-space of the Dirac nodes.
BaCo1−xNixS2 is a prototypical transition metal system with a simple square lattice (39). In BaCo1−xNixS2 the same doping parameter x that tunes the position of the Dirac nodes also controls the electronic phase diagram, which features a first-order metal-insulator transition (MIT) at a critical substitution level, xcr ∼ 0.22 (40,41), as shown in Fig. 1(a). The Co-rich side (x = 0) is an insulator with collinear magnetic order and with local moments in a high-spin (S=3/2) configuration (42). Both the electron correlation strength and the charge-transfer gap ∆CT increase with decreasing x, as typically found in the late transition metal series. The MIT at x = 0.22 is of interest because it is driven by electron correlations (43) and is associated with a competition between an insulating antiferromagnetic phase and an unconventional paramagnetic semi-metal (44), where the Dirac nodes are found at the Fermi level. We show that a distinctive feature of these Dirac states is their dominant d-orbital character and that the underlying band inversion mechanism is driven by a large d − p hybridization combined with the non-symmorphic symmetry (NSS) of the crystal (see Fig. 1(b)). It follows that an essential role in controlling the properties of Dirac states is played by electron correlations and by the charge-transfer gap (Fig. 1(c)), as they have a direct impact on the hybridization strength. This results in an effective tunability of the shape, energy and wave vector of the Dirac lines in the proximity of the Fermi level. Specifically, the present ARPES study unveils Dirac bands moving from M to Γ with decreasing x. The bands are well explained quantitatively by ab initio calculations, in a hybrid density functional approximation suitable for including non-local correlations of screened-exchange type, which affect the hybridization between the d and p states. The same functional is able to describe the insulating spin-density wave (SDW) phase at x = 0, driven by local correlations, upon increase of the optimal screened-exchange fraction. These calculations confirm that the mobility of the Dirac nodes in k-space stems directly from the evolution of the charge transfer gap, i.e. the relative position between the d and p on-site energies. These results clearly suggest that BaCo1−xNixS2 is a model system to tailor Dirac states and, more generally, that two archetypal features of correlated systems, the hybrid d − p bands and the charge-transfer gap, constitute a promising playground to engineer Dirac and topological materials using chemical substitution and other macroscopic control parameters.
Significance Statement
The on-demand control of topological properties with readily modifiable parameters is a fundamental step towards the design of novel electronic and spintronic devices. Here we show that this goal can be achieved in the correlated system BaCo1−xNixS2, where we succeeded in significantly changing the reciprocal-space position and shape of Dirac nodes by chemically substituting Ni with Co. We prove that the tunability of the Dirac states is realized by varying the electron correlation strength and the charge-transfer gap, both sensitive to the substitution level, x. Based on our finding, a class of late transition metal compounds can be established as prototypical for engineering highly tunable Dirac materials.
Observation of Dirac states in BaNiS 2
We begin with the undoped sample BaNiS2. In Fig. 1(d), we represent a three-dimensional ARPES map of the Brillouin zone (BZ) along the high symmetry directions. Along Γ − M, we observe linearly dispersing bands and, within ARPES resolution, gapless nodes at the Fermi level EF. The Fermi surface reveals two pairs of such Dirac-like crossings, related to each other by time reversal and by the two-fold rotation axis C2 of the C2v little group for the k-vectors along Γ − M. The Dirac nodes lie on the σd reflection planes and extend along the kz direction, piercing the whole BZ, unlike other topological node-line semimetals known to date, like Cu3NPd (46,47), Ca3P2 (48) and ZrSiS (49), where the nodal lines form closed areas around high-symmetry points.
Symmetry analysis of the electronic bands: mechanism of band inversion and formation of Dirac states
To unveil the physical mechanism responsible for the formation of Dirac cones in BaNiS2 we performed a detailed theoretical analysis of the symmetry of the electronic bands. We carried out density functional theory (DFT) calculations, by employing a modified Heyd-Scuseria-Ernzerhof (HSE) functional. The details of the band structure are presented in Methods and SI (see SI Sec. S3, where we also discuss how the inclusion of the spin-orbit coupling (SOC) affects the topological properties). The use of the HSE functional is dictated by non-local correlation effects present in this material. Indeed, a hybrid HSE functional with the optimal screened-exchange fraction α = 7% (see Eq. 1) is needed to account for the Fermi surface renormalization of BaNiS2 seen in quantum oscillations (51).
Previous theoretical calculations (41,52,53) have shown that both S 3p- and Ni 3d-orbitals contribute to the Bloch functions near the Fermi level. We ascribe the electronic states close to the Fermi level mainly to the Ni 3d-orbitals hybridized with the S 3p-orbitals. In this situation, the exchange contribution to the hybridization with the ligands plays a crucial role in determining the topology of the Fermi surface (Fig. S6(b) illustrates the dependence of the electronic structure upon α). Hereafter, we consider a Cartesian reference frame where the x- and y-axes are parallel to the Ni-S bonds in the tetragonal ab-plane. Neighbouring Ni ions are aligned along the diagonal xy direction (Fig. 1(b)). In this frame, at the crossing points, located along the (u, u, v) directions, the bands have dominant d z 2 and d x 2 −y 2 character. This multi-orbital nature was confirmed by a polarization dependent laser-ARPES study (see SI, Sec. S4).
As sketched in Fig. 2(a), the crystal structure of BaNiS2 is made of square-lattice layers of staggered, edge-sharing NiS5 pyramids pointing along the out-of-plane [001] c-axis direction (40). The Ni atoms inside the S pyramids probe a crystal field that splits the atomic d-shell into the following levels (in descending energy order): d x 2 −y 2 , d z 2 , the degenerate doublet (dxz, dyz) and dxy. Due to the 3d 8 4s 0 electronic configuration of the Ni 2+ ion, we expect all d-orbitals to be filled, except the two highest ones, d x 2 −y 2 and d z 2 , which are nearly half-filled assuming that the Hund's exchange is sufficiently strong.
The puckering of the BaNiS2 layers gives rise to a tetragonal nonsymmorphic P 4/nmm structure characterized by a horizontal gliding plane which generates two Ni and two apical S positions at (1/4,1/4,z) and (3/4,3/4,−z), separated by a fractional f=(1/2,1/2,0) translation in the plane, Fig. 2(a). The two Ni atoms occupy Wyckoff position 2c, corresponding to the M symmetry, while the two planar S are at the 2a site corresponding to the Γ symmetry.
At M, the energy hierarchy of the atomic orbitals follows closely the crystal field splitting (Fig. 2(b)). The little group admits the following four 2D irreducible representations (irreps), E_Mi with i = 1, . . . , 4 (54), each originating from the same orbitals of the two inequivalent Ni. However, the level stacking at Γ, whose little group is isomorphic to D 4h, differs from that predicted by the crystal field. This is due to the sizable hybridization of the Ni d-orbitals with the S p ligands (see SI, Sec. S5).
Owing to the NSS, each Bloch eigenfunction at Γ is either even or odd upon exchanging the inequivalent Ni and S within each unit cell. Even and odd combinations of identical d-orbitals belonging to inequivalent Ni atoms split in energy since they hybridize differently with the ligands. The even combination of the d x 2 −y 2 Ni orbitals is weakly hybridized with the pz-orbitals of the planar S, since the two Ni atoms are out of the basal plane. On the other hand, the odd combination is non-bonding. It follows that the B1g even combination shifts up in energy with respect to the B2u odd one. Similarly, the A2u odd combination of the d z 2 -orbitals hybridizes substantially with the pz-orbitals of the planar and apical S, thus increasing significantly the energy of the odd combination. Eventually, its energy rises above the B1g and B2u levels, as well as the A1g state (even combination of d z 2 -orbitals). This leads to a reversal of the crystal field order, as reported in Fig. 2(c).
Because the irreps at the A and Z k-points are equivalent to those at M and Γ (54), respectively, the orbital hierarchy found at M and Γ must be preserved along the M − A and Γ − Z directions. Thus, for any v along the (0, 0, v) → (1/2, 1/2, v) path, a band inversion between bands with predominant d z 2 and d x 2 −y 2 characters must occur. Therefore, band crossing is allowed without SOC, and leads to two Dirac points at a given kz, lying right at the Fermi energy for kz = 0. Indeed, the crossing bands transform like different irreps of the little group, which is isomorphic to C2v for a k-point (u, u, v) with v = 0, 1/2, and to Cs with v ∈ ]0, 1/2[. These Dirac nodes are massive as a consequence of the SOC, which makes the material a weak topological insulator. The SOC gap is however very small (about 18 meV), and below ARPES resolution. Nevertheless, the focus of the present work is not on these very-low-energy features, but rather on the tunability of the whole Dirac nodal structure. In the family of weak topological insulators having the same P 4/nmm space group and showing SOC-gapped Dirac cones along the Γ − M direction (such as ZrSiS, for instance), BaCo1−xNixS2 is a peculiar member. Indeed, the strong local Hund's exchange coupling favors nearly half-filled d x 2 −y 2 and d z 2 orbitals, which explains the proximity of the Dirac nodes to the Fermi level for x = 1, in accordance also with Luttinger's theorem (see SI, Sec. S6). This is another signature of the relevance of electron correlations in this transition metal compound, which manifest themselves in both local and non-local contributions, the former leading eventually to the insulating phase at the Co side of BaCo1−xNixS2, the latter affecting the variation of ∆CT across the series.
ARPES evidence of Dirac states tuned by doping, x
We now turn our attention to the effect of the Co/Ni substitution on the evolution of the band structure, notably of the Dirac states. According to the BaCo1−xNixS2 phase diagram, this substitution modifies the strength of the electron-electron correlations and the amplitude of ∆CT. A series of ARPES spectra is given for the x = 0.75 and x = 0.3 compositions. In Fig. 3, we display the evolution with x of the Fermi surface and of the electronic band structure along Γ − M. For x = 0.75, the Fermi surface is composed of a four-leaf feature at the Γ point and four hole-like pockets along Γ − M, Fig. 3(c). These pockets originate from the Dirac states crossing the Fermi level. The Dirac cone is shown in Fig. 3(d) along and perpendicular to the Γ − M direction. At higher substitution levels, for x = 0.30, the Dirac states shift up to lower binding energies, so the size of the hole-like pockets in the kx − ky plane is increased (see Fig. 3(e,f)). The ARPES signal is also broader: since our structural study indicates that the crystalline quality is not affected by Co/Ni substitution (see Sec. S7 and Tab. S2 in SI), this broadening is consistent with the increase in electron-electron correlations while approaching the metal-insulator transition (39,43,52). On theoretical grounds, this is expected because Co substitution brings the whole d-manifold closer to fillings where local correlation effects are enhanced, according to the Hund's metals picture (55). Fig. 4(a) schematically illustrates the evolution of the Dirac cone with x; in Table 1 we give the position of the Dirac points determined by extrapolating the band dispersion. In summary, one notes that the Co substitution moves the Dirac points further beyond the Fermi level and reduces their wave vector.
Evolution of Dirac states with doping
In order to account for the tunability of the Dirac cones detected by ARPES, we carried out extensive ab initio DFT-HSE calculations as a function of the screened-exchange fraction α, which controls the correlation strength in the modified hybrid functional framework. To explicitly include the variation of the charge transfer gap ∆CT driven by chemical substitution, we computed the two end-members of the BaCo1−xNixS2 series, namely x = 1 (BaNiS2) and x = 0 (BaCoS2). For x = 1 the optimal α = 7%, since it reproduces the frequencies of quantum oscillations in BaNiS2 (51). In order to fix such a percentage for x = 0, we performed ab initio calculations assuming the collinear SDW observed experimentally (42,43). GGA+U correctly predicts an insulating state (Fig. 4(e)). By varying the percentage α of screened exchange in HSE, we find that, while α = 7% gives a metal, α ≈ 19% reproduces the main peaks across the gap obtained by GGA+U (Fig. 4(e)). This result suggests that HSE can describe BaCo1−xNixS2 only if the percentage of screened exchange α increases from 7% up to around 19% with decreasing x from 1 to 0. Starting from the most correlated Co side, the reduction of the Hubbard repulsion upon electron doping, implied by the α dependence on x, has been found in other strongly correlated compounds, such as La-doped Sr2IrO4 (56).
In BaCoS2, beside the SDW solution compatible with the observed low-temperature state, it is possible to obtain another one once magnetism is not allowed, namely by forcing spin SU(2) symmetry. This paramagnetic metallic (PM) phase is metastable at low temperature, and adiabatically connected with the metallic solution at x = 1. Therefore, it hosts Dirac cones; it is metallic and separated by an energy barrier from the stable insulating SDW phase. In Fig. 4(d), we plot the distance of the Dirac node (k_Dirac) from the Γ point as a function of α, for x = 1 and for the metallic solution at x = 0. k_Dirac strongly depends on both x and α (see Sec. S8; Figs. S6(a) and S6(b) show the band structures from which the k_Dirac values have been extracted). By taking the optimal α for each x, the Dirac node is predicted to drift from k_Dirac ≈ 0.52 Å −1 at x = 1 down to k_Dirac ≈ 0.38 Å −1 at x = 0, covering the colored y-axis range in Fig. 4(d), in agreement with the range of variation seen in experiment.
Next, we analyze the 22-band full d − p tight-binding model derived from the ab initio DFT-HSE calculations for x = 1 (with α = 7%) and for x = 0 (with α = 19%), cf. Sec. S9 and Fig. S8. The x = 0 state has Dirac cones shifted in both k and energy position with respect to the BaNiS2 parent compound. To underpin the mechanism behind the evolution of the cones, we compared the two tight-binding Hamiltonians for x = 0 and x = 1. The main difference involves the on-site energies and, in particular, the relative position of the p and d states, i.e. the charge transfer gap ∆CT. This proves that the doping x via chemical substitution is indeed an effective control parameter, as it alters the d − p charge transfer gap ∆CT together with the correlation strength and, consequently, the d − p hybridization amplitude, which directly affects the position and shape of the Dirac nodes.
In the following, we define ∆CT as the energy difference between the average energy position of the full d manifold and the average one of the p manifold. According to our HSE calculations, ∆CT varies from 1.1 eV (x = 1) to 1.6 eV (x = 0). Assuming a linear variation of ∆CT and on-site energies upon Ni-content x, we are able to estimate ∆CT = ∆CT (x) and, thus, predict the evolution of the band structure and Dirac states by interpolating between the BaCoS2 and BaNiS2 TB models. This evolution is reported in Fig. 4(b), while the actual Dirac states dynamics -represented by the behavior of both the k and energy position of the Dirac point as a function of ∆CT -is plotted in Fig. 4(c). This shows that the tunability upon doping found experimentally does not merely consist of a rigid shift of the Dirac cones (19), but it involves the change of both their shape and k-position (see also Fig. S8).
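A minimal numerical sketch of this interpolation is given below; it is our schematic reading of the procedure, not the actual code used for Fig. 4. The 22-band Hamiltonians for BaNiS2 and BaCoS2 are stood in for by toy Hermitian matrices, and n_filled is a hypothetical band index; only the linear mixing of the matrix elements with x and the scan for the minimum direct gap along Γ-M mirror the text.

import numpy as np

def toy_tb_model(seed, nbands=22):
    # Stand-in for a Wannier-derived tight-binding model H(k); replace these toy
    # matrices with the actual interpolated Hamiltonians to reproduce the analysis.
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(nbands, nbands)); a = (a + a.T) / 2
    b = rng.normal(size=(nbands, nbands)); b = (b + b.T) / 2
    return lambda k: a + b * np.cos(np.dot(k, np.array([1.0, 1.0, 0.0])))

def h_interp(k, x, h_k_x0, h_k_x1):
    # Linear interpolation of on-site energies and hoppings between x = 0 and x = 1.
    return (1.0 - x) * h_k_x0(k) + x * h_k_x1(k)

def dirac_node_position(x, h_k_x0, h_k_x1, n_filled, nk=200):
    # Scan k along Gamma-M and return the wave vector of the minimum direct gap
    # between the highest filled and lowest empty band (a proxy for the node).
    gamma, m_point = np.zeros(3), np.array([0.5, 0.5, 0.0])
    ks = [gamma + t * (m_point - gamma) for t in np.linspace(0.0, 1.0, nk)]
    gaps = [np.diff(np.linalg.eigvalsh(h_interp(k, x, h_k_x0, h_k_x1)))[n_filled - 1]
            for k in ks]
    i_min = int(np.argmin(gaps))
    return np.linalg.norm(ks[i_min]), gaps[i_min]

h_x0, h_x1 = toy_tb_model(0), toy_tb_model(1)
print(dirac_node_position(x=0.5, h_k_x0=h_x0, h_k_x1=h_x1, n_filled=11))

With the real Wannier Hamiltonians in place of the toy models, sweeping x between 0 and 1 traces the drift of k_Dirac and of the node energy reported in Fig. 4(b,c).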
This theoretical prediction is in good agreement with the observed evolution of the Dirac cone with x, as apparent in Fig. 4(a). Such movable Dirac nodes in k-space have recently attracted a great deal of interest from theory (15,28,57), as well as in the context of optical lattices (25) and photonic crystals (26). The present system offers the opportunity of observing in a real material how a simple experimental parameter, chemical substitution, can be used to tune Dirac states.
Manipulating the shape and position of the Dirac cones should also be possible in BaCo1−xNixS2 using pressure in bulk samples or strain in thin films. Specifically, strain can be used to distort the square lattice, thus breaking one of the symmetries that protect the fourfold Dirac nodal lines. Non-trivial phases, such as Weyl semimetals, could then be triggered by time-inversion breaking perturbations, like an external electromagnetic field. A further possibility is the creation of spin-chiral edge states thanks to the proximity of the material to a topological insulator.
Conclusion
In conclusion, we have shown that BaCo1−xNixS2 offers the opportunity of effectively tuning Dirac bands by exploiting a peculiar inversion mechanism of d-electron bands. Namely, the Co/Ni substitution has been found to alter both the charge transfer gap and the strength of the electron-electron correlations that control the position and shape of the bands. Remarkably, the same Co/Ni substitution makes it possible to span the electronic phase diagram, with the Dirac states present across its metallic phase. We emphasize the applicability of the present approach to a wide class of materials described by d − p effective Hamiltonians, thus enabling new Dirac states to be forged under the control of chemical substitution. This opens the perspective of engineering Dirac states in correlated electronic systems by exploiting macroscopically tunable parameters.
Materials and Methods
ARPES measurements. Single crystals of BaCo1−xNixS2 were cleaved in-situ, exposing the ab plane under UHV conditions (base pressure better than 10 −11 mbar). Most of the synchrotron radiation ARPES measurements were performed on the Advanced Photoelectric Effect (APE) beamline at the Elettra light source, with a linearly polarized beam and different photon energies. The sample temperature was 70 K. The data were collected with a VG-DA30 Scienta hemispherical analyzer that operates in deflection mode and provides high-resolution two-dimensional k-space mapping while the sample geometry is fixed (58). The total measured energy resolution is ∼15 meV and the angular resolution is better than 0.2°. Some of the data were also acquired with a 6.2 eV laser source (59), and some at the Spectromicroscopy beamline (60): the end station hosts two exchangeable multilayer-coated Schwarzschild objectives (SO) designed to focus the radiation at 27 eV and 74 eV to a small spot (∼600 nm). The photoelectrons are collected by an internal movable hemispherical electron energy analyzer that can perform polar and azimuthal angular scans in UHV. The energy and momentum resolutions are ∼33 meV and ∼0.03 Å −1, respectively.
Ab initio
The screened interaction is written as V_screened(r) = erfc(ωr)/r, where erfc is the complementary error function, and ω = 0.108 in atomic units, i.e. the standard HSE value. In this work, α is instead taken as an adjustable parameter, which depends on the correlation strength of the system.
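For completeness, and as our reconstruction rather than a quotation of Eq. 1, the modified functional is presumably of the standard HSE form, in which the fixed 25% short-range Hartree-Fock fraction is replaced by the adjustable mixing parameter α:

E_{xc}^{\mathrm{HSE}(\alpha,\omega)} = \alpha\, E_{x}^{\mathrm{HF,SR}}(\omega) + (1-\alpha)\, E_{x}^{\mathrm{PBE,SR}}(\omega) + E_{x}^{\mathrm{PBE,LR}}(\omega) + E_{c}^{\mathrm{PBE}}, \qquad V_{\mathrm{screened}}(r) = \frac{\mathrm{erfc}(\omega r)}{r}.

Setting α = 0.25 recovers standard HSE06, while α = 7% and α ≈ 19% are the values adopted here for x = 1 and x = 0, respectively.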
We used the Quantum Espresso package (63,64) to perform modified HSE calculations for BaNiS2 (x = 1) and BaCoS2 (x = 0) in a plane-wave (PW) basis set. The geometry of the cell and the internal coordinates are taken from experiment (45). We replaced the core electrons of the Ni, Co, Ba, and S atoms by norm-conserving pseudopotentials. For the Ni (Co) pseudopotential, we used both fully- and scalar-relativistic versions, with 10 (9) valence electrons and nonlinear core corrections. The Ba pseudopotential includes the semicore states, while the S pseudopotential has 3s^2 3p^4 valence electrons. We employed an 8 × 8 × 8 electron-momentum grid and a Methfessel-Paxton smearing of 0.01 Ry for the k-point integration. The PW cutoff is 60 Ry for the wave function. The non-local exchange terms of the HSE functional are computed through the fast implementation of the exact Fock energy (64), based on the adaptively compressed exchange scheme (65). In the non-local Fock operator evaluation, the integration over the q-points is downsampled on a 8 × 8 × 2 grid. We applied a half-a-grid shift in the z direction to minimize the number of nonequivalent momenta in the k + q grid. By means of the Wannier90 code (66), we performed a Wannier interpolation of the ab initio bands for x = 1 in the energy window spanned by the d − p manifold, to accurately resolve the band structure, chemical potential, and Fermi surface, and to derive a minimal TB model.
To successfully deal with the most demanding simulations (HSE functional evaluated in a larger cell with spin-resolved orbitals), we supplemented the Quantum Espresso calculations with some performed by means of the Crystal17 package (67), particularly suited to efficiently compute the exact exchange operator. In this framework, we used scalar-relativistic Hartree-Fock energy-consistent pseudopotentials by Burkatzki, Filippi, and Dolg (68), and an adapted VTZ Gaussian basis set for both Ni and Co. In our Crystal17 calculations, the k-grid has been set to a 32 × 32 × 32 dense mesh, with a Fermi smearing of 0.001 Hartree. We cross-checked the Crystal17 and Quantum Espresso band structures for the paramagnetic phase of BaNiS2 and BaCoS2, in order to verify the convergence of all relevant parameters in both PW and Gaussian DFT calculations.
ACKNOWLEDGMENTS. This work was supported by "Investissement d'Avenir" Labex PALM (ANR-10-LABX-0039-PALM), by the Region Ile-de-France (DIM OxyMORE), and by the project CALIP-SOplus under Grant Agreement 730872 from the EU Framework Programme for Research and Innovation HORIZON 2020. We acknowledge Benoît Baptiste for XRD characterization and Imène
Estève for her valuable assistance in the EDS study. M.C. is grateful to GENCI for the allocation of computer resources under the project N. 0906493. M.F. and A.A. acknowledge support by the European Union, under ERC AdG "FIRSTORM", contract N. 692670. | v2 |
2020-05-01T01:00:56.751Z | 2020-04-30T00:00:00.000Z | 216914029 | s2orc/train | You are right. I am ALARMED -- But by Climate Change Counter Movement
The world is facing the challenge of the climate crisis. Despite the consensus in the scientific community about anthropogenic global warming, the web is flooded with articles spreading climate misinformation. These articles are carefully constructed by climate change counter movement (CCCM) organizations to influence the narrative around climate change. We revisit the literature on climate misinformation in the social sciences and repackage it to introduce it to the NLP community. Despite considerable work on the detection of fake news, there is no misinformation dataset available that is specific to the domain of climate change. We try to bridge this gap by scraping and releasing articles with known climate change misinformation.
Introduction
Climate change is one of the biggest challenges threatening the world, and we are at the defining moment. Rising sea levels, melting polar ice, changing weather patterns, severe droughts, and extinction of species are just some of the dreadful effects of this crisis. The Intergovernmental Panel on Climate Change (IPCC) in its 5th assessment report categorically concluded that humans are the main culprit and there is a need to limit global warming to less than 2°C.¹ More recently, anthropogenic climate change has been at the heart of the Australian bushfires (van Oldenborgh et al., 2020), leading to the destruction of 17 million hectares of land and the death of a billion animals.² During these times, we see articles with headlines such as "Climate change has caused more rain, helping fight Australian wildfires" spreading misinformation to influence the narrative of climate change.³

1 https://www.ipcc.ch/report/ar5/wg1/
2 https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp
3 https://www.heartland.org/news-opinion/news/climate-change-has-caused-more-rain-helping-fight-

Table 1 (example excerpts of climate change misinformation):
"German evolutionary biologist and physiologist Prof. Dr. Ulrich Kutschera told in an interview that CO2 is a blessing for mankind and that the claimed 97% consensus among scientists is a myth. ... he rejected extremes, among them the climate alarmists who predict a fictitious, imminent earth heat death and thus practice a kind of religious cult."
"New Zealand schools to terrify children about the climate crisis. Who cares about education if you believe the world is ending? What will it take for sanity to return? Global cooling? Another Ice Age even? The climate lunatics ... encourage them to wag school to protest for more action."

The rise of this misleading information is part of a carefully crafted strategy by climate change counter movement (CCCM) organizations (Dunlap and Jacques, 2013; Boussalis and Coan, 2016; Farrell, 2016; McKie, 2018). These organizations use a formula consisting of a narrative structured around the principal ingredients of disinformation, misinformation, propaganda and hoax, sprinkled with the stylistic elements of sensationalism, melodrama, clickbait and satire, as can be seen in the examples in Table 1. Their approach broadly mirrors that seen in fake news in the political arena (Rashkin et al., 2017), but is specifically tailored to the domain of climate change. This motivates the development of applications that can inform users via an automatic detection or alert system, similar to what we have seen for fake news (Rashkin et al., 2017; Pérez-Rosas et al., 2017; Jiang and Wilson, 2018).
The lack of annotated fake news data spurred the creation of misinformation datasets. The first public datasets for fake news detection (Vlachos and Riedel, 2014) and claim/stance verification (Ferreira and Vlachos, 2016) are moderately small, with 221 and 300 instances, respectively. More recently, larger datasets have been developed, such as LIAR (Wang, 2017), collected from PolitiFact and labelled with 6 levels of veracity, and FEVER (Thorne et al., 2018), a dataset generated from Wikipedia with supported, refuted and not enough info labels. Extending the task to full articles, FakeNewsNet is a valuable resource (Shu et al., 2017, 2018). But to the best of our knowledge there is no misinformation dataset that is specific to the domain of climate change. We attempt to fill this gap by releasing a large set of documents with known climate change misinformation.
Related Work
The way the public perceives and reacts to the constant supply of information around climate change is a function of how the facts and narrative are presented to them (Fløttum, 2014; Fløttum et al., 2016). Fløttum (2017) emphasizes that language and communication around climate change are significant, as climate is not just a physical science but has political, social and ethical aspects, and involves various stakeholders, interests and voices. A range of corpus linguistic methods have been used to study the topical and stylistic aspects of language around climate change. Tvinnereim and Fløttum (2015) proposed the use of structured topic modelling (Roberts et al., 2014) to derive insights about public opinion from 2115 open-ended survey responses. Salway et al. (2014) leveraged unsupervised grammar induction and pattern extraction methods to find common phrases in climate change communication. Atanasova and Koteyko (2017) manually analysed frequently-used metaphors in editorials and op-eds, and concluded that the communication in the Guardian (U.K.) was predominantly war-based (e.g. threat of climate change), in the Süddeutsche (Germany) based on illness (e.g. earth has fever), and in the NYTimes (U.S.A.) based on the idea of a journey (e.g. many small steps in the right direction).
In linguistics, style broadly refers to the properties of a sentence beyond its content or meaning (Pennebaker and King, 1999), and stylistic variation plays an important role in the identification of misinformation. Biyani et al. (2016) studied stylistic aspects of clickbait, formalised them into 8 different categories ranging from exaggeration to teasing, and proposed a clickbait classifier based on novel informality features. Similarly, Kumar et al. (2016) examined the unique linguistic characteristics of hoax documents in Wikipedia and built a classifier using a range of hand-engineered features. Rashkin et al. (2017) proposed using stylistic lexicons (e.g. Linguistic Inquiry and Word Count (LIWC)), subjective words, and intensifying lexicons for fact checking, and demonstrated that words used to exaggerate, like superlatives, subjectives, and modal adverbs, are prominent in fake news, whereas trusted sources are dominated by assertive words. Wang (2017) experimented with detecting fake news using metadata features with convolutional neural networks adapted for text (Kim, 2014).
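To make this family of approaches concrete, the following sketch (an illustrative baseline of our own, not the system of any of the cited works) combines bag-of-words features with a few shallow stylistic counts (exclamation marks, question marks, fully capitalised tokens) and trains a logistic-regression classifier with scikit-learn.

import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def stylistic_counts(texts):
    # Crude style features: counts of '!', '?', and all-caps tokens per document.
    rows = []
    for t in texts:
        caps = sum(1 for w in t.split() if len(w) > 2 and w.isupper())
        rows.append([t.count("!"), t.count("?"), caps])
    return csr_matrix(np.array(rows, dtype=float))

# Toy corpus standing in for CCCM vs. trusted articles (label 1 = misinformation).
docs = ["Climate ALARMISTS predict a fictitious heat death!!",
        "The IPCC report summarises peer-reviewed evidence on warming.",
        "Another Ice Age even? The climate lunatics strike again!",
        "Sea level rise is measured with satellite altimetry."]
labels = [1, 0, 1, 0]

tfidf = TfidfVectorizer(ngram_range=(1, 2))
features = hstack([tfidf.fit_transform(docs), stylistic_counts(docs)])
clf = LogisticRegression(max_iter=1000).fit(features, labels)

new_doc = ["The claimed 97% consensus is a myth!"]
x_new = hstack([tfidf.transform(new_doc), stylistic_counts(new_doc)])
print(clf.predict(x_new))

Lexicon-based features such as LIWC categories or subjectivity scores would slot into stylistic_counts in the same way.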
Although articles with misinformation are predominantly human-written, the recent emergence of large pre-trained language models means they can now be automatically generated. Radford et al. (2019) introduced a large auto-regressive model (GPT2) with the ability to generate high-quality synthetic text. One limitation of GPT2 is its inability to perform controlled generation for a specific domain, and Keskar et al. (2019) proposed a model to tackle this. Building on this further, Dathathri et al. (2019) introduced a plug and play language model, where the language model is a pretrained model similar to GPT2 but with controllable components that can be fine-tuned through attribute classifiers.
Climate Change Counter Movement Organisations
Despite the findings of the IPCC's 5th Assessment Report and a more than 97 percent consensus in the scientific community supporting anthropogenic global warming (Cook et al., 2013), coordinated efforts to tackle the climate crisis are lacking. This can be attributed to the rise of opposing voices including the fossil fuel lobby, conservative thinktanks, big corporations, and digital/print media questioning the science and research around climate change. These organisations are collectively referred to as climate change counter movement ("CCCM") organizations (Oreskes and Conway, 2010; Dunlap and Jacques, 2013; Farrell, 2016; Boussalis and Coan, 2016; McKie, 2018). McKie (2018) argues that the motivation behind these organizations is to maintain the status quo of the hegemony of fossil fuel-based neo-liberal global capitalism. These organizations are found around the globe and can masquerade as philanthropic organizations to fund climate misinformation (Farrell, 2019), hide behind libertarian ideas (McKie, 2018) to question the scientists, and augment scepticism to promote pseudo-science or 'alternative facts'. Some of these organizations have catchy names such as carbonsense.com or friendsofscience.org, and organize their own scientific conferences. Oreskes and Conway (2010) concluded in their analysis that the strategies employed by CCCM organizations to construct the narrative to spread misinformation resemble the ones historically used by the tobacco lobby. For instance, targeting researchers and questioning the methodology of their research, and blaming scientific standards, are consistent strategies used by both CCCM and tobacco lobby groups (Oreskes and Conway, 2010; McKie, 2018). Dunlap and Brulle (2015), Farrell (2016) and Boussalis and Coan (2016) categorized their misinformation arguments into two frames: science and policy. Science frame arguments question the scientific facts and deliberately plant a lie to sway the public towards pseudo-science, whereas policy frame arguments target issues of cost and economy (e.g. carbon tax) or pass the blame for action to other nations. We present several examples of arguments in the science and policy frames in Table 2.
We believe the narrative of CCCM articles has two aspects: topical and stylistic. The topical aspects describe common issues discussed in CCCM articles (e.g. carbon tax, fossil fuel, and renewable energy). The stylistic aspects capture how the narrative is presented, e.g. the use of exaggeration and sensationalism, as evident in the examples in Table 1, and bear similar characteristics to fake news.
Dataset
To construct our dataset, we scrape articles with known climate change misinformation from 15 different CCCM organizations. These organizations are selected from three sources: (1) McKie (2018); (2) desmogblog.com,⁴ a website that maintains a database of individuals and organizations that have been identified as perpetuating climate disinformation; and (3) organizations cited on the websites selected from the above two sources. A number of considerations were made when developing the dataset: • We only scrape articles from organizations active in English-speaking countries: the United States, Canada, the United Kingdom, Australia, and New Zealand.
• A considerable number of organizations are either dormant or have a very low level of activity. To make sure our dataset is up to date, we only scrape articles from organizations with a reasonable level of activity, e.g. they publish at least 1 article every month, and their latest publication is in 2020.
• We set a minimum and maximum threshold of 10 and 400 articles respectively for each organization. We set a maximum threshold so as to avoid bias towards one organization. Note, however, that there is a considerable variance in the article length for different organizations. For instance, one organization with only 10 articles has an average length of 342.1 words, while another organization with 400 articles has an average length of 85.8 words.
• As explained in Section 3, counter climate arguments can be broadly categorised into the science and policy frames. As organizations generally prefer one type of frame in their narrative, we manually identify frames associated with organizations, and select a set of organizations that produces a balanced representation of both frames in the dataset.
We split the documents into training and test partitions at the organization level, where the training set comprises 12 organizations and the test set the remaining 3. We present some statistics for the training documents in Table 3, and the list of organizations is provided in the supplementary file.
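A schematic version of these filtering rules and of the organization-level split is sketched below; the field names and the helper data structure are hypothetical, and only the thresholds (10-400 articles, activity up to 2020, at least one article per month) and the per-organization partitioning mirror the description above.

from dataclasses import dataclass
from typing import List

@dataclass
class Organization:
    name: str
    articles: List[str]          # raw article texts
    latest_year: int             # year of the most recent publication
    posts_per_month: float       # average publication rate

def keep(org: Organization) -> bool:
    # Activity and size thresholds used when building the corpus (see text).
    return (org.latest_year >= 2020 and org.posts_per_month >= 1.0
            and len(org.articles) >= 10)

def build_splits(orgs: List[Organization], n_train_orgs: int = 12, cap: int = 400):
    kept = [o for o in orgs if keep(o)]
    train, test = kept[:n_train_orgs], kept[n_train_orgs:]
    # Cap the number of articles per organization to avoid biasing the corpus.
    train_docs = [a for o in train for a in o.articles[:cap]]
    test_docs = [a for o in test for a in o.articles[:cap]]
    return train_docs, test_docs

# Example: two toy organizations, only one active enough to be kept.
orgs = [Organization("org-a", ["doc"] * 50, 2020, 3.0),
        Organization("org-b", ["doc"] * 5, 2018, 0.2)]
print(len(build_splits(orgs, n_train_orgs=1)[0]))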
Test data
We extend our test set to include documents that do not have climate change misinformation, to create a standard evaluation dataset for climate change misinformation detection. We collect documents from reputable sources, some of which are not climate-related, and some are satirical in nature. Sources of the full test documents are as follows: • Guardian: A trusted source for independent journalism. We scrape articles under the category of climate change from both its U.K. and Australian editions. These articles test whether a detection system can correctly identify these articles as not having climate change misinformation.
• BBC: Similar to Guardian, we scrape articles from their website under the category of climate change.
• Newsroom: This is a dataset released by Grusky et al. (2018) and consists of articles and summaries compiled from 38 different publications. 5 We take a random sample of articles which are not climate related. These articles test whether a detection system is able to identify non-climate-related articles as not having climate change misinformation.
• Beetota Advocate: This is an Australian satirical website which publishes articles on current affairs happening locally and internationally;⁶ we scrape articles related to climate change. Although there is a tone of sensationalism in the writing, the articles are created with the intent of humour. An example of a Beetota Advocate article is given in Table 4. These articles test whether detection models are able to distinguish them from CCCM articles, as both have similar stylistic characteristics.

Table 4 (example Beetota Advocate article, excerpt): "PM Meets With Cricket Side To Discuss The 1.7m Hectares Of NSW Forests Destroyed By Bushfires. Not even six months after being officially elected as the Australian Prime Minister with absolutely no policies, let alone any acknowledgement of his government's denialism-led inaction on climate change, Scott Morrison has today had the opportunity to meet some more sportsmen! While the drought-stricken communities of rural Australian continue to burn at the hands of record-breaking and out-of-control bushfires, ScoMo has today met with the Australian cricket side for his ideal media appearance.. ...... The cricketers appeared distressed while also having to pose for goofy photos with the Prime Minister, ...... planet's temperature that will result in the certain deaths of the billions of people that haven't been given permission to join Gina Rinehart and her Liberal Party employees in the spaceship."
• Sceptical Science Arguments (SSA) and Sceptical Science Blogs (SSB): This resource focuses on explaining what science says about climate change. 7 It publishes general climate blogs and counters common climate myths by putting forth arguments backed by peer-reviewed research.
• CCCM: These are articles from the 3 CCCM organizations, as detailed in Section 4. These documents are the only documents with climate change misinformation in the test data.
Conclusion
We introduced climate misinformation as a domain of fake news to the NLP community. We explored the literature around the emergence of CCCM organizations, and the strategies and linguistic elements used by these organizations to construct a narrative of climate misinformation. To help counter its spread, we scrape articles from known sources of misinformation and release them to the community. | v2
2012-10-19T02:45:00.000Z | 2012-05-07T00:00:00.000Z | 42207489 | s2orc/train | The fate of non-trivial entanglement under gravitational collapse
We analyse the evolution of the entanglement of a non-trivial initial quantum field state (which, for simplicity, has been taken to be a bipartite state made out of vacuum and the first excited state) when it undergoes a gravitational collapse. We carry out this analysis by generalising the tools developed to study entanglement behaviour in stationary scenarios and making them suitable to deal with dynamical spacetimes. We also discuss what kind of problems can be tackled using the formalism spelled out here as well as single out future avenues of research.
Introduction
The question of how entanglement behaves in non-inertial frames and in curved spacetimes has been around for a fairly long time already. There are many works that centre on the study of uniformly accelerated observers (among many others [1,2,3,4,5,6,7,8,9,10]), or on the background of a stationary eternal black hole [11]. There are also some studies involving entanglement dynamics in expanding-universe scenarios which have shown that the interaction with the gravitational field can produce entanglement between quantum field modes [12,13].
Focusing on the problem of gravitational collapse, previous works in the literature analysed the correlations between the outgoing and infalling modes in a gravitational collapse when the initial state is the vacuum (see for example [14,15,16,17,13] again among many others).
In this work we consider the following more involved but central issue as far as the behaviour of entanglement in a dynamical spacetime is concerned. In the asymptotic past the field lives in a flat spacetime and its state has some degree of quantum entanglement between two of its modes. Then at some point, gravitational collapse occurs. The collapse makes the observers of the field unable to access the full state due to the formation of an event horizon. This has an impact on the entanglement that any observer of the field state can acknowledge.
Studying this sort of problem is interesting from many perspectives apart from understanding how quantum correlations behave in dynamical curved spacetimes. Quantum entanglement plays a key role in black hole thermodynamics and in the fate of information in the presence of horizons. Also, the study of the behaviour of non-trivial quantum entanglement in gravitational collapse may be useful for analog gravity proposals that aim at making use of this entanglement as a resource to check genuinely quantum effects derived from the formation of a horizon [17]. In general, this will arguably constitute a rather difficult exercise. However, inspired by tools developed to study the effect of accelerations on quantum entanglement, it may be possible to shed some light on this problem.
In the study of quantum entanglement from non-inertial perspectives, i.e. in the context of relativistic quantum information, it was not until relatively recently that the physical meaning of the so-called 'single mode approximation' was analysed in detail. This approximation was introduced in [1,18]. It consisted in assuming that the Bogoliubov transformations between Minkowski and Rindler modes do not mix frequencies. In 2010, appropriate procedures to construct inertial modes which transform to monochromatic Rindler modes were introduced in the context of relativistic quantum information for the accelerated scenario [7], as well as in the stationary Schwarzschild scenario [11]. Nowadays we are taking steps towards the analysis of localised states [19] and of entanglement behaviour for modes contained in cavities [9,10]. Although there are still a number of open questions about the analysis of localised field states and their possible experimental implementability, the two milestones [1,18] and [7] have enabled us to understand better the way in which entanglement behaves from non-inertial perspectives. Hence, we analyse here the fundamental and qualitative effect of a dynamical gravitational collapse on bipartite entanglement contained in non-trivial quantum field states (that involve vacuum and excited states) prior to the collapse.
In Sec. 2 we introduce the basic spacetime and quantum-field ingredients and tools to analyse the fate of entanglement in a gravitational collapse scenario. Section 3 is devoted to the study of the evolution of the entanglement of a specific bipartite quantum field state made out of vacuum and an excited state (to our knowledge, this is the first time that this kind of non-trivial entanglement in a dynamical spacetime is analysed). Section 4 contains the conclusion and some lines of future research in this context.
Gravitational Collapse
We will consider a certain maximally entangled state of two modes of the field. This entangled state lives in a spacetime that is originally flat. At some point, a perturbation is produced causing the spacetime to undergo a process of gravitational collapse. This scenario would very well describe the process of an astrophysical stellar collapse: Prior to the collapse the density of a star is small enough to consider that the spacetime is approximately flat. At some point, the internal forces that kept the star from collapsing fail to counter the gravitational interaction and the star collapses. If nothing stops the collapse, it will reach a point in which an event horizon is formed.
Let us consider the following metric, written in terms of ingoing Eddington-Finkelstein coordinates as
ds² = −(1 − 2M(v)/r) dv² + 2 dv dr + r² dΩ²,
where r is the radial coordinate, v is the ingoing null coordinate, and M(v) = m θ(v − v0). For v0 < v this is nothing but the ingoing Eddington-Finkelstein representation of the Schwarzschild metric, whereas for v < v0 it is just Minkowski spacetime. This metric represents a radial ingoing collapsing shockwave of radiation and is called the Vaidya metric (described schematically in Fig. 1). This metric is a solution to the Einstein equations (see for instance Ref. [15]) that, in spite of its simplicity, describes very well the gravitational collapse scenario that we want to analyse. Refinements of the model to make it more realistic only introduce subleading corrections. In particular this model captures, up to subleading corrections, the more realistic collapse of a matter cloud. Let v_h = v0 − 4m be the coordinate of the last null ray that escapes to the future null infinity I⁺ and hence that will eventually form the event horizon (see Fig. 1). Consider now a state of a scalar quantum field. We need to introduce convenient bases of solutions to the Klein-Gordon equation, determined by their behaviour in the different relevant regions of this collapsing spacetime. For this, we will follow a standard procedure (see e.g. [15]).
We first define the 'in' basis of ingoing positive-frequency modes, associated with the time parameter v at the null past infinity I⁻, which behave as u^in_ω ∝ e^(−iωv). Second, we define another basis on a Cauchy surface in the future. In this case, the asymptotic future I⁺ is not a Cauchy surface by itself, so we need to consider also the future event horizon H⁺. Let us begin with the 'out' modes, defined as being outgoing positive-frequency in terms of the natural time parameter η_out at I⁺, i.e. u^out_ω ∝ e^(−iωη_out), where η_out = v − 2r*_out and r*_out is the radial tortoise coordinate in the Schwarzschild region. At early times, these modes u^out_ω concentrate near v_h at I⁻ and oscillate there with a logarithmically divergent phase as v → v_h, having support only in the region v < v_h, since only the rays of light that depart from v < v_h will reach the asymptotic region I⁺. The rest will fall into the horizon. Finally, we use an analytical continuation argument to define the 'hor' modes at H⁺: these will be modes that behave as u^out_ω in the asymptotic past I⁻, but for v > v_h. In other words, we define them as modes that leave the asymptotic past and fall into the horizon, never reaching the asymptotic future. Near the Cauchy surface I⁻, these modes behave as the analytic continuation of the 'out' modes to the region v > v_h. By expanding the field in terms of the two sets of modes ('in' on the one hand and 'hor-out' on the other) we can relate the two sets of solutions via the corresponding Bogoliubov coefficients. We will provide more details below, but for now let us refer to the extensive literature on this topic (see for instance [15]) and simply note that the bosonic annihilation field operators in the asymptotic past can be expressed in terms of the creation and annihilation operators of 'out' and 'hor' modes; in the notation of [13], this relation, equation (6), mixes frequencies and involves cosh r_ω and sinh r_ω weights with tanh r_ω = e^(−4πmω). The values of ϕ and α_ωω′ will be given below. The 'in' vacuum, defined by a^in_ω |0⟩_in = 0 for all positive frequencies ω, can be readily rewritten in the 'out-hor' basis as a product of two-mode squeezed states,
|0⟩_in = ∏_ω (cosh r_ω)^(−1) Σ_n (tanh r_ω)^n |n_ω⟩_out |n_ω⟩_hor (up to mode-dependent phases),
where |n_ω⟩ denotes a mode with occupation number n and frequency ω. The next task is to write the one-particle state in the past, |1_ω⟩_in, as a linear combination of the 'out-hor' basis modes. If we have a monochromatic excitation in the asymptotic past, equation (6) tells us that it will become a highly non-monochromatic linear combination of 'hor' and 'out' modes. The standard, well-known procedure (which we briefly summarise in what follows, see e.g. [15,7]) is to construct another basis of 'in' modes, positive-frequency in the past, such that its Bogoliubov transformation into 'hor' and 'out' modes is diagonal in frequencies. Let us call those modes u^R_Ω and u^L_Ω in order to keep the notation of Ref. [7]. Note that, for all the reasons discussed above, these modes are intrinsically non-monochromatic in the asymptotic past, i.e. in the basis of modes u^in_ω, and that Ω labels the frequency of such modes u^R_Ω and u^L_Ω with respect to the time in the asymptotic future region 'out'.
As suggested by the fact that |0⟩_in is a two-mode squeezed vacuum of 'out' and 'hor' modes, let us define new positive-norm 'R-L' modes by a Bogoliubov transformation from the 'out-hor' basis that is diagonal in frequencies and has the form of a two-mode squeezing operation. Taking Klein-Gordon inner products with the 'in' modes, we can express u^R_Ω and u^L_Ω in terms of the u^in_ω modes, with coefficients α_ωΩ = (u^in_ω, u^out_Ω), β_ωΩ = (u^in_ω, u^out*_Ω), γ_ωΩ = (u^in_ω, u^hor_Ω) and δ_ωΩ = (u^in_ω, u^hor*_Ω), which are the corresponding Bogoliubov coefficients. Therefore we see that the 'R-L' modes are purely positive-frequency linear combinations of the 'in' modes and that they also form a complete set of solutions of the field equation in the asymptotic past. This relationship between the modes directly translates into a relation between the particle operators associated with them. Obviously, these operators annihilate the 'in' vacuum: a^R_Ω |0⟩_in = a^L_Ω |0⟩_in = 0.
(iii) They translate into a single frequency mode when expressed in the future basis. Properties (i) and (ii) allow us to decompose any physical state as some combination of these modes, which makes them worth studying as an intermediate stage of more general cases. The third feature (diagonal Bogoliubov transformations) greatly simplifies the formalism, enabling us to use all the artillery already deployed in other simpler scenarios also in the case of stellar collapse, providing us with a nice and clear interpretation to the analysis of the entanglement in the asymptotic future.
Repeating a reasoning analogous to that in [7], we can still introduce a more general annihilation operator with these properties, given by a linear combination of the two annihilation operators a^R_Ω and a^L_Ω defined above,
a_Ω = q_R a^R_Ω + q_L a^L_Ω, (15)
where q_L and q_R are real parameters satisfying q_L = (1 − q_R²)^(1/2) and 2^(−1/2) ≤ q_R ≤ 1.
Entanglement behaviour
Let us consider the following maximally entangled bipartite state in the asymptotic past, "prepared" long before the collapse starts,
|Ψ⟩ = (1/√2) (|0⟩_A |0⟩_in + |1⟩_A |1_Ω⟩_in), (16)
where the excited mode for Bob is chosen to be the one generated by the operator (15), namely |1_Ω⟩_in = a_Ω† |0⟩_in, whereas Alice's mode can be chosen arbitrarily (that is why it is not labelled with Ω). This initial state (16) will be observed by two observers, Alice and Bob. While we will consider that Alice has unrestricted access to her partial state, we will assume that Bob does not, because at some point the process of gravitational collapse will generate an event horizon, preventing him from accessing the full state. Although perhaps the most natural scenario would be that in which both subsystems are in the proximity of a stellar collapse and, hence, both observers would undergo similar processes, let us consider for simplicity that only one of the subsystems is going to be affected by the stellar collapse. One can think of Alice's state as prepared such that it is a localized state living far away from the collapsing star. Alternatively we could consider that Alice can measure her subsystem prior to the formation of the horizon so that it cannot hinder her ability to obtain information about her partial state. On the other hand, Bob's knowledge about his subsystem is going to be limited because, between the time when the state was created and the time at which he will be able to measure it, an event horizon appears, preventing him from accessing the full state in the future. This scenario is not devoid of physical interest. As we will show later on, it will allow us to focus on questions regarding quantum correlations between modes in the past and modes falling into the horizon.
In these circumstances, Alice measures in the 'in' basis whereas Bob measures in the 'out' basis, having lost all the information contained in the 'hor' modes that are bound to fall into the forming black hole. Now, describing the effective state to which Alice and Bob have access requires that we trace out the modes that become causally disconnected from Bob due to the formation of the horizon, i.e. the 'hor' modes:
ρ_{A−out} = tr_hor(|Ψ⟩⟨Ψ|). (18)
This state is non-separable and we can compute its negativity [20] as a convenient quantifier of quantum entanglement. In simple words, when we compute the negativity of the density matrix ρ_{A−out} we obtain a measure of the correlations between Alice's fully accessible state and the modes that reach the asymptotic future escaping the collapse. It is then legitimate to ask what would be the quantum entanglement between Alice's state and those modes that will not make it to the asymptotic future because they fall into the incipient horizon, becoming trapped inside the black hole. The state whose separability we would have to analyse would then be
ρ_{A−hor} = tr_out(|Ψ⟩⟨Ψ|). (19)
The procedure to compute the negativity is the following: first we express |Ψ⟩ in the basis of 'hor-out' modes for Bob; then the density matrices ρ_{A−hor} and ρ_{A−out} are obtained, after a simple but lengthy algebra exercise, by tracing out the 'out' and 'hor' modes respectively from |Ψ⟩. In the resulting expressions, T_r = tanh r_Ω and C_r = cosh r_Ω. The expression for ρ_{A−hor} is obtained by exchanging q_R and q_L and 'out' by 'hor' in that for ρ_{A−out}.
The entanglement monotone that we will compute, the negativity [20], is the absolute value of the sum of the negative eigenvalues of the partially transposed density matrix of the quantum state whose degree of distillable entanglement we want to evaluate. To compute it we first take partial transposes of ρ_{A-out} and ρ_{A-hor} (the transpose only with respect to Alice's indices), where, as above, the expression for ρ^{T_A}_{A-hor} is obtained by exchanging q_R and q_L and 'out' by 'hor'. As in the accelerated-observer scenario, the diagonalisation of the infinite-dimensional partially transposed density matrices ρ^{T_A}_{A-out} and ρ^{T_A}_{A-hor} can be carried out only numerically since, with the exception of the case q_R = 1, no block-diagonalisation can be performed. Figure 2 shows the result of the calculations. The negativity 'A-out' as a function of the mass of the forming black hole and the frequency of the probed 'out' mode is shown as solid blue lines. The negativity 'A-hor' is plotted in the same figure as dashed red lines.
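For a finite truncation of the Fock space, this numerical diagonalisation is straightforward. The sketch below is a generic illustration (not the authors' code) of how negativity is obtained from a partial transpose; the two-qubit Bell state used as a test case recovers the textbook value of 1/2.

```python
import numpy as np

def partial_transpose(rho, dims, subsystem=0):
    """Partial transpose of a bipartite density matrix.

    rho  : (dA*dB, dA*dB) array
    dims : (dA, dB) dimensions of the two subsystems
    subsystem : which factor to transpose (0 -> Alice, 1 -> Bob)
    """
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)      # indices: a, b, a', b'
    if subsystem == 0:
        r = r.transpose(2, 1, 0, 3)      # swap a <-> a'
    else:
        r = r.transpose(0, 3, 2, 1)      # swap b <-> b'
    return r.reshape(dA * dB, dA * dB)

def negativity(rho, dims):
    """Absolute value of the sum of the negative eigenvalues of the partial transpose."""
    eigenvalues = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return float(np.abs(eigenvalues[eigenvalues < 0].sum()))

# Sanity check: a two-qubit maximally entangled (Bell) state has negativity 1/2.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
print(negativity(rho, (2, 2)))   # ~0.5
```

In the collapse setting the 'out' and 'hor' factors are infinite-dimensional, so in practice the Fock space must be truncated at an occupation number large enough for the computed eigenvalues to converge.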
We see that by varying the parameter q_R we are basically controlling whether Alice's state has more quantum correlations with the modes that will reach the asymptotic future or with the modes that will fall into the horizon. This is best seen in the large black hole mass limit, where there is an exact trade-off between the correlations that Alice's mode has with infalling and outgoing modes.
We can also see that for very small black holes, all entanglement is completely degraded: when m → 0 no entanglement survives either with the modes that fall into the horizon or with the modes that will reach the future. As the mass of the black hole (or the frequency of the probed mode) is increased, the correlations quickly become insensitive to the presence of the horizon, as one would expect from the quantum effects induced by gravity in the presence of a black hole: they become stronger as the mass of the black hole approaches zero. Pictorially, this can be understood as a limit in which the Hawking-like radiation spoils all correlations contained in the state.
As is well known [21], if Bob can only measure modes in the asymptotic future, he will see the vacuum state |0⟩_in as a thermal state. Indeed, if we compute how the 'in' vacuum is seen by observers in the asymptotic future we obtain ρ^{|0⟩_in}_{out} = tr_hor(|0⟩_in⟨0|) = ⊗_ω ρ_{out,ω}, where ρ_{out,ω} = cosh^{-2} r_ω Σ_n tanh^{2n} r_ω |n_ω⟩⟨n_ω|. This is a thermal radiation state whose temperature is T_H = (8πm)^{-1}. So if that happens with the vacuum state, it is a reasonable hand-waving argument that, if instead of the vacuum we consider a pre-existing non-trivial entangled state such as that of equation (16), the thermal-like noise could impair the observers' ability to detect quantum correlations in the system. However, one has to be very careful about the extent to which this behaviour can be naively attributed to Hawking thermal noise. To begin with, we are not considering the vacuum state, but rather an entangled state of field excitations. The process of changing basis and tracing out the modes that fall into the event horizon is not as trivial as for the vacuum case. In fact, it has been shown in the Rindler scenario [22] that, beyond the so-called single mode approximation and for some choices of the state, the accessible entanglement for an accelerated observer may behave in a non-monotonic way, as opposed to the first results reported in [2,3,7]. This is due to inaccessible correlations in the initial states becoming accessible to the accelerated observer when his proper Fock basis changes as acceleration varies. While this phenomenon was highlighted in [22] for the Rindler case, for an analogous choice of the modes (15) a similar behaviour would be expected in the dynamical scenario analysed here.
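As a quick numerical cross-check of the thermal interpretation (a generic sketch, not taken from the paper, and assuming the standard relation tanh r_ω = e^{-ω/(2 T_H)} between the squeezing parameter and the temperature quoted above), the mean occupation number of each traced-out mode reproduces the Bose-Einstein value at T_H = (8πm)^{-1}:

```python
import numpy as np

def mean_occupation_from_squeezing(omega, m):
    """sinh^2(r) for tanh(r) = exp(-omega/(2*T_H)), with T_H = 1/(8*pi*m);
    geometric units with hbar = c = G = k_B = 1 (an assumption of this sketch)."""
    T_H = 1.0 / (8.0 * np.pi * m)
    r = np.arctanh(np.exp(-omega / (2.0 * T_H)))
    return np.sinh(r) ** 2

def planck_occupation(omega, m):
    """Bose-Einstein mean occupation number at the Hawking temperature."""
    T_H = 1.0 / (8.0 * np.pi * m)
    return 1.0 / np.expm1(omega / T_H)

omega, m = 1.0, 0.1
print(mean_occupation_from_squeezing(omega, m))  # ~0.088
print(planck_occupation(omega, m))               # ~0.088, identical analytically
```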
Let us conclude with a note of warning: the modes analysed here as tools to study entanglement behaviour in gravitational collapse have very nice properties, but due to their highly non-monochromatic nature and lack of localisation they are modes that would arguably be difficult to prepare and measure in a hypothetical experiment. This said, this tool allows us to simplify the calculations so that we can extract fundamental results in settings where other techniques have proven impractical, much in the same fashion as the introduction of Unruh modes [7] has allowed progress in our understanding of quantum correlations from non-inertial perspectives. One has to keep in mind that the modes used here share some fundamental properties with the standard monochromatic 'in' modes, and that they form a complete basis of solutions to the field equations. This means that any physically conceivable state can be expressed as a superposition of modes such as the ones studied here.
Conclusion and Future research
We have analysed the behaviour of quantum entanglement present in an initial field state when a gravitational collapse occurs in the background. The quantum correlations in that state are perturbed by the formation of the event horizon, which mixes the quantum state originally prepared and therefore degrades the original correlations. We have done so by adapting the tools developed in the analysis of entanglement in the context of the Unruh-Hawking effect [2,11] to go beyond the so-called 'single mode approximation' [7].
We have shown that, similarly to the infinite acceleration limit in accelerated scenarios [2,6,7] and to the stationary eternal black hole scenario [11], entanglement is completely degraded when we consider singular black holes (m → 0), for which the Hawking temperature diverges.
A trivial extension of the results obtained here is to consider that the appearance of an event horizon affects the ability of both Alice and Bob to access the full state. In these cases, and for maximally entangled states of a scalar field, entanglement will arguably be degraded more quickly than in the case where only Bob is affected by the collapse, much as happens in the acceleration scenario [23]. Extending this result to the case where both observers measure after the horizon is created is somewhat straightforward with the tools developed here, being mainly a matter of a more complicated calculation, and the results are arguably going to be qualitatively the same.
The next natural step is to introduce localised measurements that will endow the entanglement degradation phenomena reported here with operational meaning. Using, for example, localised projective measurements as in [19], more physical scenarios can be analysed.
Duplicates, redundancies and inconsistencies in the primary nucleotide databases: a descriptive study
GenBank, the EMBL European Nucleotide Archive and the DNA DataBank of Japan, known collectively as the International Nucleotide Sequence Database Collaboration or INSDC, are the three most significant nucleotide sequence databases. Their records are derived from laboratory work undertaken by different individuals, by different teams, with a range of technologies and assumptions and over a period of decades. As a consequence, they contain a great many duplicates, redundancies and inconsistencies, but neither the prevalence nor the characteristics of various types of duplicates have been rigorously assessed. Existing duplicate detection methods in bioinformatics only address specific duplicate types, with inconsistent assumptions; and the impact of duplicates in bioinformatics databases has not been carefully assessed, making it difficult to judge the value of such methods. Our goal is to assess the scale, kinds and impact of duplicates in bioinformatics databases, through a retrospective analysis of merged groups in INSDC databases. Our outcomes are threefold: (1) We analyse a benchmark dataset consisting of duplicates manually identified in INSDC—a dataset of 67 888 merged groups with 111 823 duplicate pairs across 21 organisms from INSDC databases – in terms of the prevalence, types and impacts of duplicates. (2) We categorize duplicates at both sequence and annotation level, with supporting quantitative statistics, showing that different organisms have different prevalence of distinct kinds of duplicate. (3) We show that the presence of duplicates has practical impact via a simple case study on duplicates, in terms of GC content and melting temperature. We demonstrate that duplicates not only introduce redundancy, but can lead to inconsistent results for certain tasks. Our findings lead to a better understanding of the problem of duplication in biological databases. Database URL: the merged records are available at https://cloudstor.aarnet.edu.au/plus/index.php/s/Xef2fvsebBEAv9w
Introduction
Many kinds of database contain multiple instances of records. These instances may be identical, or may be similar but with inconsistencies; in traditional database contexts, this means that the same entity may be described in conflicting ways. In this paper, as elsewhere in the literature, we refer to such repetitions-whether redundant or inconsistent-as duplicates. The presence of any of these kinds of duplicate has the potential to confound analysis that aggregates or reasons from the data. Thus, it is valuable to understand the extent and kind of duplication, and to have methods for managing it.
We regard two records as duplicates if, in the context of a particular task, the presence of one means that the other is not required. Duplicates are an ongoing data quality problem reported in diverse domains, including business (1), health care (2) and molecular biology (3). The five most severe data quality issues in general domains have been identified as redundancy, inconsistency, inaccuracy, incompleteness and untimeliness (4). We must consider whether these issues also occur in nucleotide sequence databases.
GenBank, the EMBL European Nucleotide Archive (ENA) and the DNA DataBank of Japan (DDBJ), the three most significant nucleotide sequence databases, together form the International Nucleotide Sequence Database Collaboration (INSDC) (5). The problem of duplication in the bioinformatics domain is in some respects more acute than in general databases, as the underlying entities being modelled are imperfectly defined, and scientific understanding of them is changing over time. As early as 1996, data quality problems in sequence databases were observed, and concerns were raised that these errors may affect the interpretation (6). However, data quality problems persist, and current strategies for cleansing do not scale (7). Technological advances have led to rapid generation of genomic data. Data is exchanged between repositories that have different standards for inclusion. Ontologies are changing over time, as are data generation and validation methodologies. Data from different individual organisms, with genomic variations, may be conflated, while some data that is apparently duplicated-such as identical sequences from different individuals, or even different species-may in fact not be redundant at all. The same gene may be stored multiple times with flanking regions of different length, or, more perniciously, with different annotations. In the absence of a thorough study of the prevalence and kind of such issues, it is not known what impact they might have in practical biological investigations.
A range of duplicate detection methods for biological databases have been proposed (8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18). However, this existing work has defined duplicates in inconsistent ways, usually in the context of a specific method for duplicate detection. For example, some define duplicates solely on the basis of gene sequence identity, while others also consider metadata. These studies addressed only some of the kinds of duplication, and neither the prevalence nor the characteristics of different kinds of duplicate were measured.
A further, fundamental issue is that duplication (redundancy or inconsistency) cannot be defined purely in terms of the content of a database. A pair of records might only be regarded as duplicates in the context of a particular application. For example, two records that report the coding sequence for a protein may be redundant for tasks that concern RNA expression, but not redundant for tasks that seek to identify their (different) locations in the genome. Methods that seek to de-duplicate databases based on specific assumptions about how the data is to be used will have unquantified, potentially deleterious, impact on other uses of the same data.
Thus definitions of duplicates, redundancy and inconsistency depend on context. In standard databases, a duplicate occurs when a unique entity is represented multiple times. In bioinformatics databases, duplicates have different representations, and the definition of 'entity' may be unclear. Also, duplicates arise in a variety of ways. The same data can be submitted by different research groups to a database multiple times, or to different databases without cross-reference. An updated version of a record can be entered while the old version still remains. Or there may be records representing the same entity, but with different sequences or different annotations.
Duplication can affect use of INSDC databases in a variety of ways. A simple example is that redundancy (such as records with near-identical sequences and consistent annotations) creates inefficiency, both in automatic processes such as search, and in manual assessment of the results of search.
More significantly, sequences or annotations that are inconsistent can affect analyses such as quantification of the correlation between coding and non-coding sequences (19), or finding of repeat sequence markers (20). Inconsistencies in functional annotations (21) have the potential to be confusing; despite this, an assessment of 37 North American branchiobdellidans records concluded that nearly half are inconsistent with the latest taxonomy (22). Function assignments may rely on the assumption that similar sequences have similar function (23), but repeated sequences may bias the output sequences from the database searches (24).
Why care about duplicates?
Research in other disciplines has emphasized the importance of studying duplicates. Here we assemble comments on the impacts of duplicates in biological databases, derived from public or published material and curator interviews:
1. Duplicates lead to redundancies: 'Automated analyses contain a significant amount of redundant data and therefore violate the principles of normalization... In a typical Illumina Genomestudio results file 63% of the output file is composed of unnecessarily redundant data' (25). 'High redundancy led to an increase in the size of UniProtKB (TrEMBL), and thus to the amount of data to be processed internally and by our users, but also to repetitive results in BLAST searches... 46.9 million (redundant) entries were removed (in 2015)' (http://www.uniprot.org/help/proteome_redundancy). We explain the TrEMBL redundancy issue in detail below.
2. Duplicates lead to inconsistencies: 'Duplicated samples might provide a false sense of confidence in a result, which is in fact only supported by one experimental data point' (26); 'two genes are present in the duplicated syntenic regions, but not listed as duplicates (true duplicates but are not labelled). This might be due to local sequence rearrangements that can influence the results of global synteny analysis' (25).
3. Duplicates waste curation effort and impair data quality: 'for UniProtKB/SwissProt, as everything is checked manually, duplication has impacts in terms of curation time. For UniProtKB/TrEMBL, as it (duplication) is not manually curated, it will impact quality of the dataset'. (Quoted from Sylvain Poux, leader of manual curation and quality control in SwissProt.)
4. Duplicates have propagated impacts even after being detected or removed: 'Highlighting and resolving missing, duplicate or inconsistent fields... 20% of (these) errors require additional rebuild time and effort from both developer and biologist' (27); 'The removal of bacterial redundancy in UniProtKB (and normal flux in protein) would have meant that nearly all (>90%) of Pfam (a highly curated protein family database using UniProtKB data) seed alignments would have needed manual verification (and potential modification)... This imposes a significant manual biocuration burden' (28).
The presence of duplicates is not always problematic, however. For instance, the purpose of the INSDC databases is mainly to archive nucleotide records. Arguably, duplicates are not a significant concern from an archival perspective; indeed the presence of a duplicate may indicate that a result has been reproduced and so can be viewed with greater confidence. That is, duplicates can be evidence for correctness. Recognition of such duplicates supports record linkage and helps researchers to verify their sequencing and annotation processes. However, there is an implicit assumption that those duplicates have been labelled accurately. Without labelling, those duplicates may confuse users, whether or not the records represent the same entities.
To summarize, the question of duplication is context-dependent, and its significance varies across contexts: different biological databases, different biocuration processes and different biological tasks. However, it is clear that we should still be concerned about duplicates in INSDC. Over 95% of UniProtKB data are from INSDC and parts of UniProtKB are heavily curated; hence duplicates in INSDC delay curation and waste curation effort. Furthermore, the archival nature of INSDC does not limit the potential uses of the data; other uses may be impacted by duplicates. Thus, it remains important to understand the nature of duplication in INSDC.
In this paper, we analyse the scale, kind and impacts of duplicates in nucleotide databases, to seek better understanding of the problem of duplication. We focus on INSDC records that have been reported as duplicates by manual processes and then merged. As advised to us by database staff, submitters spot duplicates and are the major means of quality checking in these databases; sequencing projects may also merge records once the genome construction is complete; other curated databases using INSDC records such as RefSeq may also merge records. Revision histories of records track the merges of duplicates. Based on an investigation of the revision history, we collected and analysed 67 888 merged groups containing 111 823 duplicate pairs, across 21 major organisms. This is one of three benchmarks of duplicates that we have constructed (53). While it is the smallest and most narrowly defined of the three benchmarks, it allows us to investigate the nature of duplication in INSDC as it arises during generation and submission of biological sequences, and facilitates understanding the value of later curation.
Our analysis demonstrates that various duplicate types are present, and that their prevalence varies between organisms. We also consider how different duplicate types may impact biological studies. We provide a case study, an assessment of sequence GC content and of melting point, to demonstrate the potential impact of various kinds of duplicates. We show that the presence of duplicates can alter the results, and thus demonstrate the need for accurate recognition and management of duplicates in genomic databases.
Background
While the task of detecting duplicate records in biological databases has been explored, previous studies have made a range of inconsistent assumptions about duplicates. Here, we review and compare these prior studies.
Definitions of duplication
In the introduction, we described repeated, redundant and inconsistent records as duplicates. We use a broad definition of duplicates because no precise technical definition will be valid in all contexts. 'Duplicate' is often used to mean that two (or more) records refer to the same entity, but this leads to two further definitional problems: determining what 'entities' are and what 'same' means. Considering a simple example, if two records have the same nucleotide sequences, are they duplicates? Some people may argue that they are, because they have exactly the same sequences, but others may disagree because they could come from different organisms.
These kinds of variation in perspective have led to a great deal of inconsistency. Table 1 shows a list of biological databases from 2009 to 2015 and their corresponding definitions of duplicates. We extracted the definition of duplicates, if clearly provided; alternatively, we interpreted the definition based on the examples of duplicates or other related descriptions from the database documentation. It can be observed that the definition varies dramatically between databases, even those in the same domain. We therefore use a broader definition of duplicates rather than a single explicit or narrow one. In this work, we consider records that have been merged during a manual or semi-automatic review as duplicates. We explain the characteristics of the merged record dataset in detail later.
A pragmatic definition for duplication is that a pair of records A and B are duplicates if the presence of A means that B is not required, that is, B is redundant in the context of a specific task or is superseded by A. This is, after all, the basis of much record merging, and encompasses many of the forms of duplicate we have observed in the literature. Such a definition provides a basis for exploring alternative technical definitions of what constitutes a duplicate and provides a conceptual basis for exploring duplicate detection mechanisms. We recognize that (counterintuitively) this definition is asymmetric, but it reflects the in-practice treatment of duplicates in the INSDC databases. We also recognize that the definition is imperfect, but the aim of our work is to establish a shared understanding of the problem, and it is our view that a definition of this kind provides a valuable first step.
Duplicates based on a simple similarity threshold (redundancies)
In some previous work, a single sequence similarity threshold is used to find duplicates (8,9,11,14,16,18). In this work, duplicates are typically defined as records with sequence similarity over a certain threshold, and other factors are not considered. These kinds of duplicates are often referred to as approximate duplicates or near duplicates (37), and are interchangeable with redundancies. For instance, one study located all records with over 90% mutual sequence identity (11). (A definition that allows efficient implementation, but is clearly poor from the point of view of the meaning of the data; an argument that 90% similar sequences are duplicated, but that 89% similar sequences are not, does not reflect biological reality.) A sequence identity threshold also applies in the CD-HIT method for sequence clustering, where it is assumed that duplicates have over 90% sequence identity (38). The sequence-based approach also forms the basis of the non-redundant database used for BLAST (39).

Table 1. Biological databases (2009-2015) and their corresponding definitions of duplicates, by domain:
- repeated interactions between protein to protein, protein to DNA, gene to gene; same interactions but in different organism-specific files
- (30) gene annotation: (near) identical genes; fragments; incomplete gene duplication; and different stages of gene duplication
- (31) gene annotation: near or identical coding genes
- (32) gene annotation: same measurements on different tissues for gene expression
- (33) genome characterization: records with same meta data; same records with inconsistent meta data; same or inconsistent record submissions
- (34) genome characterization: create a new record with the configuration of a selected record
- (35) ligand for drug discovery: records with multiple synonyms; for example, same entries for TR4 (Testicular Receptor 4) but some used a synonym TAK1 (a shared name) rather than TR4
- (36) peptidase cleavages: cleavages being mapped into wrong residues or sequences
Note: databases in the same domain, for example gene annotation, may be specialized for different perspectives, such as annotations on genes in different organisms or different functions, but they arguably belong to the same broad domain.
Methods based on the assumption that duplication is equivalent to high sequence similarity usually share two characteristics. First, efficiency is the highest priority; the goal is to handle large datasets. While some of these methods also consider sensitivity (40), efficiency is still the major concern. Second, in order to achieve efficiency, many methods apply heuristics to eliminate unnecessary pairwise comparisons. For example, CD-HIT estimates the sequence identity by word (short substring) counting and only applies sequence alignment if the pair is expected to have high identity.
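As an illustration of this kind of heuristic (a generic sketch, not CD-HIT's actual implementation), shared short words can be counted to decide cheaply whether a pair of sequences is even worth aligning:

```python
def kmer_set(seq, k=8):
    """All k-mers (short words) occurring in a nucleotide sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_word_fraction(seq_a, seq_b, k=8):
    """Fraction of the smaller sequence's k-mers that also occur in the other
    sequence; a cheap screen that correlates with high sequence identity."""
    a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
    if not a or not b:
        return 0.0
    smaller, larger = (a, b) if len(a) <= len(b) else (b, a)
    return len(smaller & larger) / len(smaller)

def worth_aligning(seq_a, seq_b, k=8, threshold=0.5):
    """Send a pair to (expensive) alignment only if the word screen suggests
    the pair could plausibly exceed a ~90% identity threshold."""
    return shared_word_fraction(seq_a, seq_b, k) >= threshold
```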
However, duplication is not simply redundancy. Records with similar sequences are not necessarily duplicates and vice versa. As we will show later, some of the duplicates we study are records with close to exactly identical sequences, but other types also exist. Thus, use of a simple similarity threshold may mistakenly merge distinct records with similar sequences (false positives) and likewise may fail to merge duplicates with different sequences (false negatives). Both are problematic in specific studies (41,42).
Duplicates based on expert labelling
A simple threshold can find only one kind of duplicate, while others are ignored. Previous work on duplicate detection has acknowledged that expert curation is the best strategy for determining duplicates, due to the rich experience, human intuition and the possibility of checking external resources that experts bring (43)(44)(45). Methods using human-generated labels aim to detect duplicates precisely, either to build models that mimic expert curation behaviour (44), or to use expert-curated datasets to quantify method performance (46). They can find more diverse types than a simple threshold, but are still not able to capture the diversity of duplication in biological databases. The prevalence and characteristics of each duplicate type are still not clear. This lack of identified scope introduces restrictions that, as we will demonstrate, impair duplicate detection.
Korning et al. (13) identified two types of duplicates: the same gene submitted multiple times (near-identical sequences), and different genes belonging to the same family.
In the latter case, the authors argue that, since such genes are highly related, one of them is sufficient to represent the others. However, this assumption that only one version is required is task-dependent; as noted in the introduction, for other tasks the existence of multiple versions is significant. To the best of our knowledge, this is the first published work that identified different kinds of duplicates in bioinformatics databases, but the impact, prevalence and characteristics of the types of duplicates they identify are not discussed.
Koh et al. (12) separated the fields of each gene record, such as species and sequences, and measured the similarities among these fields. They then applied association rule mining to pairs of duplicates using the values of these fields as features. In this way, they characterized duplicates in terms of specific attributes and their combination. The classes of duplicates considered were broader than Korning et al.'s, but are primarily records containing the same sequence, specifically: (1) the same sequence submitted to different databases; (2) the same sequence submitted to the same database multiple times; (3) the same sequence with different annotations; and (4) partial records. This means that the (near-)identity of the sequence dominates the mined rules. Indeed, the top ten rules generated from Koh et al.'s analysis share the feature that the sequences have exact (100%) sequence identity.
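A minimal sketch of this style of analysis is shown below; the boolean field-similarity features and the support threshold are hypothetical illustrations rather than Koh et al.'s actual features, and only itemset support (the core quantity behind association rules) is computed.

```python
import pandas as pd
from itertools import combinations

# Each row encodes one labelled duplicate pair as boolean field-similarity
# features (hypothetical names; real features would be derived from fields
# such as species, submitter, description and sequence identity).
pairs = pd.DataFrame([
    {"same_species": True,  "same_submitter": True,  "seq_identity_100": True,  "similar_description": True},
    {"same_species": True,  "same_submitter": False, "seq_identity_100": True,  "similar_description": True},
    {"same_species": True,  "same_submitter": True,  "seq_identity_100": True,  "similar_description": False},
    {"same_species": False, "same_submitter": False, "seq_identity_100": True,  "similar_description": True},
])

# Support of every single feature and feature pair among the duplicate pairs;
# frequent combinations correspond to the kind of rules mined in (12).
features = list(pairs.columns)
for size in (1, 2):
    for combo in combinations(features, size):
        support = pairs[list(combo)].all(axis=1).mean()
        if support >= 0.5:
            print(combo, round(float(support), 2))
```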
This classification is also used in other work (10,15,17), which therefore has the same limitation. This work again does not consider the prevalence and characteristics of the various duplicate types. While Koh has a more detailed classification in her thesis (47), the problem of characterization of duplicates remains.
In this previous work, the potential impact on bioinformatics analysis caused by duplicates in gene databases is not quantified. Many refer to the work of Muller et al. (7) on data quality, but Muller et al. do not encourage the study of duplicates; indeed, they claim that duplicates do not interfere with interpretation, and even suggest that duplicates may in fact have a positive impact, by 'providing evidence of correctness'. However, the paper does not provide definitions or examples of duplicates, nor does it provide case studies to justify these claims.
Duplication persists due to its complexity
De-duplication is a key early step in curated databases. Amongst biological databases, the UniProt databases are well known to have high quality data and detailed curation processes (48). UniProt uses four de-duplication processes, depending on the requirements of the specific databases: 'one record for 100% identical full-length sequences in one species'; 'one record per gene in one species'; 'one record for 100% identical sequences over the entire length, regardless of the species'; and 'one record for 100% identical sequences, including fragments, regardless of the species', for UniProtKB/TrEMBL, UniProtKB/SwissProt, UniParc and UniRef100, respectively (http://www.uniprot.org/help/redundancy). We note the emphasis on sequence identity in these requirements.
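For the simplest of these policies (one record per 100% identical full-length sequence within a species), de-duplication reduces to grouping on an exact key. A minimal sketch, using illustrative record tuples rather than any actual UniProt data structure:

```python
from collections import defaultdict

def collapse_identical(records):
    """Keep one representative per (species, exact sequence) pair.

    `records` is an iterable of (accession, species, sequence) tuples;
    returns {(species, sequence): [accessions]} so that each group can be
    represented by a single exemplar.
    """
    groups = defaultdict(list)
    for accession, species, sequence in records:
        groups[(species, sequence.upper())].append(accession)
    return groups

records = [
    ("A1", "Mycobacterium tuberculosis", "ATGGCC"),
    ("A2", "Mycobacterium tuberculosis", "ATGGCC"),   # identical within species -> merged
    ("A3", "Escherichia coli",           "ATGGCC"),   # same sequence, different species -> kept
]
for (species, _), accessions in collapse_identical(records).items():
    print(species, accessions)
```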
Each database has its specific design and purpose, so the assumptions made about duplication differ. One community may consider a given pair to be a duplicate whereas other communities may not. The definition of duplication varies between biologists, database staff and computer scientists. In different curated biological databases, deduplication is handled in different ways. It is far more complex than a simple similarity threshold; we want to analyse duplicates that are labelled based on human judgements rather than using a single threshold. Therefore, we created three benchmarks of nucleotide duplicates from different perspectives (53). In this work, we focus on analysing one of these benchmarks, containing records directly merged in INSDC. Merging of records is a way to address data duplication. Examination of merged records facilitates understanding of what constitutes duplication.
Recently, in TrEMBL, UniProt staff observed that it had a high prevalence of redundancy. A typical example is that 1692 strains of Mycobacterium tuberculosis have been represented in 5.97 million entries, because strains of this same species have been sequenced and submitted multiple times. UniProt staff have expressed concern that such high redundancy will lead to repetitive results in BLAST searches. Hence, they used a mix of manual and automatic approaches to de-duplicate bacterial proteome records, and removed 46.9 million entries in April 2015 (http://www.uniprot.org/help/proteome_redundancy). A 'duplicate' proteome is selected by identifying: (a) two proteomes under the same taxonomic species group, (b) having over 90% identity and (c) selecting the proteome of the pair with the highest number of similar proteomes for removal; specifically, all protein records in TrEMBL belonging to that proteome will be removed (http://insideuniprot.blogspot.com.au/2015/05/uniprot-knowledgebase-just-got-smaller.html). If proteomes A and B satisfy criteria (a) and (b), and proteome A has 5 other proteomes with over 90% identity whereas proteome B has only one, A will be removed rather than B. This notion of a duplicate differs from those above, emphasizing the context dependency of the definition of a 'duplicate'. This de-duplication strategy is incomplete, as it removes only one kind of duplicate and is limited in application to full proteome sequences; the accuracy and sensitivity of the strategy are unknown. Nevertheless, removing one duplicate type already significantly reduces the size of TrEMBL. This not only benefits database search, but also affects studies or other databases using TrEMBL records.
This de-duplication is considered to be one of the two significant changes in UniProtKB database in 2015 (the other change being the establishment of a comprehensive reference proteome set) (28). It clearly illustrates that duplication in biological databases is not a fully solved problem and that de-duplication is necessary.
Overall, we can see that foundational work on the problem of duplication in biological sequence databases has not previously been undertaken. There is no prior thorough analysis of the presence, kind and impact of duplicates in these databases.
Data and methods
Exploration of duplication and its impacts requires data. We have collected and analysed duplicates from INSDC databases to create a benchmark set, as we now discuss.
Collection of duplicates
Some of the duplicates in INSDC databases have been found and then merged into one representative record. We call this record the exemplar, that is, the current record retained as a proxy for a set of records. Staff working at EMBL ENA advised us (by personal communication) that a merge may be initiated by the original record submitter, by database staff or occasionally in other ways. We further explain the characteristics of the merged dataset below, but note that records are merged for different reasons, showing that diverse causes can lead to duplication. The merged records are documented in the revision history. For instance, GenBank record AC011662.1 is the complete sequence of both BACR01G10 and BACR05I08 clones for chromosome 2 in Drosophila melanogaster. Its revision history (http://www.ncbi.nlm.nih.gov/nuccore/6017069?report=girevhist) shows that it has replaced two records, AC007180.20 and AC006941.18, because they are 'SEQUENCING IN PROGRESS' records with 57 and 21 unordered pieces for the BACR01G10 and BACR05I08 clones, respectively. As explained in the supplementary materials, the groups of records can readily be fetched using NCBI tools.
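For example, individual records can be retrieved programmatically with Biopython's Entrez utilities, as in the sketch below; the authoritative merge information is the web revision history linked above, and where a record carries replacement notes they may appear in its comment annotation, though the exact location varies between records.

```python
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"   # NCBI requires a contact address (placeholder)

def fetch_genbank(accession):
    """Download a single nucleotide record in GenBank flat-file format."""
    handle = Entrez.efetch(db="nuccore", id=accession, rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    return record

record = fetch_genbank("AC011662.1")
print(record.id, len(record.seq))
# Replacement notes, where present, may appear in the record's comment annotation.
print(record.annotations.get("comment", "")[:200])
```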
For our analysis, we collected 67 888 groups (during 15-27 July 2015), which contained 111 823 duplicates (a given group can contain more than one record merge) across the 21 popular organisms used in molecular research listed on the NCBI Taxonomy web page (http://www.ncbi.nlm.nih.gov/Taxonomy/taxonomyhome.html/). The data collection is summarized in Supplementary Table S1, and the details of the collection procedure underlying the data are elaborated in the Supplementary file Details of the record collection procedure. As an example, the Xenopus laevis organism has 35 544 directly related records. Of these, 1690 have merged accession IDs; 1620 merged groups for 1660 duplicate pairs can be identified in the revision history.
Characteristics of the duplicate collection
As explained in the 'Background' section, we use a broad definition of duplicates. This data collection reflects the broad definition, and in our view is representative of an aspect of duplication: these are records that are regarded as similar or related enough to merit removal, that is, are redundant. The records were merged for different reasons, including:
• Changes to data submission policies. Before 2003, the sequence submission length limit was 350 kb. After the limit was lifted, the shorter sequence submissions were merged into a single comprehensive sequence record.
• Updates of sequencing projects. Research groups may deposit current draft records; later records will merge the earlier ones. Also, records having overlapping clones are merged when the construction of a genome is close to complete (49).
• Merges from other data sources. For example, RefSeq uses INSDC records as a main source for genome assembly (50). The assembly is made according to different organism models and updated periodically, and the records may be merged or split during each update (51). The predicted transcript records we discuss later are from RefSeq (still searchable via INSDC but with a RefSeq label).
• Merges by record submitters or database staff, which occur when they notice multiple submissions of the same record.
While the records were merged due to different reasons, they can all be considered duplicates. The various reasons for merging records represent the diversity. If those records above had not been merged, they would cause data redundancy and inconsistency.
These merged records are illustrations of the problem of duplicates rather than current instances to be cleaned. Once the records are merged, they are no longer active or directly available to database users. However, the obsolete records are still of value. For example, even though over 45 million duplicate records were removed from UniProt, the key database staff who were involved in this activity (Ramona Britto and Benoit Bely) are still interested in investigating their characteristics. They would like to understand the similarity of duplicates for more rapid and accurate duplicate identification in future, and to understand their impacts, such as how their removal affects database search.
From the perspective of a submitter, those records removed from UniProtKB may not be duplicates, since they may represent different entities, have different annotations, and serve different applications. However, from a database perspective, they challenge database storage, searches and curation (48). 'Most of the growth in sequences is due to the increased submission of complete genomes to the nucleotide sequence databases' (48). This also indicates that records in one data source may not be considered as duplicates, but do impact other data sources.
To the best of our knowledge, our collection is the largest set of duplicate records merged in INSDC considered to date. Note that we have collected even larger datasets based on other strategies, including expert and automatic curation (52). We focus on this collection here, to analyse how submitters understand duplicates as one perspective. This duplicate dataset is based on duplicates identified by those closest to the data itself, the original data submitters, and is therefore of high quality.
We acknowledge that the data set is by its nature incomplete; the number of duplicates that we have collected is likely to be a vast undercounting of the exact or real prevalence of duplicates in the INSDC databases. There are various reasons for this that we detail here.
First, as mentioned above, both database staff and submitters can request merges. However, submitters can only modify or update records that they own. Other parties who want to update records that they did not themselves submit must get permission from at least one original submitter (http://www.ncbi.nlm.nih.gov/books/NBK53704/). In EMBL ENA, it is suggested to contact the original submitter first, but there is an additional process for reporting errors to the database staff (http://www.ebi.ac.uk/ena/submit/sequence-submission#how_to_update). Due to the effort required for these procedures, the probability that there are duplicates that have not been merged or labelled is very high.
Additionally, as the documentation shows, submitter-based updates or corrections are the main quality control mechanisms in these databases. Hence, the full collections of duplicates listed in Supplementary Table S1 and presented in this work are limited to those identified by (some) submitters. Our other duplicate benchmarks, derived from mapping INSDC to Swiss-Prot and TrEMBL, contain many more duplicates (53). This implies that many more potential duplicates remain in INSDC.
The impact of curation on marking of duplicates can be observed in some organisms. The total number of records in Bos taurus is about 14% and 1.9% of the number of records in Mus musculus and Homo sapiens, respectively, yet Bos taurus has a disproportionately high number of duplicates in the benchmark: >20 000 duplicate pairs, which is close (in absolute terms) to the number of duplicates identified in the other two species. Another example is Schizosaccharomyces pombe, which only has around 4000 records but a relatively large number (545) of duplicate pairs have been found.
An organism may have many more duplicates if its lower taxonomies are considered. The records counted in the table are directly associated to the listed organism; we did not include records belonging to taxonomy below the species level in this study. An example of the impact of this is record AE005174.2, which replaced 500 records in 2004 (http://www.ncbi.nlm.nih.gov/nuccore/56384585). This record belongs to Escherichia coli O157:H7 strain EDL933, which is not directly associated to Escherichia coli and therefore not counted here. The collection statistics also demonstrate that 13 organisms contain at least some merged records for which the original records have different submitters. This is particularly evident in Caenorhabditis elegans and Schizosaccharomyces pombe (where 92.4 and 81.8%, respectively, of duplicate records are from different submitters). A possible explanation is that there are requests by different members from the same consortium. While in most cases the same submitters (or consortiums) can merge the records, the merges cumulatively involve many submitters or different consortiums.
This benchmark is the only resource currently available for duplicates directly merged in INSDC. Staff have also advised that there is currently no automatic process for collecting such duplicates.
Categorization of duplicates
Observing the duplicates in the collection, we find that some of them share the same sequences, whereas others have sequences of varied lengths. Some have been annotated by submitters with notes such as 'WORKING DRAFT'. We therefore categorized records at both the sequence level and the annotation level. At the sequence level, we identified five categories: Exact sequences, Similar sequences, Exact fragments, Similar fragments and Low-identity sequences. At the annotation level, we identified three categories: Working draft, Sequencing-in-progress and Predicted. We do not restrict a duplicate instance to be in only one category.
This categorization represents diverse types of duplicates in nucleotide databases, and each distinct kind has different characteristics. As discussed previously, there is no existing categorization of duplicates with supporting measures or quantities in prior work. Hence, we adopt this categorization and quantify the prevalence and characteristics of each kind, as a starting point for understanding the nature of duplicates in INSDC databases more deeply.
The detailed criteria and description of each category are as follows. For sequence level, we measured local sequence identity using BLAST (9). This measures whether two sequences share similar subsequences. We also calculated the local alignment proportion (the number of identical bases in BLAST divided by the length of the longer sequence of the pair) to estimate the possible coverage of the pair globally without performing a complete (expensive) global alignment. Details, including formulas, are provided in the supplementary materials Details of measuring submitter similarity and Details of measuring sequence similarities.
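In code, the two quantities combine as follows (a sketch that takes pre-computed BLAST statistics as inputs rather than re-running BLAST; the exact formulas used in the study are in its supplementary materials):

```python
def local_identity(identical_bases, alignment_length):
    """BLAST-style local identity over the aligned region, as a percentage."""
    return 100.0 * identical_bases / alignment_length

def alignment_proportion(identical_bases, length_a, length_b):
    """Identical bases divided by the length of the longer sequence of the
    pair: a cheap proxy for global coverage without a global alignment."""
    return 100.0 * identical_bases / max(length_a, length_b)

# Example: a 500-base local hit with 495 identities between a 500 bp record
# and a 2000 bp record has high local identity but low coverage.
print(local_identity(495, 500))               # 99.0
print(alignment_proportion(495, 500, 2000))   # 24.75
```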
Category 1, sequence level
Exact sequences. This category consists of records that share exact sequences. We require that the local identity and local alignment proportion must both be 100%. While this cannot guarantee that the two sequences are exactly identical without a full global alignment, having both local identity and alignment coverage of 100% strongly implies that two records have the same sequences.
Category 2, sequence level
Similar sequences. This category consists of records that have near-identical sequences, where the local identity and local alignment proportion are below 100% but no less than 90%.
Category 3, sequence level
Exact fragments. This category consists of records that have identical subsequences, where the local identity is 100% and the alignment proportion is < 90%, implying that the duplicate is identical to a fragment of its replacement.
Category 4, sequence level
Similar fragments. By analogy with the relationship between Categories 1 and 2, this category relaxes the constraints of Category 3. It uses the same alignment proportion criterion as Category 3, but reduces the requirement on local identity to no less than 90%.
Category 5, sequence level
Low-identity sequences. This category corresponds to duplicate pairs that exhibit weak or no sequence similarity. A pair falls into this category if it satisfies any of three tests: first, the local sequence identity is below 90%; second, the BLAST output is 'NO HIT', that is, no significant similarity has been found; third, the BLAST expect value (E-value) is above 0.001, that is, the best match is not significant enough.
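Putting the five sequence-level criteria together, a pair can be assigned a category from its BLAST statistics. The sketch below mirrors the thresholds above; the handling of borderline values (for example a 100% identity hit covering between 90% and 100% of the longer sequence) follows one reasonable reading of the text.

```python
def sequence_category(local_identity, alignment_proportion,
                      no_hit=False, e_value=None):
    """Assign a duplicate pair to one of the five sequence-level categories.

    local_identity, alignment_proportion: percentages derived from BLAST
    no_hit: True if BLAST reported no significant similarity
    e_value: BLAST expect value of the best hit, if any
    """
    if no_hit or (e_value is not None and e_value > 0.001) or local_identity < 90:
        return "Low-identity sequences"
    if local_identity == 100 and alignment_proportion == 100:
        return "Exact sequences"
    if local_identity == 100 and alignment_proportion < 90:
        return "Exact fragments"
    if alignment_proportion < 90:
        return "Similar fragments"       # identity >= 90% but low coverage
    return "Similar sequences"           # identity and coverage both >= 90%

print(sequence_category(100, 100))   # Exact sequences
print(sequence_category(100, 45))    # Exact fragments
print(sequence_category(95, 96))     # Similar sequences
print(sequence_category(80, 80))     # Low-identity sequences
```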
Categories based on annotations
The categories at the annotation level are identified based on record submitters' annotations in the 'DEFINITION' field. Some annotations are consistently used across the organisms, so we used them to categorize records.
If at least one record of the pair contains the words 'WORKING DRAFT', it will be classified as Working draft, and similarly for Sequencing-in-progress and Predicted, containing 'SEQUENCING IN PROGRESS' and 'PREDICTED', respectively.
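The corresponding annotation-level check is a simple keyword test on the DEFINITION lines of the two records in a pair:

```python
ANNOTATION_CATEGORIES = {
    "Working draft": "WORKING DRAFT",
    "Sequencing-in-progress": "SEQUENCING IN PROGRESS",
    "Predicted": "PREDICTED",
}

def annotation_categories(definition_a, definition_b):
    """Return the annotation-level categories triggered by either record's
    DEFINITION field (a pair may fall into more than one category)."""
    text = (definition_a + " " + definition_b).upper()
    return [name for name, keyword in ANNOTATION_CATEGORIES.items() if keyword in text]

print(annotation_categories(
    "Homo sapiens chromosome 7 clone, WORKING DRAFT SEQUENCE",
    "Homo sapiens chromosome 7 clone, complete sequence"))
# ['Working draft']
```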
A more detailed categorization could be developed based on this information. For instance, there are cases where both a duplicate and its replacement are working drafts, and other cases where the duplicate is a working draft while the replacement is the finalized record. It might also be appropriate to merge Working draft and Sequencing-in-progress into one category, since they seem to capture the same meaning. However, to respect the original distinctions made by submitters, we have retained them. Table 2 shows the distribution of duplicate types in selected organisms. The distribution for all of the organisms is summarized in Supplementary Table S2. Example records for each category are also summarized in Supplementary Table S3.
Presence of different duplicate types
Recall that existing work mainly focuses on duplicates with similar or identical sequences. However, based on the duplicates in our collection, we observe that duplicates under the Exact sequence and Similar sequence categories only represent a fraction of the known duplicates. Only nine of the 21 organisms have Exact sequence as the most common duplicate type, and six organisms have small numbers of this type. Thus, the general applicability of prior proposals for identifying duplicates is questionable.
Additionally, it is apparent that the prevalence of duplicate types is different across the organisms. For sequence-based categorization, for nine organisms the highest prevalence is Exact sequence (as mentioned above), for two organisms it is Similar sequences, for eight organisms it is Exact fragments, and for three organisms it is Similar fragments (one organism has been counted twice since Exact sequence and Similar fragments have the same count). It also shows that ten organisms have duplicates that have relatively low sequence identity.
Overall, even this simple initial categorization illustrates the diversity and complexity of known duplicates in the primary nucleotide databases. In other work (53), we reproduced a representative duplicate detection method using association rule mining (12) and evaluated it with a sample of 3498 merged groups from Homo sapiens. The performance of this method was extremely poor. The major underlying issues were that the original dataset only contains duplicates with identical sequences and that the method did not consider diverse duplicate types.
Thus, it is necessary to categorize and quantify duplicates to find out distinct characteristics held by different categories and organisms; we suggest that these different duplicate types must be separately addressed in any duplicate detection strategy.
The melting temperature of a DNA sequence is the temperature at which half of the molecules of the sequence form double strands while the other half are single-stranded; it is a key sequence property that is commonly used in molecular studies (55). Accurate prediction of the melting temperature is an important factor in experimental success (56). The GC content and the melting temperature are correlated, as the former is used in determination of the latter. The details of the calculations of GC content and melting temperature are provided in the supplementary Details of formulas in the case study.
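The sketch below shows the general shape of such calculations, using the standard definition of GC content together with two commonly used melting-temperature approximations; these formulas are illustrative and are not necessarily the basic, salted and advanced formulas used in the study's supplement.

```python
import math

def gc_content(seq):
    """GC content of a nucleotide sequence, as a percentage."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def tm_basic(seq):
    """A commonly used GC-based approximation:
    Tm = 64.9 + 41 * (nG + nC - 16.4) / N."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

def tm_salt_adjusted(seq, na_molar=0.05):
    """A common salt-adjusted approximation:
    Tm = 81.5 + 16.6*log10([Na+]) + 0.41*(%GC) - 675/N."""
    return 81.5 + 16.6 * math.log10(na_molar) + 0.41 * gc_content(seq) - 675.0 / len(seq)

exemplar  = "ATGGCGTACGTTAGCGGATCCGTACGATCGATCGGCTAAGCT"
duplicate = exemplar[:30]                 # e.g. a fragment-type duplicate of the exemplar
for s in (exemplar, duplicate):
    print(len(s), round(gc_content(s), 2), round(tm_basic(s), 2), round(tm_salt_adjusted(s), 2))
```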
We computed and compared these two characteristics in two settings: by comparing exemplars with the original group, which contains the exemplars along with their duplicates; and by comparing exemplars with their corresponding duplicates, but with the exemplar removed.
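The comparison itself reduces to the mean and standard deviation of the absolute difference between each exemplar and the mean of its group, computed once for the group including the exemplar and once with the exemplar excluded (the numbers below are hypothetical):

```python
import numpy as np

def mdiff_and_std(exemplar_values, group_means):
    """Mean and standard deviation of |exemplar - group mean| over all groups."""
    diffs = np.abs(np.asarray(exemplar_values) - np.asarray(group_means))
    return float(diffs.mean()), float(diffs.std())

# Hypothetical GC-content values: one exemplar per merged group, the mean GC of
# (a) the whole group and (b) the duplicates only (exemplar excluded).
exemplars        = [41.2, 55.0, 47.8]
group_means      = [41.0, 54.1, 48.5]
duplicates_means = [40.8, 53.2, 49.2]
print(mdiff_and_std(exemplars, group_means))
print(mdiff_and_std(exemplars, duplicates_means))
```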
Selected results are in Table 3 (visually represented in Figures 1 and 2) and Table 4 (visually represented in Figures 3 and 4), respectively; full results are in Supplementary Tables S4 and S5. In these tables, categories are the same as in Table 1; mdiff and std denote the mean and standard deviation of the absolute difference between each exemplar and the mean of the original group (Table 3) or of the duplicates-only group (Table 4); Tb, Ts and Ta denote the melting temperature calculated using the basic, salted and advanced formulas in the supplement, respectively. First, it is obvious that the existence of duplicates introduces much redundancy. After de-duplication, the size of the original duplicate set is reduced by 50% or more for all the organisms shown in the table. This follows from the structure of the data collection.

Critically, it is also evident that all the categories of duplicates except Exact sequences introduce differences in the calculation of GC content and melting temperature. These mdiff (mean of difference) values are significant, as they exceed other experimental tolerances, as we explain below. (The values illustrating larger distinctions have been made bold in the tables.) Table 2 already shows that exemplars differ from their original groups. When examining exemplars against their specific pairs, the differences become even larger, as shown in Table 3. Their mean differences and standard deviations differ, meaning that exemplars have distinct characteristics compared to their duplicates.

These differences are significant and can impact interpretation of the analysis. It has been argued in the context of a wet-lab experiment exploring GC content that well-defined species fall within a 3% range of variation in GC percentage (57). Here, duplicates under specific categories could introduce variation of close to or more than 3%. For melting temperatures, dimethyl sulphoxide (DMSO), an external chemical factor, is commonly used to facilitate the amplification process used in determining the temperature. An additional 1% DMSO leads to a temperature difference ranging from 0.5 °C to 0.75 °C (55). However, six of our measurements in Homo sapiens have differences of over 0.5 °C and four of them are 0.75 °C or more, showing that duplicates alone can have as much or more impact than external factors.
Overall, other than the Exact fragments and Similar fragments categories, the majority of the remainder has differences of GC content and melting temperature of over 0.1 °C. Many studies report these values to three digits of precision, or even more (58)(59)(60)(61)(62)(63). The presence of duplicates means that these values in fact have considerable uncertainty. The impact depends on which duplicate type is considered. In this study, duplicates under the Exact fragments, Similar fragments and Low-identity categories have comparatively higher differences than other categories. In contrast, Exact sequences and Similar sequences have only small differences. The impact of duplicates is also dependent on the specific organism: some have specific duplicate types with relatively large differences, and the overall difference is large as well; some only differ in specific duplicate types, and the overall difference is smaller; and so on. Thus it is valuable to be aware of the prevalence of different duplicate types in specific organisms.
In general, we find that duplicates introduce much redundancy; this is certainly disadvantageous for tasks such as sequence searching. Also, exemplars have characteristics distinct from their original groups, such that sequence-based measurements involving duplicates may give biased results. The differences are more pronounced for specific duplicate pairs within the groups. For studies that randomly select records or that use datasets of limited size, the results may be affected, owing to these potentially considerable differences. Together, these observations show why de-duplication is necessary. Note that the purpose of our case study is not to argue that previous studies are wrong, nor to produce better estimates of melting temperatures. Our aim is only to show that the presence of duplicates, and of specific types of duplicates, can have a meaningful impact on biological studies based on sequence analysis. Furthermore, it provides evidence for the value of expert curation of sequence databases (64).
Our case study illustrates that different kinds of duplicates can have distinct impacts on biological studies. As described, the Exact sequences records have only a minor impact in the context of the case study. Such duplicates can be regarded as redundant. Redundancy increases the database size and slows down database search, but may have no impact on biological studies.
In contrast, some duplicates can be defined as inconsistent. Their characteristics are substantially different from those of the 'primary' sequence record to which they correspond, so they can mislead sequence analysis. We need to be aware of the presence of such duplicates, and consider whether they must be detected and managed.
In addition, we observe that the impact of these different duplicate types, and whether they should be considered to be redundant or inconsistent, is task-dependent. In the case of GC content analysis, duplicates under Similar fragments may have severe impact. For other tasks, there may be different effects; consider for example exploration of the correlation between non-coding and coding sequences (19) and the task of finding repeat sequence markers (20). We should measure the impact of duplicates in the context of such activities and then respond appropriately.
Duplicates can have impacts in other ways. Machine learning is a popular and effective technique for analysis of large sets of records. The presence of duplicates, however, may bias the performance of learning techniques because they can affect the inferred statistical distribution of data features. For example, it was found that much duplication existed in a popular dataset that has been widely used for evaluating machine learning methods for detecting anomalies (65); its training dataset has over 78% redundancy, with 1 074 992 distinct records over-represented in 4 898 431 records. Removal of the duplicates significantly changed the reported performance, and behaviour, of methods developed on that data.
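Measuring this kind of redundancy is straightforward: for a tabular dataset it is the fraction of rows removed by exact de-duplication (a generic pandas sketch, not the cited study's code).

```python
import pandas as pd

def redundancy_fraction(df):
    """Fraction of rows that are exact duplicates of another row."""
    return 1.0 - len(df.drop_duplicates()) / len(df)

# Toy example; for the intrusion-detection dataset cited above this fraction
# is roughly 0.78 (about 1.07 million distinct rows out of about 4.9 million).
df = pd.DataFrame({"feature": [1, 1, 2, 2, 2, 3], "label": ["a", "a", "b", "b", "b", "c"]})
print(redundancy_fraction(df))   # 0.5
```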
In bioinformatics, we also observe this problem. In earlier work we reproduced and evaluated a duplicate detection method (12) and found that it has poor generalization performance because the training and testing dataset consists of only one duplicate type (53). Thus, it is important to construct training and testing datasets from representative instances. In general, there are two strategies for addressing this issue: one is to use different candidate selection techniques (66); another is to use large-scale validated benchmarks (67). In particular, duplicate detection surveys point out the importance of the latter: as different individuals have different definitions of or assumptions about what duplicates are, the corresponding methods often work only on narrow datasets (67).
Conclusion
Duplication, redundancy and inconsistency have the potential to undermine the accuracy of analyses undertaken on bioinformatics databases, particularly if the analyses involve any form of summary or aggregation. We have undertaken a foundational analysis to understand the scale, kinds and impacts of duplicates. For this work, we analysed a benchmark consisting of duplicates spotted by INSDC record submitters, one of the benchmarks we collected in (53). We have shown that the prevalence of duplicates in the broad nucleotide databases is potentially high. The study also illustrates the presence of diverse duplicate types and that different organisms have different prevalence of duplicates, making the situation even more complex. Our investigation suggests that different or even simplified definitions of duplicates, such as those in previous studies, may not be valuable in practice.
The quantitative measurement of these duplicate records showed that they can vary substantially from other records, and that different kinds of duplicates have distinct features that imply that they require different approaches for detection. As a preliminary case study, we considered the impact of these duplicates on measurements that depend on quantitative information in sequence databases (GC content and melting temperature analysis), which demonstrated that the presence of duplicates introduces error.
Our analysis illustrates that some duplicates only introduce redundancy, whereas other types lead to inconsistency. The impact of duplicates is also task-dependent; it is a fallacy to suppose that a database can be fully deduplicated, as one task's duplicate can be valuable information in another context.
The work we have presented, based on the merge-based benchmark as a source of duplication, may not be fully representative of duplicates overall. Nevertheless, the collected data and the conclusions derived from them are reliable. Although records were merged for different reasons, these reasons reflect the diversity and complexity of duplication. It is far from clear how the overall prevalence of duplication might be more comprehensively assessed. This would require a discovery method, which would inherently be biased by the assumptions of the method. We therefore present this work as a contribution to understanding what assumptions might be valid.
Supplementary data
Supplementary data are available at Database Online.
The decay rate of three different radioactive sources, 40K, 137Cs and natTh, has been measured with NaI and Ge detectors. Data have been analyzed to search for possible variations in coincidence with the two strongest solar flares of the years 2011 and 2012. No significant deviations from standard expectation have been observed, with a sensitivity of a few 10⁻⁴. As a consequence, we could not find any effect like that recently reported by Jenkins and Fischbach: a few per mil decrease in the decay rate of 54Mn during solar flares in December 2006.
Introduction
In the past years, a correlation between the Sun and the decay rate of radioactive isotopes has been proposed. In particular, two effects have been considered: the annual modulation due to the seasonal variation of the Earth-Sun distance [1] and the decrease of the decay rate during a solar flare [2]. In this letter we are interested in the latter phenomenon.
Briefly, solar flares are explosions on the surface of the Sun near sunspots. They are powered by the release of magnetic energy stored in the corona, up to one hundredth of the solar luminosity, and they affect all layers of solar atmosphere, from the photosphere to the corona. On the Sun this amount of energy is released within a few minutes to tens of minutes. In this interval the plasma is heated to tens of millions of degrees with a strong X-ray emission and electron and proton acceleration (up to several tens and hundreds of MeV, respectively).
In particular, the flares from December 2nd 2006 to January 1st 2007 gave rise to X-ray fluxes which, measured by the Geostationary Operational Environmental Satellites (GOES), were of a few times 10⁻⁴ W/m² at the peak (see Fig. 1 of reference [2] for details). At that time the activity of a ∼1 µCi source of 54Mn was being measured by Jenkins and Fischbach [2] with a 2x2 inch NaI crystal detecting the 835 keV γ-ray emitted after the electron capture decay. A significant dip (up to 4·10⁻³, a ∼7 σ effect) in the count rate, averaged over a time interval of 4 hours, was observed in coincidence with the solar flares. On the other hand, a different experiment with a ∼10⁻³ sensitivity, carried out by Parkhomov [3], did not observe any deviation in the activity of 60Co, 90Sr-Y and 239Pu sources in coincidence with the same flare.
After a few years of quiet Sun, solar activity is now increasing, as shown both by the increase of the steady X-ray flux and by the X-flares and other typical solar phenomena. As a matter of fact, we are approaching the maximum of the 11 year solar cycle, which is predicted to take place in Fall 2013. In our analysis we focus on the two most intense flares of the last years, namely those that occurred in August 2011 and March 2012: X6.9 on August 9th 2011 @ 08:08 UTC and X5.4 on March 7th 2012 @ 00:24 UTC [4]. Solar flares are classified according to the power of the X-ray flux peak near the Earth as measured by the GOES-15 geostationary satellite: X identifies the class of the most powerful ones, with a power at the peak larger than 10⁻⁴ W/m² (within the X-class there is then a linear scale). The two flares were well defined in time (a few minutes) and they illuminated the entire Earth. Their intensities are comparable to, or even larger than, those observed in December 2006. During the 2011 flare the activities of the 137Cs and natTh sources were being measured with a Ge and with a NaI detector, respectively. On the other hand, during the 2012 flare the 137Cs and 40K sources were being studied with the same Ge detector and with a different NaI detector, as described in the next section. These different nuclides give the possibility to search for possible effects correlated with solar flares in three different decay processes: alpha, beta and electron capture. Table 1 summarizes the information on the experimental set-ups we are running to search for modulations in the decay rates of different radioactive sources (periods from a few days to one year). In particular, in this letter we only consider an interval of ±10 days around the time of the 2 solar flares in order to search for any significant deviation (positive or negative) in the decay rate correlated with the flares. The choice of this time window is quite arbitrary, since there is no model, to our knowledge, that correlates the flare intensity with the activity of a radioactive source. On the other hand, we note that, according to data shown in Fig. 2 of [2], the alleged influence of the flare on the source activity lasts for a few days around the occurrence of the flare.
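For reference, the class labels quoted above map onto peak flux in a simple way; the short Python sketch below is an illustrative helper of our own (not part of the GOES data distribution) that converts a class such as X6.9 into the corresponding 0.1-0.8 nm peak flux.

```python
# Convert a GOES flare class (e.g. "X6.9") into peak X-ray flux in W/m^2.
# Class letters correspond to decades of flux in the 0.1-0.8 nm band:
# A = 1e-8, B = 1e-7, C = 1e-6, M = 1e-5, X = 1e-4 W/m^2.
BASE_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def class_to_flux(label: str) -> float:
    letter, multiplier = label[0].upper(), float(label[1:])
    return BASE_FLUX[letter] * multiplier

for flare in ("X6.9", "X5.4"):
    print(flare, f"{class_to_flux(flare):.1e} W/m^2")
# X6.9 -> 6.9e-04 W/m^2 and X5.4 -> 5.4e-04 W/m^2, consistent with the
# ~7e-4 W/m^2 peak flux quoted in the Conclusion.
```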
Potassium source
A 3x3 inch NaI crystal is surrounded by about 16 kg of potassium bicarbonate powder. The set-up, installed above ground, is shielded by at least 10 cm of lead. The total count rate in the 17-3400 keV energy window is about 800 Hz, to be compared to the background of less than 3 Hz when the source is removed. The energy spectrum is dominated by the full energy peak at 1461 keV energy due to the electron-capture decay of 40 K to 40 Ar. The peak position and the energy resolution ( 90 keV at 1461 keV) are fairly constant over months.
Cesium source
The activity of a 3 kBq 137Cs source has been measured since June 2011. The set-up is installed in the low background facility STELLA (SubTErranean Low Level Assay) located in the underground laboratories of the Laboratori Nazionali del Gran Sasso (LNGS). The detector is a p-type High Purity Germanium (96% efficiency) with the source firmly fixed to its copper endcap; it is surrounded by at least 5 cm of copper followed by 25 cm of lead to suppress the laboratory gamma-ray background. Finally, shielding and detector are housed in a polymethylmethacrylate box flushed with nitrogen at slight overpressure, which works as an anti-radon shield. The total count rate above the 7 keV threshold is 680 Hz. The intrinsic background, i.e. the shielded detector without the Cs source, has been measured over a period of 70 days: thanks to the underground environment and to the detector shielding, it is very low, down to about 40 counts/hour above the threshold (0.01 Hz). The spectrum is dominated by the 661.6 keV line due to the isomeric transition of 137mBa from the beta decay of 137Cs.
Details of the experiment and the results obtained in the first 210 days of running to search for an annual modulation of the 137 Cs decay constant are given in [5]. Briefly, a limit of 8.5·10 −5 at 95% C.L. is set on the maximum allowed amplitude independently of the phase.
Thorium source
The activity of a sample of natural Thorium is measured with a 3x3 inch NaI crystal installed underground in the same laboratory as the Germanium experiment with the 137Cs source. The sample is an optical lens, made of a special glass heavily doped with Thorium Oxide. Note that this technique, used to improve the optical properties of glass, was quite common until the seventies. The lens is placed close to the crystal housing and both the lens and the NaI detector are shielded with at least 15 cm of lead. The total count rate above the threshold of 10 keV is about 3200 Hz (gammas from 228Ac, 212Bi, 212Pb, 208Tl), with a background of 2.3 Hz (due to 40K, the thorium and uranium chains and lead X-rays). The energy spectrum is acquired once a day, with a corrected dead time of 2.63%. Even though the chain is not at equilibrium, the total count rate increases by only 1.7 · 10⁻⁴ over a time period of 1 month.
Results
We consider separately the two largest solar flares occurred in the data taking period, i.e. X6.9 August 9th 2011 and X5.4 March 7th 2012 [4]. For each of them only two of the set-ups given in Table 1 were running. As a matter of fact, the nat Th set-up went out of order in February 2012, due to a failure in the DAQ system, whereas the 40 K set-up started taking data in November 2011. On the contrary, the 137 Cs set-up is continuously running since June 2011. Figure 1 shows the data collected in a 20 day window centered on the August 9th 2011 flare (the day is given in terms of the Modified Julian Date). The X-ray peak flux is plotted in linear scale and given in W/m 2 , in the 0.1-0.8 nm band measured by the GOES-15 satellite [4]. Inside the two bands are plotted the residuals of the normalized count rate of the nat Th and 137 Cs sources (i.e. the difference between the measured and expected count rate divided by the measured one), averaged over a period of 1 day.
The error bars are purely statistical. Systematic errors are negligible compared to the statistical ones over a data-taking period of only a few days. For the natTh data a linear trend (5.7 ppm/day), due to the recovery of secular equilibrium, is subtracted, while the 137Cs data are corrected for the exponential decay of the source, using the nominal mean life value of 43.38 y. This latter correction amounts to 63 ppm/day.
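Both slopes can be cross-checked with elementary arithmetic; the short Python sketch below uses only numbers quoted in the text (the 43.38 y mean life and the 1.7 · 10⁻⁴ monthly increase of the natTh rate) and reproduces the 63 ppm/day and 5.7 ppm/day figures.

```python
# Consistency check of the drift corrections quoted in the text.

# 137Cs: with mean life tau = 43.38 y, the daily fractional decrease of the
# activity is approximately 1/tau (expressed in days) for t << tau.
tau_days = 43.38 * 365.25
cs_daily = 1.0 / tau_days
print(f"137Cs decay correction: {cs_daily * 1e6:.0f} ppm/day")   # ~63 ppm/day

# natTh: a 1.7e-4 relative increase of the count rate over one month
# corresponds to the linear trend subtracted from the data.
th_daily = 1.7e-4 / 30.0
print(f"natTh linear trend:     {th_daily * 1e6:.1f} ppm/day")   # ~5.7 ppm/day
```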
From the data we can conclude that the 137Cs source does not show any significant dip or excess in correspondence with the main X-ray peak. On the other hand, the natTh source shows a questionable dip in the count rate, starting 1.5 days before the X-flare. However, the dip is well compatible with a statistical fluctuation. As a matter of fact, fluctuations of the same order of magnitude can be seen at different times during the data taking, uncorrelated with X-ray flux peaks. In any case, the existence in our data of an effect as large as the one reported in [2], of the order of a few per mil per day and lasting several days, can be excluded. The maximum effect compatible with our data is smaller than 3 · 10⁻⁴ per day at 95% confidence level for the X6.9 flare. Such a limit is obtained by adding twice the error to the value of the dip.
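For orientation, the order of magnitude of this sensitivity can be recovered from Poisson counting statistics alone; the sketch below uses the total count rates quoted in the set-up descriptions and is not the actual analysis chain.

```python
# Order-of-magnitude estimate of the statistical sensitivity of a decay-rate
# measurement from Poisson counting statistics alone.
from math import sqrt

def relative_sensitivity(rate_hz: float, averaging_time_s: float) -> float:
    """1-sigma relative uncertainty on the count rate, 1/sqrt(N)."""
    return 1.0 / sqrt(rate_hz * averaging_time_s)

day = 86_400.0
for label, rate in (("137Cs (Ge, ~680 Hz)", 680.0),
                    ("natTh (NaI, ~3200 Hz)", 3200.0)):
    sigma = relative_sensitivity(rate, day)
    print(f"{label}: 1-sigma over 1 day ~ {sigma:.1e}")
# A few 1e-4 at the ~2-sigma level, comparable to the 3e-4 per day limit
# quoted above (which also accounts for the size of the observed dip).
```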
In Figure 2 similar data for the March 7th 2012 flare are presented. Also in this case, no significant effect related to the occurrence of the X5.4 flare can be seen, either in the 137Cs or in the 40K data. An upper limit similar to the one given above can be set by taking twice the statistical error. Note that no effect can be seen either in correspondence with the arrival on Earth of the two CMEs (coronal mass ejections) related to this Earth-facing flare, on March 8th and 11th 2012, respectively (55994 and 55997 MJD).
During the March 7th 2012 flare we were also taking the 137Cs data with a fully digital list-mode data acquisition system, recording the time of each event. As a consequence, the source count rate can be averaged over times shorter than a day or an hour. In particular, Figure 3 shows the 137Cs source residuals averaged over 10 minutes in a 24 hour time window containing the X-ray peak. Again, no fast-occurring effect incompatible with a statistical fluctuation and larger than 3 · 10⁻³ (at 95% C.L.) can be seen (we note that the sensitivity scales with the square root of the averaging time). We could not repeat this analysis for the 2011 flare because at the very time of the flare the data acquisition had been stopped for an hour due to liquid nitrogen refilling of the Germanium detector.
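The quoted square-root scaling links the daily and 10-minute limits directly; a two-line check, using only the limits stated in the text:

```python
# Scaling of the statistical sensitivity with averaging time: sigma ~ 1/sqrt(T).
from math import sqrt

limit_per_day = 3e-4          # limit quoted for 1-day averaging
scale = sqrt(1440 / 10)       # 1 day = 1440 min, rebinned into 10-min intervals
print(f"expected 10-min limit ~ {limit_per_day * scale:.1e}")
# ~3.6e-3, i.e. of the same order as the 3e-3 (95% C.L.) limit quoted above.
```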
Conclusion
The gamma activity of three different sources, 40K (electron capture), 137Cs (beta decay) and natTh (alpha and beta decays), has been measured during the occurrence of at least one of the two strongest solar flares of the years 2011 and 2012. No significant deviations from expectations have been observed. Up to now there are no quantitative models able to correlate the flare intensity with the decay constant of radioactive isotopes. However, from our data it is possible to conclude that a universal deviation of the decay rate (alpha, beta or electron capture decay) is less than 3·10⁻⁴ per day for a flare of 7·10⁻⁴ W/m² flux at the peak. By 'universal' we mean a deviation in the count rate affecting in the same way all radioactive isotopes decaying through the same basic mechanism. We are now continuing the lifetime measurements to search for modulations in the decay rate of different nuclides. This way we will have the opportunity to further investigate the issue of the solar flare correlation with nuclear decay in case stronger flares should happen closer in time to the expected 2013 solar maximum.
[Figure caption note: the right-hand vertical scale gives the X-ray flux measured in W/m²; the shaded band is drawn at ±3 · 10⁻³ from the expected value.]
Acknowledgments
The Director of the LNGS and the staff of the Laboratory are warmly acknowledged for their support. We want to thank also Prof. Roberto Battiston for his continuous encouragement. | v2 |
Induction of autophagy by cystatin C: a potential mechanism for prevention of cerebral vasospasm after experimental subarachnoid hemorrhage
Background Studies have demonstrated that autophagy pathways are activated in the brain after experimental subarachnoid hemorrhage (SAH) and this may play a protective role in early brain injury. However, the contribution of autophagy in the pathogenesis of cerebral vasospasm (CVS) following SAH, and whether up-regulated autophagy may contribute to aggravate or release CVS, remain unknown. Cystatin C (CysC) is a cysteine protease inhibitor that induces autophagy under conditions of neuronal challenge. This study investigated the expression of autophagy proteins in the walls of basilar arteries (BA), and the effects of CysC on CVS and autophagy pathways following experimental SAH in rats. Methods All SAH animals were subjected to injection of 0.3 mL fresh arterial, non-heparinized blood into the cisterna magna. Fifty rats were assigned randomly to five groups: control group (n = 10), SAH group (n = 10), SAH + vehicle group (n = 10), SAH + low dose of CysC group (n = 10), and SAH + high dose of CysC group (n = 10). We measured proteins by western blot analysis, CVS by H&E staining method, morphological changes by electron microscopy, and recorded neuro-behavior scores. Results Microtubule-associated protein light chain-3, an autophagosome biomarker, and beclin-1, a Bcl-2-interacting protein required for autophagy, were significantly increased in the BA wall 48 h after SAH. In the CysC-handled group, the degree of CVS, measured as the inner BA perimeter and BA wall thickness, was significantly ameliorated in comparison with vehicle-treated SAH rats. This effect paralleled the intensity of autophagy in the BA wall induced by CysC. Conclusions These results suggest that the autophagy pathway is activated in the BA wall after SAH and CysC-induced autophagy may play a beneficial role in preventing SAH-induced CVS.
Background
Cerebral vasospasm (CVS) is a frequent and devastating complication in patients with cisternal subarachnoid hemorrhage (SAH) and represents a significant cause of morbidity and mortality in neurosurgical patients [1]. Despite promising therapeutic approaches, such as triple-H therapy, calcium channel blockades, sodium nitroprusside, and endothelin-receptor antagonists, successful treatment after SAH remains inadequate and the underlying pathogenic mechanisms of CVS remain unidentified.
Autophagy is a cellular process of "self-digestion". When cells encounter stress conditions, such as nutrient limitation, heat, oxidative stress, and/or the accumulation of damaged or excess organelles and abnormal cellular components, autophagy is induced as a degradative pathway. The elimination of potentially toxic components coupled with the recycling of nutrients aids in cell survival [2]. Autophagy pathway activation may play an important role in several central nervous system (CNS) diseases, such as cerebral ischemia [3], hypoxia-ischemia induced brain injury [4], traumatic brain injury [5], intracerebral hemorrhage [6], and SAH [7].
A previous report from our group [8] showed that autophagy was significantly increased in the cerebral cortex of rats and expression peaked at 24 h after induction of SAH. Early brain injury (EBI), seen as brain edema, blood-brain barrier impairment, cortical apoptosis, and clinical behavior changes, were significantly ameliorated by intracerebroventricular infusion of rapamycin (RAP, an autophagy activator). However, 3-methyladenine (an autophagy inhibitor) decreased expression of light chain-3 (LC3) and beclin-1, and aggravated the EBI, suggesting that the autophagy pathway may play a beneficial role in EBI development after SAH. Nevertheless, a literature review produced no studies that investigate the potential contribution of autophagy to CVS following SAH. Previous reports suggested that autophagy may suppress inflammation, oxidant activity and apoptosis, which had been shown to play a vital role in arterial wall thickening and vasculature stiffening following SAH [7,8]. The aim of the current study was to evaluate the expression of the autophagy pathway in the basilar artery (BA) wall in an experimental rat model of SAH and determine the potential role of autophagy induced by CysC in the development of CVS.
Animals
The animal use and care protocols, including all operation procedures, were approved by the Animal Care and Use Committee of Soochow University and conformed to the Guide for the Care and Use of Laboratory Animals by the National Institute of Health, China. Fifty male Sprague-Dawley rats weighing from 300 to 350 g were purchased from the Animal Center of the Chinese Academy of Sciences (Shanghai, China). They were acclimated in a humidified room and maintained on a standard pellet diet at the Animal Center of Soochow University for at least 10 days. The temperature in both the feeding room and the operation room was maintained at 25°C.
Subarachnoid hemorrhage (SAH) model
SAH was induced by the single-hemorrhage injection model in rats as previously described [9]. Briefly, after the animals were anesthetized with 4% chloral hydrate (400 mg/kg body weight), a small suboccipital incision was made, exposing the arch of the atlas, the occipital bone, and the atlanto-occipital membrane. The cisterna magna was tapped using a 27-gauge needle, and 0.3 mL of cerebrospinal fluid was gently aspirated. Fresh, non-heparinized autologous blood (0.3 mL) from the femoral artery was then injected aseptically into the cisterna magna over a period of 2 min. Immediately after the injection of blood, the hole was sealed with glue to prevent fistula formation. The animals were tilted at a 30° angle for 30 min with their heads down, in a prone position, to permit pooling of blood around the BA. Afterwards, the rats were returned to their cages, the room temperature was kept at 23 ± 1°C, and 20 mL of 0.9% NaCl was injected subcutaneously to prevent dehydration.
Experimental design
Fifty rats were assigned randomly to five groups: control group (n = 10), SAH group (n = 10), SAH + vehicle group (n = 10), SAH + low dose of CysC group (n = 10), and SAH + high dose of CysC group (n = 10). CysC was dissolved in normal saline, and the final concentrations were 2 μg/0.1 mL (low concentration) and 10 μg/0.1 mL (high concentration), respectively. A volume of 0.1 mL of the CysC solution in normal saline (NS) was administered directly into the cisterna magna 30 min before the blood injection as a means of prevention and treatment, while vehicle animals received an equal volume of NS alone into the cisterna magna.
The rats were re-anesthetized and euthanized 48 h after blood injection by means of transthoracic cannulation of the left ventricle; they were perfused with 300 mL of phosphate-buffered saline solution under a pressure of 120 cmH 2 O. The BAs were immediately removed and 5 of 10 specimens in each group were placed in the fixative solution (a mixture of 4% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M phosphate buffer, pH 7.4) for 24 h for histopathological examination and morphometric analysis, and another five specimens were frozen in liquid nitrogen for Western blot analysis.
Morphometric measurements
The BA luminal perimeter and wall thickness for each specimen was measured using a digitized image analysis system with Image-pro Plus software. The specimens for light microscopy study were dehydrated in graded ethanol, embedded in paraffin, sectioned, and stained with hematoxylin and eosin. Light microscopic sections of arteries were projected as digitized video images. The inner perimeters of the vessels were measured by tracing the luminal surface of the intima. The thickness of the vessel wall was determined by taking four measurements of each artery that extended from the luminal surface of the intima to the outer limit of the media, to avoid inclusion of the adventitia. The four measurements were averaged.
Western blotting analysis
The frozen brain samples were mechanically lysed in 20 mM Tris, pH 7.6, containing 0.2% sodium dodecyl sulfate (SDS), 1% Triton X-100, 1% deoxycholate, 1 mM phenylmethylsulphonyl fluoride, and 0.11 IU/mL aprotinin (all purchased from Sigma-Aldrich). Lysates were centrifuged at 12,000 ×g for 20 min at 4°C. The protein concentration was estimated by the Bradford method using the Nanjing Jiancheng protein assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). The samples (60 μg per lane) were separated by 8% SDS polyacrylamide gel electrophoresis and electro-transferred onto a polyvinylidene-difluoride membrane (Bio-Rad Lab, Hercules, CA, USA). The membrane was blocked with 5% skimmed milk for 2 h at room temperature, incubated overnight at 4°C with primary antibodies directed against LC-3 and beclin-1 (Santa Cruz Biotechnology, Santa Cruz, CA, USA) at the dilutions of 1:200 and 1:150, respectively. Glyceraldehyde-3-phosphate dehydrogenase (diluted in 1:6,000, Sigma-Aldrich) was used as a loading control. After the membrane was washed six times, for 10 min each time, in PBS plus Tween 20 (PBST), it was incubated in the appropriate HRP-conjugated secondary antibody (diluted 1:400 in PBST) for 2 h. The blotted protein bands were visualized by enhanced chemiluminescence Western blot detection reagents (Amersham, Arlington Heights, IL, USA) and were exposed to X-ray film. Developed films were digitized using an Epson Perfection 2480 scanner (Seiko Corp, Nagano, Japan). Optical densities were obtained using Glyko Bandscan software (Glyko, Novato, CA, USA). The tissue of five animals in each group was used for Western blot analysis at 48 h after SAH.
Neurologic scoring
Three behavioral activity examinations (Table 1) were performed at 48 h after SAH using the scoring system reported previously to record appetite, activity, and neurological deficits [10].
Transmission electron microscopy
The brain tissue adjacent to the clotted blood was analyzed in this experiment. Samples for electron microscopy were fixed in phosphate-buffered glutaraldehyde (2.5%) and osmium tetroxide (1%). Dehydration of the cortex was accomplished in acetone solutions at increasing concentrations. The tissue was embedded in an epoxy resin.
Semi-thin (1 μm) sections through the sample were then made and stained with toluidine blue; 600 Å-thin sections were made from a selected area of tissue defined by the semi-thin section, and these were stained with lead citrate and uranyl acetate. Brain ultrastructure was observed under a transmission electron microscope (JEM-1200X).
Statistical analysis
All values are expressed as means ± SEM. Statistical differences between the groups were compared using one-way ANOVA and Mann-Whitney U test. P values <0.05 were considered significant.
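A minimal sketch of these group comparisons in Python, using SciPy's one-way ANOVA and Mann-Whitney U test on invented measurement values (the original analysis may have used different software and the group values below are purely illustrative):

```python
# Minimal sketch of the group comparisons described above, on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical BA wall thickness measurements (arbitrary units) per group.
groups = {
    "control":    rng.normal(10.0, 1.0, 5),
    "SAH":        rng.normal(14.0, 1.0, 5),
    "SAH + CysC": rng.normal(11.5, 1.0, 5),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise Mann-Whitney U test, e.g. SAH vs. SAH + CysC.
u_stat, p_mwu = stats.mannwhitneyu(groups["SAH"], groups["SAH + CysC"])
print(f"Mann-Whitney U (SAH vs SAH + CysC): U = {u_stat:.1f}, p = {p_mwu:.4f}")
```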
General observations
There were no significant differences in body weight, temperature, or injected arterial blood gas data among the experimental groups (data not shown). After induction of SAH, all animals stopped breathing for about 15 s. The mortality rate of rats was 0% (0/10 rats) in the control group and 11% (5/45 rats) in the remaining groups. Widespread distribution of blood was seen in the basal cisterns, circle of Willis, and along the ventral brainstem 48 h after SAH. There were no blood clots in the control group ( Figure 1).
Morphometric vasospasm
The inner perimeter of the BAs in the SAH group and the vehicle group became smaller, and the BA wall became thicker, than in the control group (P < 0.01). We observed moderate arterial narrowing and reduction of the intima in these two groups. Compared with the SAH and vehicle groups, the inner perimeter of the BA in the treatment group was expanded and the thickness of the BA wall was decreased, and these differences were statistically significant (Figures 2 and 3).
Western blot analysis for detecting autophagy activation after SAH
Western blot analysis showed that the level of LC3 and beclin-1 in the BA wall was low in the control group. The expression of LC3 and beclin-1 was significantly increased at 48 h after blood injection in the SAH group and SAH + vehicle group (P <0.05). There was no statistically significant difference between the SAH group and SAH + vehicle group (P >0.05). After CysC injection, the level of LC3 and beclin-1 was markedly upregulated in animals of SAH + CysC group, especially in SAH + high concentration of CysC group (P <0.01) (Figure 4).
Behavior and activity scores
As compared with the control group, clinical behavior function impairment caused by SAH was evident in SAH subjects (P <0.01). No significant difference was seen between the SAH group and SAH + vehicle group (P >0.05). CysC-treated rats showed better performance in this scale system than vehicle-treated rats 48 h after SAH, and the difference was statistically significant (P <0.01). There was no statistically significant difference between low and high concentration of CysC groups (P >0.05) ( Table 2).
Transmission electron microscopy observations
As shown in Figure 5, neurons and glial cells in the controls appeared healthy with normal endoplasmic reticulum, mitochondria, lysosomes, and nucleus. In contrast, diverse morphological changes were found in the cortex 48 h following SAH induction. Superficial neuroglial cells showed severe damage, such as cell harboring, multiple cytoplasmic vacuoles, cells completely lacking cytoplasmic contents, and shrunken nuclei with condensed chromatin. Numerous neurons displayed multiple vacuole-related structures containing electron-dense material or double membranous material. These pathological states were significantly ameliorated by CysC administration.
Discussion
CVS is a common and potentially devastating complication in patients who have sustained SAH and is the most significant cause of morbidity and mortality in these patients [11]. In the present study, we investigated the role of CysC in CVS following SAH in rats, and explored possible mechanisms behind its actions. We made the following novel observations: 1) the pathological changes, including morphological changes, artery narrowing, and thickening of the BA wall, suggest that CVS occurs after SAH; 2) the levels of expression of the autophagy-related proteins LC-3 and beclin-1 were low in the normal control group; 3) autophagy was expressed in the BA wall during the early stage after SAH in rats, suggesting that autophagy may participate in the pathological course of CVS; and 4) in the CysC-treated group, the degree of CVS (inner perimeter of the BA, BA wall thickness, and clinical behavior function) was significantly ameliorated, and this effect paralleled the intensity of autophagy induced by CysC in the BA wall. These findings suggest, for the first time, that SAH may induce vascular autophagy in the spasmed artery and that this might play a role in the pathogenesis of CVS. The therapeutic benefit of post-SAH CysC administration might be due to its salutary effect in modulating the autophagy signaling pathway. CysC is an endogenous cysteine protease inhibitor, ubiquitously expressed and secreted in body fluids [12]. By inhibiting cysteine proteases such as cathepsins B, H, K, L, and S, it has a broad spectrum of biological roles in numerous cellular systems, with growth-promoting activity, inflammation down-regulating function, and anti-viral and anti-bacterial properties [13]. It is involved in numerous and varied processes such as cancer, renal diseases, diabetes, epilepsy, and neurodegenerative diseases such as Alzheimer's disease.
Previous reports have shown that CysC plays a protective role in CNS diseases, such as Alzheimer's disease [14], focal brain ischemia [15], and progressive myoclonic epilepsy type 1, but did not elucidate the mechanism(s) of neuroprotection. Recently, Tizon et al. [14] demonstrated that CysC plays a protective role under conditions of neuronal challenge by inducing autophagy via mTOR inhibition. This neuroprotective function was prevented by inhibiting autophagy with beclin-1 siRNA or 3-methyladenine.
Accumulating evidence shows that the autophagy pathway plays an important role in the pathogenesis of different diseases in the CNS, such as cerebral ischemia [3], traumatic brain injury [5], experimental intracerebral hemorrhage [6], and hypoxia-ischemia brain injury [4]. In the SAH field, Lee et al. [7] demonstrated a significantly increased autophagic activity in the cortex in EBI after SAH. Our previous study [8] indicated that autophagy was significantly increased in the cortex of Sprague-Dawley rats and its expression peaked 24 h after SAH. EBI such as brain edema, blood-brain barrier impairment, cortical apoptosis, and clinical behavior scale were significantly ameliorated by intracerebroventricular infusion of rapamycin (RAP, autophagy activator), while 3-methyladenine decreased expression of LC3 and beclin-1, and aggravated the EBI, suggesting that the autophagy pathway may play a beneficial role in EBI development after SAH. However, until now, no study has been found in the literature investigating the potential contribution of autophagy to CVS following SAH.
Figure 4 Expressions of LC3 and beclin-1 in the BA walls in the control (n = 5, Lane 1), SAH (n = 5, Lane 2), SAH + vehicle (n = 5, Lane 3), SAH + low dose of CysC (n = 5, Lane 4), and SAH + high dose of CysC (n = 5, Lane 5) groups. Upper: representative autoradiograph showing protein expression following SAH by western blot. We detected LC3 at 16 kDa, beclin-1 at 52 kDa, and the loading control glyceraldehyde-3-phosphate dehydrogenase at 36 kDa. Bottom: quantitative analysis of the western blot results for the levels of LC3 and beclin-1. The expression of autophagy-related proteins was low in the control group. The expression of autophagy proteins was significantly increased in the SAH and SAH + vehicle experimental groups compared with controls (P < 0.05). The increased expression was further markedly upregulated by CysC treatment (P < 0.01). *P < 0.05 compared with control group; **P < 0.01 compared with control group; ns P > 0.05 compared with SAH + vehicle group; #P < 0.05 compared with SAH + vehicle group; ##P < 0.01 compared with SAH + vehicle group.
Autophagy also plays a housekeeping role in removing misfolded or aggregated proteins, clearing damaged organelles, such as mitochondria, endoplasmic reticulum and peroxisomes, and eliminating intracellular pathogens [16]. Failure of autophagy induces pleiotropic phenotypes leading to cell death, impaired differentiation, oxidative stress, toxic protein and organelle accumulation and persistence, tissue damage, inflammation, and mortality in mammals. This can lead to tissue dysfunction, inflammatory conditions, and cancer [17]. Previous publications suggested that autophagy can suppress inflammation [17,18], oxidant activity [19][20][21], and apoptosis [22,23] to maintain cellular homeostasis. Inflammation, oxidative stress, and apoptosis are considered to be major components of SAH, and may contribute to the pathophysiology of both CVS and EBI [24]. This suggests that the activation of autophagy may also have a beneficial role in the development of CVS following SAH. In this study, our data demonstrated that there is a significant increase of autophagy proteins in the BA wall 48 h following SAH induction, and that the expression of autophagy was even higher after administration of CysC. In the CysC-treated group, the degree of CVS, measured as the inner perimeter of the BA and the BA wall thickness, was significantly ameliorated in comparison with vehicle-treated SAH rats, and this effect paralleled the intensity of autophagy induced by CysC in the BA wall.
Conclusions
To the best of our knowledge, this is the first study to demonstrate the protective contribution of autophagy to CVS in the experimental SAH model, which suggests that the autophagy pathway may in fact play a significant role in CVS following SAH. Activation of autophagy induced by CysC resulted in attenuation of CVS in SAH models. Further studies evaluating the exact mechanism of the autophagy pathway within CVS are warranted.
Figure 5 (A) In the control group, the glial cells were normal with an intact nuclear membrane. No swelling was found in the endoplasmic reticulum and mitochondria. The electron density was normal in the cytoplasm. (B) In the SAH group, the nuclear membrane was not intact and the cytoplasmic component entered the cell nucleus, pushing the nuclear membrane. (C) In the SAH group, the nuclear membrane dissolved and shrank, with nucleolar margination and chromatin agglutination in the glial cells. The staining was uneven, with more endolysosomes in the cytoplasm. (D) In the vehicle group, the endotheliocytes in the capillary were swollen, with apoptotic neurons and glial cells. (E) In the low-dose CysC group, the nuclear membrane was more intact than in the SAH group, with a little chromatin agglutination at the border of the nuclear membrane. In the cytoplasm, some of the mitochondria were swollen. (F) In the low-dose CysC group, mild demyelination of the myelin sheath was found, and mitochondrial morphology was better than in the vehicle-treated group. (G-H) In the high-dose group, the myelin sheath was better preserved than in the SAH group, surrounded by some stromal cells.
Rationale Schedule-induced drinking (SID) is a behavioural phenomenon characterized by an excessive and repetitive drinking pattern with a distinctive temporal distribution that has been proposed as a robust and replicable animal model of compulsivity. Despite cannabis currently being the most widely consumed illicit drug, with growing interest in its clinical applications, little is known about the effects of ∆-9-tetrahydrocannabinol (THC) on SID. Objectives The effects of chronic and acute THC administration on SID acquisition, maintenance and extinction were studied, as were the effects of such administrations on the distinctive temporal distribution pattern of SID. Methods THC (5 mg/kg i.p.), or the corresponding vehicle, was administered to adult Wistar rats for 14 days in a row. Subsequently, THC effects on SID acquisition were tested during 21 sessions using a 1-h fixed-time 60-s food delivery schedule. Acute effects of THC were also evaluated after SID development. Finally, two extinction sessions were conducted to assess behavioural persistence. Results The results showed that previous chronic THC treatment delayed SID acquisition and altered the distinctive behavioural temporal distribution pattern during sessions. Moreover, acute THC administration after SID development decreased SID performance in animals chronically pre-treated with the drug. No great persistence effects were observed during extinction in animals pre-treated with THC. Conclusions These results suggest that chronic THC affects SID development, confirming that it can disrupt learning, possibly causing alterations in time estimation, and also leads to animals being sensitized when they are re-exposed to the drug after long periods without drug exposure. Supplementary Information The online version contains supplementary material available at 10.1007/s00213-021-05952-2.
Introduction
Cannabis plant derivatives are the most widely used illegal substances with the percentage of users per year estimated to be 3.8% worldwide and 5.2% in Europe (around 180 and 28 million users, respectively, aged 15-64 years), according to the World Drug Report of United Nations Offices on Drugs and Crime (2017). ∆-9-Tetrahydrocannabinol (THC) is the main component responsible for the psychoactive effects of cannabis. The harmful effects of prolonged THC consumption on both brain and behaviour-particularly when such consumption starts at an early developmental stage-are well documented in human and animal studies (for reviews, see Higuera-Matas et al. 2015;Volkow et al. 2016). The psychoactive properties of THC are mostly mediated by the activation of the type 1 cannabinoid receptor, which is expressed by different neuronal subpopulations in the central nervous system but also in peripheral tissues (Devane et al. 1992;Matsuda et al. 1990). THC acts on different aspects of behaviour such as learning, memory, motor activity, nociception and food intake (Calabrese and Rubio-Casillas 2018;Irimia et al. 2015;Iversen 2003;Javadi-Paydar et al. 2018). Another documented effect is the alteration of time perception. In this regard, it has been reported that cannabinoid users consistently overestimate the duration of time intervals (Lieving et al. 2006;Perez-Reyes et al. 1991;Sewell et al. 2013). This alteration in time estimation was also reported in non-human subjects (Conrad et al. 1972;Crystal et al. 2003;Han and Robinson 2001). In addition, it has been reported that some of the THC effects can be sensitized after repeated drug administration, as has the possibility that THC causes cross-sensitization when animals are exposed to other substances, which has led to the suggestion that cannabis can facilitate the use of other drugs of abuse (Cadoni et al. 2001;Panlilio et al. 2013). Schedule-induced drinking (SID) is characterized by the development of repetitive excessive drinking in fooddeprived animals that are exposed to intermittent food-reinforcement schedules with free access to a bottle of water in the experimental chamber. Once SID was characterized, it was included in an extensive behavioural category called adjunctive behaviour (Falk 1971; and it is well documented that SID presents a distinctive temporal pattern where most drinking occurs early in the inter-food interval, just after food delivery (Falk 1971;López-Crespo et al. 2004;Staddon 1977). In this respect, it has been suggested that adjunctive behaviours could play an important role in time estimation (Harper and Bizo 2000;Killeen et al. 1997) operating as a behavioural clock through collateral behaviour chains that precede each other until reinforcer presentation occurs (Lejeune et al. 2006;Richelle and Lejeune 1984;Richelle et al. 2013). In addition, SID could also serve as a cue for organisms to discriminate time (Killeen and Fetterman 1988), and it may be that in this way SID expedites the learning of different time estimation tasks (Ruiz et al. 2016;Segal and Holloway 1963).
The excessiveness and persistence of SID may share common features with compulsive behaviour in humans, and for this reason, it has been proposed as a useful and validated animal model to study several disorders related to the compulsive spectrum (Moreno and Flores 2012;Woods et al. 1993). Alterations in neural substrates involved in the development and execution of habits contribute to compulsive behaviour (Fineberg et al. 2010;Gillan et al. 2016). Recent studies have suggested increased habit formation in rats with high drinking rates (Merchán et al. 2019). Moreover, rats with a preference for response-learning strategies are more susceptible to developing SID and show increased neuronal activation in frontal cortical regions associated with habit formation and compulsion (Gregory et al. 2015). Furthermore, in our laboratory, we have reported that SID is associated with increased dendritic spine density in dorsolateral striatum neurons (Íbias et al. 2015)-a region that appears to be involved in habit formation (Yin et al. 2004;. It is interesting to note that cannabinoids are also involved in the transition from volitional behaviour to habit formation and they induce structural plasticity alterations in regions related to this kind of behaviour (Goodman and Packard 2015).
The studies that consider SID as a model of compulsivity focus on testing the efficacy of drugs normally used to treat the symptoms of different disorders (Moreno and Flores 2012), including obsessive-compulsive disorder (OCD; Platt et al. 2008), mood disorders (Martin et al. 1998;Rosenzweig-Lipson et al. 2007;Woods et al. 1993), anxiety (Snodgrass and Allen 1989), schizophrenia (Hawken and Beninger 2014), and ADHD (Íbias et al. 2016). For example, benzodiazepine agonists (Mittleman et al. 1988), and different types of antipsychotics, decreased SID after their acquisition (Didriksen et al. 1993;Snodgrass and Allen 1989;Todd et al. 1992); antidepressants (Martin et al. 1998;Rosenzweig-Lipson et al. 2007;Woods et al. 1993) produced a dose-dependent reduction (Dwyer et al. 2010); and dopamine agents, such as methylphenidate and d-amphetamine, also reduce SID behaviour in a dose-dependent manner although observations varied according to the rat strain (Íbias et al. 2016).
Furthermore, some psychoactive recreational drugs were tested for their effects on SID. Amphetamines have been used to study their differential effects on operant and adjunctive behaviours (Flores and Pellón 1995;Pellón et al. 1992;Smith and Clark 1975;Wayner et al. 1973b). Scopolamine and high doses of methamphetamine dose-dependently reduced compulsive drinking, but no relevant effects were found using ketamine, AM404 or the cannabinoids cannabidiol and WIN 55,212-2 (Martín-González et al. 2018). It should be noted that one previous study did test the effects of THC on SID, showing that THC enhanced drinking behaviour; however, this study was limited both by the small number of animals evaluated, as by the doses employed, which were too low compared to those associated with recreational or therapeutic use (Wayner et al. 1973a). Moreover, it only assessed acute effects of THC on SID, while it is more common in cannabinoids users to develop complications due to habitual consumption (Leung et al. 2020). Thus, the interest of this work was to study acute and chronic THC effects on SID.
Finally, it is important to consider that despite the illicit status and harmful effects of cannabinoids, there is growing interest in their therapeutic use in several psychiatric disorders, such as post-traumatic stress disorder, anxiety, depression and-of particular relevance to compulsivity-Tourette syndrome (Curtis et al. 2009;Fraser 2009;Moreira et al. 2009;Robson 2001;Tambaro and Bortolato 2012). Additionally, clinical cases with OCD who are also cannabis users report that it improves their symptoms (Müller-Vahl 2013;Schindler et al. 2008). In this regard, it was recently suggested that the administration of nabilone, a synthetic cannabinoid that mimics THC effects, in combination with exposure and response prevention therapy, resulted in a significant decrease in OCD severity (Patel et al. 2021). All of these findings have led to the endocannabinoid system (ECS) being considered a target for novel medications for OCD symptoms (reviewed in Kayser et al. 2019), and the SID procedure provides the opportunity to assess the effects of THC on a behaviour validated as an animal model of compulsivity.
In the present study, the effects of chronic and acute THC administration on SID acquisition and maintenance were evaluated. The effects of such administrations on the distinctive temporal distribution pattern of SID were also studied.
Subjects
A total of 20 naïve male Wistar rats obtained from Charles River Laboratories (Lyon, France) were used in these experiments. On arrival, the rats were 8 weeks old. They were initially housed in groups of four, and once habituated to the animal facility for a week, the animals were singly housed in transparent polycarbonate cages (18 cm × 32 cm × 20.5 cm) with a metal grid roof, and food and water freely available. The room environment was maintained with a 12-h light/12h dark cycle (light from 8:00 to 20:00 h), an ambient temperature of 20 ± 2 °C, and approximately 55% relative humidity. Ten of these rats were randomly assigned to the group treated with THC (THC group), while the other 10 served as vehicle controls (vehicle group).
At the start of the experiment, the animals were 12 weeks old, and their mean (± SEM) weights were 369 ± 17 g and 369 ± 20 g for the vehicle group and THC group, respectively. Animal weights were maintained by controlled feeding to 100% of their free-feeding body weights, with reference to a standard growth curve for the Wistar strain, during the chronic THC (or vehicle) treatment phase, but were then gradually reduced to 85% before starting the SID acquisition procedure. This reduced weight was maintained by controlled dieting throughout the different experimental phases of the study. The rats were weighed daily before the experimental sessions and fed at least 20 min after their completion. All animal care procedures conformed to European Union Council Directive 2010/63 and Spanish Royal Decree 53/2013 for minimizing stress and discomfort in animals, with the corresponding authorization from the Community of Madrid (PROEX 077/18) and the UNED bioethics committee.
Drug preparation
THC was obtained from THC Pharm Gmbh (Frankfurt/ Main, Germany) and was prepared daily in aliquots for a final concentration of 5 mg/ml, in a vehicle of absolute ethanol (Emsure Merck KGaA; Darmstadt, Germany), cremophor (KolliphorEL; Sigma Aldrich Co.; St. Louis, MO, USA) and saline (0.9% sodium chloride) in a ratio of 1:1:18. This ratio is commonly used for the solubilization of cannabinoids (Cha et al. 2006;Rubino et al. 2009). The ethanol concentration in the THC and vehicle solutions was 5%, resulting in ethanol doses of 0.02 g/kg. The drug was stored in an N 2 atmosphere to avoid the oxidation process and was kept refrigerated (− 35 °C) in darkness until just prior to administration.
Apparatus
Eight Letica Li-836 (Letica Instruments, Barcelona, Spain) conditioning chambers (29 cm × 24.5 cm × 35.5 cm) were used, each of which was enclosed inside a soundproof wooden box with a window on the front. The conditioning chamber walls were made of aluminium and polycarbonate. The right wall had an aperture (3.2 cm × 3.9 cm) where a bottle of water was set 7 cm above the grid floor. The contact between the spout of the bottle and the grid floor closed an electric circuit, which allowed licks to be recorded automatically when an animal touched the spout with its tongue. A food dispenser delivered 45-mg food pellets (Bio-Serv, Frenchtown, NJ, USA) into an aperture in the frontal aluminium wall situated 3.7 cm from the floor, between two levers that remained retracted during the experiment. Magazine entries were sensed by a photocell beam at the entrance of the aperture that provided access to the food magazine. The chambers had a 3-W lamp above each lever that remained off, and another 25-W light, installed in the interior of the soundproof boxes, that was kept on throughout the experimental sessions. A sawdust tray was placed under the grid floor. Inside the soundproof boxes, a ventilation system with a fan produced a background noise of 60 dB, which masked exterior noises. Licks and magazine entries were registered with a MED-PC-IV application running under Windows 7.
THC chronic administration
Chronic treatment with either THC (n = 10) or the corresponding vehicle (n = 10) lasted 14 days. During this period, rats received one daily intraperitoneal (i.p.) injection at the same time of day (15:00 h). Doses of 5 mg/ml were estimated for i.p. injections in a volume of 1 ml/kg body weight to give a final THC concentration of 5 mg/kg in the THC group, while an equivalent injection volume of the vehicle was administered to control rats. This dose was chosen because it has been demonstrated that behavioural effects normally appear at doses of 2 or 5 mg/kg (Fadda et al. 2004) and this level of dosing has been used before in chronic administration procedures (Cha et al. 2006). To rule out the presence of active metabolites and withdrawal effects after chronic THC treatment, a clearance period of 7 days was allowed before behavioural testing began.
SID acquisition
Both groups of rats (THC pre-treated group and their vehicle controls, n = 10 in each) were subjected to a fixed-time (FT) schedule in which food pellets were regularly delivered regardless of animal behaviour to develop SID. A total of 21 daily sessions were conducted to study SID acquisition and maintenance. The fixed time for food delivery was 60 s and each session lasted 1 h. A bottle of water was available in the conditioning chambers over the course of the experimental sessions.
Acute THC administration tested on SID
The acute THC administration test was divided into two experimental sessions after SID acquisition and stable baselines of drinking were established. The first session consisted of a control session where an i.p. injection of the vehicle solution was administered to both groups to rule out indirect effects on behavioural testing. In the second session, all animals were i.p. injected with THC (5 mg/kg in a volume of 1 ml/kg body weight) to assess acute effects on SID in rats previously treated chronically with THC and in rats that had no prior exposure to the drug. In both sessions, the rats received one i.p. injection 1 h before starting in the conditioning chambers, following the same procedure of SID as in the acquisition phase.
After a clearance period of 7 days, five sessions with the same fixed-time food schedule were carried out to restore SID behaviour to a stable drinking level.
SID extinction
The extinction test was conducted in two sessions of 1 h each, but the food pellets were removed from the food dispenser before the start of the session. The food dispenser worked according to the same FT-60 s schedule as in previous phases, which produced a clicking sound not accompanied by delivery of pellets. A bottle containing water was also available in the conditioning chambers.
The sequence and timing in each experimental phase is illustrated in Fig. 1.
Statistical analysis
A data normalization was carried out prior to the analysis. The total number of licks was transformed into a percentage for each subject with respect to the average of the last five acquisition sessions (Fig. 2a), both for acquisition and acute drug administration. For extinction, data were normalized to the average of the five sessions conducted prior to extinction (Fig. 2b). During these sessions, the rats reached an asymptotic and stable level of licking and therefore, their mean was used as a reference to calculate the percentage of change. Animals with less than 250 licks during the last 5 sessions did not develop the characteristic temporal distribution of licking normally observed in SID procedures, and were removed from the analysis (this was the case for one subject for each group, leaving final groups of n = 9). Magazine entries were also recorded and analyzed, but no significant differences were found (for more information, see supplementary material).
Analyses were conducted using the IBM SPSS statistical software package (version 24 for Windows). The differences in the percentage of licks in all phases between the vehicle and THC groups were analyzed using a mixed analysis of variance (ANOVA) with the between-subjects factor 'pre-treatment', with two levels (vehicle and THC), and the within-subjects repeated measures factor 'sessions', with one level for each experimental session. Post hoc comparisons were carried out using pairwise comparisons with Bonferroni correction, with statistically significant p values of α < 0.05. Effect size was calculated using partial eta squared (η²p). Sphericity violations were evaluated with the Mauchly sphericity test, and significant deviations from this principle were corrected using the Greenhouse-Geisser (GG) epsilon (ε) to adjust the degrees of freedom, with α = 0.05. The SID temporal distribution was studied through descriptive parameters obtained from the area under the curve using the trapezoidal rule; these are the highest point of the x-axis, which represents the peak time, the highest point of the y-axis, which represents the peak percentage of licks, and the total area of the temporal distribution as a function of time and percentage of licks.
Fig. 2 Total number of licks after SID acquisition and maintenance. Total number of licks (mean ± SEM) for vehicle (white squares, n = 9) or THC pre-treated animals (black squares, n = 9) during the last five acquisition sessions, established as the baseline to transform the data in the acquisition and drug test phases (a), and during the five sessions conducted before extinction, used as the baseline to transform the data of this phase (b).
Fig. 3 Chronic THC administration delayed SID acquisition. Percentage of licks with respect to the average level reached during the last five sessions of the acquisition phase. The percentage of licks is represented over the course of 21 SID sessions (mean ± SEM). White squares represent vehicle pre-treated rats (n = 9); black squares represent THC (5 mg/kg) pre-treated rats (n = 9). ***p < 0.001, **p < 0.01, *p < 0.05 using Bonferroni post-test.
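A minimal NumPy sketch of the baseline normalization and of the temporal-distribution parameters described above, using invented lick counts (the real analysis was run in SPSS, and the bin values below are purely illustrative):

```python
# Sketch of the data transformations described above, using invented lick data.
import numpy as np

# --- Normalization: licks per session as a percentage of the baseline ---
baseline_sessions = np.array([950, 1010, 980, 1005, 990])   # last 5 acquisition sessions (invented)
baseline = baseline_sessions.mean()
session_licks = np.array([120, 340, 610, 820, 975])          # example session totals (invented)
percent_of_baseline = 100 * session_licks / baseline
print(np.round(percent_of_baseline, 1))

# --- Temporal distribution over the 60-s inter-food interval (3-s bins) ---
bin_centres = np.arange(1.5, 60, 3)                           # 20 bins of 3 s
licks_per_bin = np.exp(-0.5 * ((bin_centres - 9) / 6) ** 2)   # invented post-pellet peak
percent_per_bin = 100 * licks_per_bin / licks_per_bin.sum()

peak_time = bin_centres[np.argmax(percent_per_bin)]           # highest point of the x-axis
peak_percent = percent_per_bin.max()                          # highest point of the y-axis
dt = np.diff(bin_centres)                                     # trapezoidal rule for the total area
total_area = np.sum(dt * (percent_per_bin[:-1] + percent_per_bin[1:]) / 2)

print(f"peak time = {peak_time} s, peak = {peak_percent:.1f} %, area = {total_area:.1f}")
```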
Chronic THC administration delayed SID acquisition
Lick acquisition curves for the group pre-treated with THC and its vehicle control are represented in Fig. 3. The data (mean ± SEM) are shown as percentages with respect to the last 5 sessions (Fig. 2a) over the course of 21 sessions. The mixed ANOVA revealed statistically significant effects for both session (F 4,69 = 37.326; GG (ε) = 0.217; p < 0.0001; η²p = 0.7) and pre-treatment (F 1,16 = 4.632; p < 0.05; η²p = 0.224). These results indicated, firstly, that exposure to the fixed-time 60-s food schedule increased licking over the course of the acquisition sessions and, secondly, that the rats pre-treated with THC licked less than their vehicle controls. Moreover, the statistically significant pre-treatment x session interaction effect (F 4,69 = 2.779; p < 0.05; η²p = 0.148) revealed, after post hoc analysis, that SID acquisition was delayed in the rats pre-treated with THC due to an effect of the drug in the sessions after the very initial ones (sessions with post hoc differences shown in Fig. 3).
Figure 4 shows the lick percentage (mean ± SEM) with respect to the mean of the last five acquisition sessions (Fig. 2a), as indicated previously, in animals that were chronically pre-treated with vehicle (Fig. 4a) and in animals that were chronically pre-treated with THC (Fig. 4b). These figures include data for the last acquisition session, the session where all animals were i.p. injected with the vehicle solution, and the test session where all animals were i.p. injected with a single 5-mg/kg dose of THC. The ANOVA revealed a statistically significant session effect (F 1,19 = 8.61; GG (ε) = 0.607; p < 0.01; η²p = 0.843) and a pre-treatment x session interaction effect (F 1,19 = 5.201; p < 0.05; η²p = 0.631). No main effect was found for the between-subjects factor pre-treatment (F 1,16 = 390.15; p = 0.283, ns; η²p = 0.072). Acute i.p. administration of the vehicle did not modify the lick percentage in either group, confirming that there were no indirect vehicle- or injection-related effects. However, as indicated by the statistically significant effects for session and the pre-treatment x session interaction, acute i.p. administration of THC resulted in a reduction in licking in animals pre-treated with THC, as shown by post hoc analysis (p < 0.05), but this was not the case in animals pre-treated with vehicle. The main effect of the between-subjects factor pre-treatment did not show differences between groups during the session in which THC was administered, but increased variability in drinking behaviour was observed, as a result of the acute effect of THC, in animals that had not had prior exposure to the drug. Subsequent descriptive analysis of this variability revealed an increase in the SD, from ± 13.67% obtained in the last acquisition session, or ± 11.25% obtained in the control session with vehicle, to ± 76.17% as a result of the effects of THC in this group.
Figure 5 shows the results obtained in the SID extinction sessions in vehicle and THC-pre-treated rats. Data (mean ± SEM) are represented as the percentage of licks with respect to the mean of the five previous SID sessions (Fig. 2b). The ANOVA revealed a statistically significant session effect (F 2,32 = 52.575; p < 0.0001; η²p = 0.767) observed between the last session and the extinction sessions. No statistically significant effects were found for pre-treatment (F 1,16 = 227.293; p = 0.078, ns; η²p = 0.182) or for the session x pre-treatment interaction (F 2,32 = 0.599; p = 0.599, ns; η²p = 0.555). These results show that the simple intermittent activation of the dispenser without delivering food pellets reduced SID in both groups at a similar rate.
Fig. 4 Acute THC administration (5 mg/kg i.p.) reduced SID only in animals previously pre-treated with THC. The percentage of licks with respect to the previous five acquisition sessions (baseline level) is represented in the group pre-treated with vehicle (a) and the group pre-treated with THC (b). The data represent the mean ± SEM in the last session, the control session with a preceding vehicle i.p. injection, and the test session with a preceding THC i.p. injection (n = 9 in each group). *p < 0.05 using Bonferroni post-test.
Chronic and acute effects of THC on the temporal distribution of SID
Prolonged temporal distribution of SID after prior chronic THC exposure
Figure 6 displays THC pre-treatment effects on licking behaviour, compared with the vehicle controls, in successive 3-s bins during the inter-food interval (60 s), in separate sets of 3 sessions (Fig. 6a to g), to observe the features of the temporal SID distribution over the course of acquisition. The data were calculated as percentages (mean ± SEM) with respect to the total number of licks performed in the inter-food interval for each rat. The parameters studied with the area under the curve were the highest point of the x-axis (peak time), the highest point of the y-axis (peak percentage of licks) and the total area of the temporal distribution as a function of time and percentage of licks (data in Table 1).
During the first 3 sessions, neither of the two groups showed the distinctive post-pellet drinking pattern of SID (Fig. 6a); the licks remained similar throughout the entire inter-food interval. From sessions 4 to 21 (Fig. 6b to g), the temporal distribution of licking progressively acquired the typical SID pattern, showing a maximal response close to the previous food delivery in both groups. In sessions 4-6 (Fig. 6b), 10-12 (Fig. 6c) and 13-15 (Fig. 6d), the licking peak was lower and shifted 3 s to the right in THC-pre-treated animals compared to the vehicle group (Table 1). Although the THC group showed a lower peak percentage of licks, the duration of their licking was longer (from second 18 onwards; Fig. 6d to g). Thus, during the last acquisition sessions, THC-pre-treated animals showed similar total licking levels overall, but with a different response pattern.
No temporal distribution pattern found during extinction
The reduction of licks in the extinction phase occurred rapidly in both groups and there was no sign of the characteristic temporal pattern of SID, which is why no data are presented here.
Figure 7 displays the acute effects of THC administration on the temporal distribution of licking over the course of the inter-food interval (60 s) in animals chronically pre-treated with vehicle (Fig. 7a) and in animals previously pre-treated with THC (Fig. 7b). It shows the data of the last acquisition session, the session where all animals were i.p. treated with the vehicle solution, and the acute THC administration test session using a 5-mg/kg dose of THC in both groups. The data showed that the peak time in animals pre-treated with vehicle occurred 6 s later (Fig. 7a and Table 2) when comparing the acute THC administration test (peak time at 15 s) with the last acquisition session and the vehicle control session (with peak times reached around second 9 in both cases). The peak percentage of licks was similar during the three sessions in this group (Table 2), which is consistent with the results shown in Fig. 3. Figure 7b shows the results of animals pre-treated with THC. The percentage of licks was considerably lower in this group (Fig. 4), but their temporal distribution did not show any appreciable change. Peak times occurred at different time points compared with those in the control sessions (Table 2), but not markedly so, in that the peak of the curve was maintained from seconds 9 to 21 with a similar percentage of licks (Fig. 7b). The data showed that the most affected parameter was the kurtosis of the curve, but not the symmetry. The peak percentage of licks remained at lower levels compared with the control sessions, and the same occurred with the total area (Fig. 7b and Table 2), which was consistent with the reduction of licks shown on the test day with THC (see Fig. 4).
Discussion
The results of the present study showed that chronic THC administration (5 mg/kg for 14 days) delayed SID acquisition and resulted in a flattening and rightward shift of the temporal distribution of licking behaviour over the course of the sessions. Moreover, acute THC administration after SID acquisition resulted in an overall decrease in licking only in animals that were previously chronically treated with THC, pointing to a sensitization effect. However, no significant THC-related effects were observed during SID extinction. The number of magazine entries did not show significant differences among groups, either during SID acquisition after chronic pre-treatment with THC or as an acute effect of the drug before a SID session (see supplementary material), suggesting that THC effects on SID were not driven by general locomotor suppression or lack of motivation. Similar results with regard to magazine entries were also reported when the acute effects of WIN 55,212-2 were assessed (Martín-González et al. 2018). It is also important to note that motor suppression usually occurs at higher doses of cannabinoids (de Fonseca et al. 1998) than the 5 mg/kg used in the present study.
THC and vehicle chronically pre-treated animals developed SID over the course of 21 sessions of an FT 60-s food schedule (Fig. 3). Nevertheless, in the case of the THC-pre-treated animals, SID acquisition was delayed and developed more slowly, requiring 14 sessions to reach the same level as the vehicle control group, which acquired SID earlier (from session 4 onwards). The reason for this delay could be that THC pre-treatment alters learning mechanisms involved in the performance of SID as well as in the experimental tasks that follow. The acquisition of a reinforcement of low-rate responding task was delayed in rats chronically pre-treated with THC (Stiglick and Kalant 1982; 1983). Moreover, as occurred in our study, control and THC-pre-treated animals reached similar levels at the end of the acquisition phase. In the aforementioned studies, rats were treated firstly in adolescence, and when the procedure was replicated in adult rats, no effects were found, suggesting the existence of vulnerable periods during which THC impairs learning (Scallet 1991; Stiglick and Kalant 1985). A worsening of performance was also observed in the object recognition task and in progressive ratio reinforcement schedules in adolescent, but not adult, rats pre-treated with the synthetic cannabinoid agonist CP 55,940 (O'Shea et al. 2004; Schneider and Koch 2003). The relevance of THC administration during sensitive developmental periods has been documented in studies where rats exposed to cannabinoids before birth, in perinatal periods, or during adolescence showed later alterations in learning and memory (Campolongo et al. 2007; Rubino et al. 2009). However, our results showed that the effects of a previous chronic THC administration can also alter learning in adult subjects. Even so, it would be necessary to determine whether there are differential effects on the acquisition of SID when THC is administered during vulnerable periods of development.
Fig. 7 Acute effects of THC on the temporal distribution of licking in animals pre-treated with THC or vehicle. The data show the percentage of licks with respect to the total number of licks performed in the inter-food interval (mean ± SEM) in animals pre-treated with THC (a) or vehicle (b) throughout successive 3-s bins of the 60-s inter-food interval during the last acquisition session, the control session with a preceding vehicle i.p. injection, and the test session with a preceding THC i.p. injection (n = 9 in each group)
Once SID was established (from sessions 10 to 14 until the last session), the temporal distribution of licks in animals that were chronically pre-treated with THC showed lower peaks of lick percentage than those of the vehicle control group (see Fig. 6 and Table 1). If we compare the peaks observed in these sessions with the results obtained across all sessions (Fig. 3), the differences were smaller at the end of the procedure, but regarding the temporal distribution, the peak lick percentage remained lower in the THC-pre-treated group, even in these final sessions. THC-pre-treated rats reached lower lick peaks, but the animals in this group kept drinking for longer during the FT interval. This explains why there were no differences in the overall number of licks between groups during the last sessions and why the areas under the curve were comparable. Analyzing the total set of licks, the rats of this group drank similar amounts of water but with a different temporal pattern. The time point at which the peak occurred was also shifted to the right, or delayed, in most of the sessions (Fig. 6 and Table 1), but this effect lessened as sessions went on and was particularly diminished in the last sessions. On the other hand, acute THC administration after SID acquisition delayed the appearance of the peak in the SID inter-food interval (Fig. 7 and Table 2). The animals that had not previously received the drug showed a 6-s delay in their peak time compared to control sessions, but with similar lick percentage levels at the highest point. However, the animals that had previously been pre-treated with THC showed a lower peak percentage of licks, while the difference in when this peak occurred was less pronounced. Their temporal pattern was affected in terms of the height of the curve, which remained at a similar level from seconds 12 to 21; this represents a longer peak duration, but with lower kurtosis compared to the control group. This effect of licking for longer during the FT interval was already seen in the acquisition sessions. These results showed the way in which THC disrupts the temporal distribution pattern of SID, that is, decreasing and postponing the distinctive 'burst' of licks in this procedure (Falk 1971). This effect has already been demonstrated in humans, where THC induces overestimation of time (Lieving et al. 2006; Perez-Reyes et al. 1991; Sewell et al. 2013), mainly at short intervals (McDonald et al. 2003), and it has also been shown in timing procedures in animals. In this regard, Han and Robinson (2001) studied the acute effect of the cannabinoid agonists THC and WIN 55,212-2 on the peak procedure in rats; however, in contrast with our findings, they reported a reduction in the peak time. In another study, Crystal et al. (2003) also explored the acute effect of the cannabinoid agonist WIN 55,212-2 in a bisection timing task, resulting in a dose-related decrease in sensitivity to time. Therefore, it seems that THC induces alterations in time estimation, but these effects depend on the period evaluated, the pattern of drug administration (acute vs chronic), its residual effects and the nature of the task.
Both the SID phenomenon and maladaptive habits with excessive behaviour features are models of disorders related to the compulsive spectrum (Everitt et al. 2001; Gillan and Robbins 2014; Gillan et al. 2016; Moreno and Flores 2012). Cannabinoids influence habit or stimulus-response memory mediated by the dorsal striatum (Goodman and Packard 2015). One study, which employed different tasks involving habit and goal-directed learning processes, showed that reinforcer devaluation reduced responding more slowly in animals treated with THC (Nazzaro et al. 2012). Likewise, recent studies have also characterized habit formation in rats with the SID procedure (Gregory et al. 2015; Merchán et al. 2019). Given the behavioural repetitiveness produced by the SID procedure and its relation to habit-like behaviour, THC might be expected to facilitate the development of SID and to result in behavioural persistence during extinction. However, we did not find an incremental influence of the preceding chronic administration of THC on SID acquisition; quite the opposite, a clear impairment of rapid learning was observed. Nevertheless, a persistence pattern in the temporal distribution of SID was seen only in animals pre-treated with THC, which kept licking for longer during the inter-food interval. It may be that this reflects the habitual aspect of behaviour seen in persistent action. Furthermore, even though the percentage of licks during the first extinction session (Fig. 5) was higher in animals pre-treated with THC, the differences between groups were not sufficient to reach statistical significance, although they did come close. The slightly higher resistance to extinction of SID in THC-pre-treated animals might again reflect a habit-like behavioural characteristic.
Once SID had developed, we also evaluated the possible effects that acute THC administration could cause in subjects previously treated with this drug and in subjects that had no prior THC exposure. Our results showed that acute administration of a 5-mg/kg dose of THC decreased SID performance only in animals chronically pre-treated with THC, while in animals that had not previously been treated with the drug, SID was not affected. These results differ from those reported by Wayner et al. (1973a, b), who observed an increase in licks, although it should be noted that the THC doses they employed were lower (1-3 mg/kg). This acute effect, found only in the performance of pre-treated animals, could be a sensitization-like effect, which suggests that prior experience with THC creates a vulnerability to the effects of the drug after a certain time without contact with it. Behavioural sensitization to THC effects was previously reported in animal studies (Cadoni et al. 2001), in which a single administration following a preceding prolonged exposure resulted in elevated locomotor activity, sniffing, gnawing and motor stereotypies. Furthermore, cross-sensitization effects were also reported with morphine (Cadoni et al. 2001) and nicotine (Panlilio et al. 2013), suggesting that THC can facilitate the use of other drugs of abuse. Our work provides support for sensitization effects derived from the consumption of cannabinoids and the propensity for their effects to be potentiated later.
THC is known to activate dopamine transmission through its action on type 1 cannabinoid receptors (Laviolette and Grace 2006), which are co-localized with dopamine D2 receptors in GABAergic medium spiny neuron terminals, and cannabinoid agonists increased the interactions between these two types of receptors (Bagher et al. 2017). However, differential effects of THC on the dopamine system have been reported to depend on whether administration is acute or chronic. Acute THC administration increased dopamine release and neuron activity, whereas chronic THC administration altered dopamine D2/3 receptor signalling in the nucleus accumbens and caudate/putamen (Ginovart et al. 2012) and caused increased sensitivity to the presynaptic actions of dopamine D2 receptor agonists (Moreno et al. 2003). Several studies have evidenced the involvement of the dopaminergic system in SID. Both dopaminergic neuron lesions and the administration of dopamine antagonists (haloperidol, clozapine and pimozide) reduced already acquired SID and affected SID development (Didriksen et al. 1993; Mittleman et al. 1994; Mittleman and Valenstein 1986; Snodgrass and Allen 1989), whereas the dopamine D2/3 receptor agonist quinpirole increased this non-regulatory drinking behaviour in rats (Schepisi et al. 2014). It has also been demonstrated that high-drinker rats showed higher dopamine D2 receptor binding than low drinkers in the SID procedure (Pellón et al. 2011). All these data together suggest that alterations in the dopamine system may be involved in the delayed SID acquisition observed after chronic THC administration.
Nonetheless, the reduction in licking observed after acute THC administration may also point to a potential therapeutic use for compulsive behaviour. In this regard, marble burying in rodents, a behaviour which is also considered to reflect symptoms of OCD (Londei et al. 1998), decreased after administration of different cannabinoids such as WIN 55,212-2 or cannabidiol (Gomes et al. 2011; Nardo et al. 2014). However, in order to clarify the therapeutic potential of THC in the SID procedure, future research is needed to ascertain the involvement of several relevant issues such as dose-response, sex and genetic background. Given the differences in dose-response effects of cannabinoids on behaviour (reviewed in Mechoulam and Parker 2013), it would be appropriate to test different chronic doses over time before SID acquisition. Moreover, several studies have demonstrated that female rats are more sensitive to cannabinoids than males (Fattore et al. 2007; 2010), a finding that highlights the importance of identifying sex-specific factors to guide the development of treatments more accurately. Finally, there is ample evidence that genetic background plays an important role in individual vulnerability to psychiatric disorders (Adriani et al. 2003; Cadoni 2016; Driscoll 1982). Several different rat strains have shown deficits in inhibitory control responses, impulsivity or vulnerability to drug use related to the level of drinking on SID (Flores et al. 2014; Íbias and Pellón 2014). Moreover, Fischer 344 rats exhibit differences in the endocannabinoid system compared to Lewis rats (Brand et al. 2012; Coria et al. 2014; Rivera et al. 2013), which are considered an animal model for the study of genetic vulnerability to drug addiction (Cadoni 2016; Kosten and Ambrosio 2002). These two points lead us to hypothesize that genetic background could be a relevant variable in the effects of THC on SID.
In summary, the results of the experiment show that prior chronic treatment with THC delays the acquisition of adjunctive behaviour, confirming that cannabinoid consumption can disrupt learning, possibly by causing alterations in time estimation. In addition, THC effects can be amplified by a later acute consumption, reflecting a sensitization-like effect.
Acknowledgements
The authors wish to thank Antonio Rey for the technical assistance.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Financial support for the research, authorship and publication of this article was received through Spanish Government grant PSI2016-80082-P (Ministerio de Economía, Industria y Competitividad, Secretaría de Estado de Investigación, Desarrollo e Innovación).
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The High-speed Railway (HSR) network in China is the largest in the world, competing intensively with airlines for inter-city travel. Panel data from 2007 to 2013 for 138 routes with HSR-air competition were used to identify the ex-post impacts of the entry of HSR services, the duration of operating HSR services since entry, and the specific impacts of HSR transportation variables such as travel time, frequency, and ticket fares on air passenger flows in China. The findings show that the entry of new HSR services in general leads to a 27% reduction in air travel demand. After two years of operating HSR services, however, the negative impact of HSR services on air passenger flows tends to further increase. The variations of the frequency in the temporal dimension and the travel time in the spatial dimension significantly affect air passenger flows. Neither in the temporal nor spatial dimensions are HSR fares strongly related to air passenger flows in China, due to the government regulation of HSR ticket prices during the period of analysis. The impacts of different transportation variables found in this paper are valuable to consider by operational HSR companies in terms of scheduling and planning of new routes to increase their competitiveness relative to airlines.
Introduction
Efficiently operated High-speed railways (HSRs) offer advantages in punctual departure/arrival time, comfortable travel experience, and less CO2 emission in comparison to air travel (Givoni, 2007;Hall, 2009). The first HSR corridor was inaugurated in Japan in 1964. Then the first European HSR, TGV Sud-Est, between Paris and Lyon was opened in 1981 in France. Thereafter, many HSR lines have been constructed in other Western European countries, including ICE in Germany and AGV in Spain (Givoni, 2006). Although inaugurated in a later stage, Chinese HSR networks have expanded at an exponential growth rate because of a substantial financial support from the central government. Especially after 2008, a 4 trillion RMB stimulus package to mitigate the impact of the global financial crisis has more than doubled the investment capital for HSR construction (Amos, Bullock, & Sondhi, 2010). From the end of 2003, when the first HSR between Shenyang and Qinhuangdao was opened, until 2015, the Chinese HSR networks increased to 19,000 km, accounting for more than 60% of global HSR networks. Chinese HSR networks were constructed in only 12 years, and on a scale larger than in the rest of the world. Regarding the fast development of HSR networks in China, a large volume of literature has reported the impacts of HSR services on local and regional economy (Chen & Haynes, 2017;Ke, Chen, Hong, & Hsiao, 2017), urban specialization pattern (Lin, 2016) and urban service industry agglomeration (Shuai, Tian, & Yang, 2017). However, the focus on the interaction between HSR and air travel is still limited in the context of China.
Different from the European HSR networks, which were developed in a relative mature aviation market with modest growth rates, the development of Chinese HSR networks parallels a fast-growing and partially deregulated aviation market (Wang et al., 2016). After two decades of air deregulation in China, China's air transportation has experienced rapid growth, especially from the start of the economic reform in 1980s due to the rapid increase in air travel demand (Wang et al., 2016). Between 1997 and 2015, domestic air passenger traffic in China grew from 5.6 million passengers to 436 million. The annual airline growth rate was almost 10%, particularly after 2000. However, the annual growth rate of air travel is prone to be affected by unexpected social events, such as the 2003 outbreak of Severe Acute Respiratory Syndrome (SARS) and the 2008 financial crisis. In addition, after HSR operations started, first the D train services with an average operational speed 200 km/h in 2007 and then G train services with an average operational speed of 300 km/h in 2009, the airline's annual growth began to drop progressively to reach stable growth after 2012 when there were remaining regulation, limited investment, and poor overall national policy on the aviation industry. Fig. 1 shows that China's aviation market has been in a stage of fast growth in parallel to the expansion of HSR networks. While the volume of both air and HSR traffic increased between 2010 and 2015, HSR did so at a higher growth rate. This reflects the potential competition that HSR services offer for passenger transportation in China. Apart from unexpected socio-economic events, the operation of HSR services has absorbed the demand growth for airline travel to a certain extent. In addition, HSR network expansion triggered loosening of regulations on airlines by the Civil Aviation Administration of China (CAAC), such as partially flexibile air fares and more operator licenses for private and low cost airline companies (Zhang, Yang, Wang, & Zhang, 2014).
Ex-ante studies of HSR and aviation demand have been conducted intensively, primarily predicting the intermodal market share and focusing on a handful of major corridors where HSR development has occurred (Gonzalez-Savignat, 2004;Mao, 2010;Park & Ha, 2006;Román, Espino, & Martín, 2007). In contrast, very often a lot of ex-post studies such as reports, white papers conducted or commissioned by transportation companies are unavailable to the public due to due to the confidentiality of the operational data from the transportation companies (Dobruszkes, Dehon, & Givoni, 2014;Li & Loo, 2016). Expost research is further relatively limited in academia, especially in China with a strong governmental control on the railway and aviation industry and the application of relevant HSR geo-economic and transportation variables is rather crude in the data and model application.
This paper aims to fill this gap by conducting an ex-post study on the impact of HSR on air travel demand in the context of China using balanced and unbalanced panel data analysis. Firstly, using a balanced panel data set collected for 270 cross-sections over seven years, we examine the relationship between HSR services and air passenger demand using variance component models. The analysis takes into account city pairs with and without HSR-air competition over the period 2007-2013 to understand the impact of geo-economic HSR variables (such as HSR entry and duration of operating HSR services) on air travel demand. Secondly, we employ within-between models Nieuwenhuis, Hooimeijer, van Ham, & Meeus, 2016), using an unbalanced dataset containing only 138 city pairs with HSR-air competition from 2007 to 2013, to specify how HSR transportation variables are specifically interacted with the air travel demand in the two geographic (temporal and spatial) dimensions. We do so because the transportation variables, such as frequency, travel time, and fare, vary both in terms of the duration of operation of the HSR services (temporal dimension) and between different HSR routes (spatial dimension). Previous research has focused mainly on one of the dimensions.
In the next section, we present a literature review on the competition between HSR and air transportation. Following this is the research design, which discusses the variables used and data collection, as well as methodologies for the panel data analysis. The subsequent sections present the empirical results of the balanced and unbalanced panel data analysis. Finally, we discuss our main findings and their policy implications.
Literature review
Although there is cooperation between airlines and HSRs by means of feeding passengers from HSR spokes to hub airports (if booking systems between airlines and HSR companies have been coordinated) (Givoni & Banister, 2006), HSR has substantial competitive effects on air transportation, especially in point-to-point city-pair markets. Research has confirmed that after the opening of new HSR services, HSR has substitution effects on air travel by diverting existing air passenger flows to the HSR. The first study is from Janić (1993), who claimed that HSR transportation in Europe competes with air transportation over a relatively large range of distances, between 400 and 2000 km. A broad range of ex-ante academic literature then emerged, focusing on the impacts of HSR on predicted demand for airline travel in different contexts. In France, Haynes (1997) found that after a few years of HSR operation, air traffic dropped by 50% between Paris and Lyon. In Spain, González-Savignat (2004), based on a stated preference experimental design, predicted that HSR would reduce the market share of airlines by about 50% between Madrid and Barcelona. In Korea, Park and Ha (2006), relying on stated preference model calibration, examined the effects of HSR on domestic air transportation demand and estimated a demand reduction of between 34% and 75% between Seoul and Daegu. In Germany, to describe consumer choice behaviour between HSR and airlines, Ivaldi and Vibes (2005) used a theoretical simulation model to analyze intermodal competition on the Cologne-Berlin route, finding that the entry of HSR reduces fares and airline flight frequency.
With the fast development of HSR, especially in China and Europe, a few ex-post studies of HSR impacts on air travel have been carried out. The advantage of ex-post research is that it reflects the actual effect of intermodal competition, rather than the relatively poor performance of the predictions embedded in ex-ante research (Givoni & Dobruszkes, 2013). Dobruszkes (2011) and Fu, Zhang, and Lei (2012) used aggregated data and observed impacts of HSR-air competition in Europe and China, but did not implement econometric analysis on a large set of routes. That type of observational ex-post research raises the issues of unclear causal relationships between HSR-relevant factors and air travel and a lack of representativeness. Recently, studies have used econometric analysis to overcome this deficiency by focusing on the cases of Europe and China (Albalate, Bel, & Fageda, 2015; Chen, 2017; Fu, Lei, Wang, & Yan, 2015). However, the indicators for HSR in these studies are dummy variables, which are unable to accurately reflect the influence of HSR-related geographic transportation factors such as travel time, frequency, and ticket fare. Other researchers have used transportation variables of HSR such as travel time, the frequency of trains (Clewlow, Sussman, & Balakrishnan, 2014; Dobruszkes et al., 2014; Zhang, Yang, & Wang, 2017) and the length of railway networks (Li & Loo, 2016) to specify the influence of HSR on airlines using either time series (temporal dimension) or cross-section (spatial dimension) analysis.
Our review of the literature shows that studies regarding the competition between HSR and airlines are largely based on a European context and interpret the transportation variables of HSR in either the "temporal" or the "spatial" dimension only. This means that the variations in transportation variables in the other geographic dimension are not taken into account simultaneously (Table 1). Hence, our first hypothesis is that the influence of the transportation variables varying in the temporal dimension differs from that of those varying in the spatial dimension. Our panel data set allows for including both dimensions in the analysis. Second, we hypothesize that the impact of the entry of HSR services on the growth of air travel demand may not be as significant as in Europe until HSR services have been operating for a certain number of years. The fast economic growth in Chinese cities and the increasing purchasing power of urban citizens have resulted in a fast-growing potential market for both air travel and HSR travel in China. Although some air passengers divert to HSR, there still exists a high demand for air travel even after the opening of new HSR services.
Overview of variables and data
The analysis is at the city-pair level, where the competition between HSR and air for intercity travel in China takes place. Data are combined for each city for those cities with multiple airports and/or HSR stations. China has two types of high-speed trains (HST) for intercity travel that compete with airlines: the G train, with an average operational speed of 300 km/h, and the D train, with an average operational speed of 200 km/h. Table 2 lists the variables and gives the descriptive results of the dependent and independent variables.
In this research, our main focus is the annual number of air travel passengers. This variable includes annual origin-destination passengers traveling between a pair of cities, which reflects the demand side of the aviation market. It should be noted that our air data sets from the CAAC are actual O/D passenger flows between city pairs, which omit any connections and layovers in each passenger's journey. For the explanatory variables, two types of variables are entered in the model: geo-economic and transportation. With regard to geo-economic variables, the primary explanatory variables of most econometric demand models of air transportation are typically socio-demographic variables, such as gross domestic product (GDP) per capita and population size (Clewlow et al., 2014). We tested the summed and multiplied GDP per capita and population for the two ends of each city pair, respectively. The multiplied formats of those socio-economic variables yield the greatest explanatory power when incorporated. Furthermore, HSR entry has different short-term and longer-term impacts on the demand for air travel (Givoni & Dobruszkes, 2013). To isolate these temporal changes in the relationship between HSR and airlines in China, in addition to the dummy variable for the entry of HSR services, the duration of operating HSR services should be considered as a core determinant to reflect how airline passenger flows generally change from before to after the presence of HSR services. With regard to the fast development of Chinese HSR networks in parallel with the fast growth of the aviation market in China, the competitive effect of HSR on air travel may not be high immediately after HSR entry in city-pair markets that still have a high demand for air travel. In other words, the interaction between the duration of operating HSR services and air passenger travel may be nonlinear. Hence, we included a quadratic term of duration in the model. Furthermore, events such as the global financial crisis of 2008 influence air travel (Wang, Bonilla, & Banister, 2016), which means that year dummy variables need to be considered to control for unexpected influences on air travel demand.
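A minimal pandas sketch of how such regressors could be assembled is given below; the example frame, column names, and values are hypothetical and only illustrate the multiplied socio-economic terms, the entry dummy, the duration terms, and the year dummies.

```python
import numpy as np
import pandas as pd

# Hypothetical route-year panel; column names and values are illustrative only.
df = pd.DataFrame({
    "route": ["BJ-SH", "BJ-SH", "CD-GZ", "CD-GZ"],
    "year": [2012, 2013, 2012, 2013],
    "gdp_pc_orig": [85e3, 92e3, 55e3, 60e3], "gdp_pc_dest": [80e3, 88e3, 70e3, 76e3],
    "pop_orig": [20.7e6, 21.1e6, 14.0e6, 14.2e6], "pop_dest": [23.8e6, 24.2e6, 12.7e6, 13.0e6],
    "hsr_entry_year": [2011, 2011, np.nan, np.nan],
})

# Multiplied (rather than summed) city-pair socio-economic variables.
df["gdp_pc_product"] = df["gdp_pc_orig"] * df["gdp_pc_dest"]
df["pop_product"] = df["pop_orig"] * df["pop_dest"]

# Entry dummy, duration of operating HSR services, and a quadratic duration term
# (one possible encoding of the nonlinearity described in the text).
df["hsr_entry"] = (df["year"] >= df["hsr_entry_year"]).astype(int)
df["duration"] = np.where(df["hsr_entry"] == 1, df["year"] - df["hsr_entry_year"], 0)
df["duration_sq"] = df["duration"] ** 2

# Year dummies to absorb shocks such as the 2008 financial crisis.
df = pd.concat([df, pd.get_dummies(df["year"], prefix="yr", drop_first=True)], axis=1)
```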
For transportation-related variables, travel time is the most important determinant of the market share of HSR versus air transportation, according to the current body of knowledge (Behrens & Pels, 2012; Givoni & Dobruszkes, 2013; González-Savignat, 2004). Furthermore, the ticket prices for HSR and air transportation (González-Savignat, 2004) and the frequency of HST services (Dobruszkes et al., 2014; González-Savignat, 2004; Pels, Nijkamp, & Rietveld, 2000; Raturi, Srinivasan, Narulkar, Chandrashekharaiah, & Gupta, 2013) are crucial variables as well. Also, the spatial layout of cities and the locations of HSR stations and airports influence the access time to get to/from the HSR station or airport (Adler, Pels, & Nash, 2010; Behrens & Pels, 2012). Although the spatial scale of China is largely different from that of other HSR countries, we can still expect the transportation-related variables mentioned above to be applicable in the context of China (Zhang et al., 2017). Moreover, in China, most HSR stations are located in suburbs. Therefore, the access and egress time to and from stations is longer compared to European cases. To some extent, the total travel time from origin to destination, rather than the line-haul time, actually decides the modal choice for intercity travel. Therefore, whether the location of terminals (stations and airports) influences the intermodal competition between HSR and airlines needs to be tested. The HSR transportation variables have been collected for each city pair, for both G and D trains.
It is important to reflect the elastic relationship between HSR and airlines rather than the absolute value of air passenger growth. We therefore apply a natural logarithm transformation to the dependent and independent variables so that the estimated coefficients can be interpreted as elasticities. It should be noted that, with regard to the causality between HSR travel and air travel, by controlling for other major influencing factors of air passenger numbers, the analysis in this paper focuses on association rather than direct causality (Li & Loo, 2016).
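As a brief reminder of why the log transformation yields elasticities (standard log-log algebra, not specific to this paper's estimates):

```latex
% Log-log specification: the slope coefficient is an elasticity
\ln Y_{it} = \beta \,\ln X_{it} + \dots + \varphi_{it}
\quad\Rightarrow\quad
\beta = \frac{\partial \ln Y_{it}}{\partial \ln X_{it}}
      = \frac{\partial Y_{it}/Y_{it}}{\partial X_{it}/X_{it}},
```

so β is read as the approximate percentage change in air passengers associated with a 1% change in the regressor.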
Methodology
Breusch-Pagan tests indicate that ordinary least squares (OLS) estimation is inefficient due to heteroskedasticity; therefore, we have used three variance components models for the analysis. The first one is used for the balanced panel data analysis, the second for the unbalanced panel data analysis, and the third for subgroups of the unbalanced panel data defined by flight distance.
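For illustration, a Breusch-Pagan check on a pooled OLS fit can be run as follows (synthetic, deliberately heteroskedastic data; statsmodels is assumed and is not necessarily the software used by the authors):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical data standing in for the route-year panel (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g., ln GDP, ln population, ln fare
y = X @ np.array([0.8, 0.5, -0.3]) + rng.normal(scale=1 + np.abs(X[:, 0]), size=500)

X_const = sm.add_constant(X)
ols_fit = sm.OLS(y, X_const).fit()

# Breusch-Pagan: regress squared residuals on the regressors; a small p-value
# indicates heteroskedasticity, motivating variance-components models and
# clustered standard errors instead of plain OLS.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(ols_fit.resid, X_const)
print(f"LM = {lm_stat:.2f}, p = {lm_pval:.4f}")
```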
The first model uses a balanced panel data set taking into account 138 city pairs still with HSR-air intermodal competition after the entry of HSR services and 132 city pairs without the entry of HSR services between 2007 and 2013. Our initial set of independent variables is a mix of geo-economic and general HSR variables as well as air transportation variables introduced in model 1 in Table 1. The aim of this balanced model is to isolate the general impact of the entry of HSR services and the influence of the duration of operating HSR services after HSR entry on overall air passengers in China, without taking into account specific transportation variables of HSR such as travel time, frequency and ticket fare. The first balanced panel data model is formulated as follows:
Y_it = β X_it + δ Z_i + U_i + φ_it,
where i and t represent entity i and year t, respectively. Y_it is the dependent variable, being the number of air passengers on city pair i in year t. X_it and β denote the vector of independent time-variant variables and the corresponding coefficients. Z_i and δ denote the vector of independent time-invariant variables and the corresponding coefficients. U_i is the specific intercept for each entity and represents all unobservable time-invariant characteristics of entity i that influence the dependent variable. φ_it is the random error term. The coefficients for the time-invariant variables are omitted in the fixed effects (FE) estimator.
With the second model, we analyze the impacts of variations in specific HSR transportation variables on air passengers in the temporal and spatial dimensions, rather than estimating the impact of the entry of HSR services on overall air passengers. In this analysis, we replace the general HSR variables with specific HSR transportation variables and apply the within-between (BW) method (Bell, Johnston, & Jones, 2015; Nieuwenhuis et al., 2016) to separate the variations of the independent variables into two levels. It can be written as
Y_it = β_1 (X_it − X̄_i) + β_2 Z_i + β_3 X̄_i + U_i + φ_it,
where β_1 is the within effect and β_3 is the between effect of a series of time-variant variables X_it, and X̄_i denotes the city-pair mean of X_it over time. Rather than assuming heterogeneity away as in the FE estimator, the BW method estimates how within-city-pair and between-city-pair variations in the independent variables affect the dependent variable. Any time-invariant characteristics (both observed and unobserved) are automatically controlled for, as the sum of their within-pair changes is always zero. Therefore, the estimates for the time-varying variables in BW models are identical to the estimates in FE models. Additionally, a BW model includes random effects and allows for the inclusion of time-invariant variables Z_i with coefficients β_2, providing additional information on differences between city pairs that could not be estimated using a fixed effects model (for a more detailed description of the method, see Bell, Johnston, & Jones, 2015).
The third model is an extension of the BW model that distinguishes between various distance categories. Fig. 3 shows that for travel distances of less than 600 km, the number of HSR passengers is much larger than that of air passengers and the market share is dominated by HSR travel, whereas for distances longer than 1100 km, the number of air passengers is much larger than that of HSR passengers and the market share is dominated by air travel. Therefore, we delve into how the elasticities of the temporal and spatial transportation variables differ across the three competitive distance categories defined by these two thresholds of 600 and 1100 km (dominated by HSR, shared between HSR and air, and dominated by airlines) to shed light on operational and planning strategies for companies.
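The following sketch illustrates, on synthetic data, one possible implementation of the two estimators described above: a demeaned (fixed-effects) regression with city-pair-clustered standard errors, and a within-between random-intercept model via statsmodels. Variable names and the data-generating process are assumptions, not the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic route-year panel standing in for the real data (column names assumed).
rng = np.random.default_rng(1)
pairs, years = [f"pair{i}" for i in range(60)], list(range(2007, 2014))
panel = pd.DataFrame([(p, t) for p in pairs for t in years], columns=["pair", "year"])
n = len(panel)
panel["ln_freq"] = rng.normal(2.0, 0.5, n)
panel["ln_time"] = rng.normal(1.5, 0.3, n)
panel["ln_fare"] = rng.normal(6.0, 0.2, n)
pair_effect = panel["pair"].map(dict(zip(pairs, rng.normal(0, 1, len(pairs)))))
panel["ln_pax"] = (13 - 0.5 * panel["ln_freq"] + 0.8 * panel["ln_time"]
                   + pair_effect + rng.normal(0, 0.2, n))

xs = ["ln_freq", "ln_time", "ln_fare"]

# (1) Fixed-effects (within) estimator: demean all variables by city pair, then
# run OLS with standard errors clustered by city pair.
dem = panel[["ln_pax"] + xs] - panel.groupby("pair")[["ln_pax"] + xs].transform("mean")
fe_fit = smf.ols("ln_pax ~ ln_freq + ln_time + ln_fare", data=dem).fit(
    cov_type="cluster", cov_kwds={"groups": panel["pair"]})

# (2) Within-between (BW) model: split each regressor into its city-pair mean
# (between effect) and the deviation from that mean (within effect), with a
# random intercept per city pair.
btw = panel.groupby("pair")[xs].transform("mean").add_suffix("_btw")
win = (panel[xs] - panel.groupby("pair")[xs].transform("mean")).add_suffix("_win")
bw_data = pd.concat([panel[["pair", "ln_pax"]], btw, win], axis=1)
bw_fit = smf.mixedlm("ln_pax ~ ln_freq_win + ln_time_win + ln_fare_win + "
                     "ln_freq_btw + ln_time_btw + ln_fare_btw",
                     data=bw_data, groups=bw_data["pair"]).fit()

print(fe_fit.params)   # the within (FE) estimates should match the *_win coefficients
print(bw_fit.params)
```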
General effects of HSR on air passengers
Table 3 shows the results for the balanced panel model (standard errors in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.01). Based on the Hausman test, we reject the null hypothesis at the p = 0.05 significance level that the coefficients estimated by the efficient random effects model are the same as the coefficients of the FE model; thus, the FE model is more appropriate here for the analysis. According to the FE model, the presence of HSR services has a negative relationship with air passenger flows. The entry of HSR services leads to a 27% (100 * (exp(−0.3284) − 1)) decrease in air travel passengers. Furthermore, the duration of operating HSR services since the moment of HSR entry reflects a lagged effect of HSR services on airline passenger flows. The negative coefficients of the duration variables are significant except for the duration variable for one year's operation of HSR services, which means that after two years' operation of HSR services, airline passenger flows start to decrease more due to the substitution effect of HSR. This is because, at the initial stage of HSR development, HSR networks had not yet been formed and travelers' awareness of the HSR alternative was not yet high, so the substitution between HSR and air was still limited. However, with the gradual and fast extension of HSR networks in China, their substitution effect on air travel demand has been increasing. Moreover, the long-term impact of operating HSR services on air travel is not linear; in particular, after six years' operation of HSR services there is a reversed growth trend compared to the previous years. This indicates that in the long run, the air companies try to adapt their operational strategy to weaken the competitive pressure from the HSR.
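As a quick arithmetic check of the percentage effect quoted above (the standard transformation of a dummy coefficient in a log-linear model; nothing here is paper-specific beyond the quoted coefficient):

```python
import math

beta_entry = -0.3284                                  # estimated HSR-entry coefficient
pct_change = 100 * (math.exp(beta_entry) - 1)
print(f"{pct_change:.1f}% change in air passengers")  # about -28%, i.e., the ~27% reduction reported
```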
As to the control variables, although the coefficient of population is not statistically significant, the sign of this coefficient is as expected. In addition, the coefficient associated with the GDP variable is negative, though not statistically significant. This finding is in line with some other European research, such as Dobruszkes et al. (2014), in which the coefficient of GDP per capita is negative and not significant. Moreover, an increase in air fares is negatively related to the number of air passenger flows, as expected. This reasonably suggests that in the Chinese aviation market from 2007 to 2013, if airline companies increased air fares, people were less willing to travel. Furthermore, as expected, the year dummy variables for 2008 and 2009 have a negative relationship with the number of air passenger flows due to the global financial crisis in 2008. The positive coefficients of the dummies from 2010 to 2013 are explained by the recovering economy and aviation industry after the global financial crisis.
In sum, the opening of HSR services initially has a minor influence on the number of air passenger flows in China compared to Europe. However, after two years' operation following the entry of HSR services, the negative impacts of operating HSR services start to increase.
Specific effects of HSR transportation variables on air travel passengers in the temporal and spatial dimensions
Table 4 shows the results of the unbalanced panel data analysis, focusing only on city pairs with HSR services between 2007 and 2013. Given that there may exist autocorrelation and heteroscedasticity issues, we include the year dummy variables to control for any changes over time and cluster the standard errors by city pair, which accounts for serial correlation and heteroskedasticity. The within effects in the temporal dimension show the impacts of variations of HSR transportation variables on air travel passengers over time within a given city pair, while the between effects in the spatial dimension show the impacts of variations of HSR transportation variables on air travel passengers between different city pairs.
Based on the results for the within effects, we observe that a 10% increase in the frequency of HSR services within a city pair leads to a 5.2% reduction in air travel passengers. The coefficients of ticket fare and travel time are not significant. This is reasonable because the ticket fare and travel time of HSR services in the temporal dimension for a city pair hardly vary after the start of HSR services. The National Development and Reform Commission (NDRC), rather than the China Railway Corporation (CRC), has the authority to decide the ticket fare for each city pair according to travel distance. To increase the use of HSRs, the NDRC did not allow the CRC to adjust HSR ticket fares according to market mechanisms before 2016. Therefore, the ticket fare of HSR services for a city pair after a few years of operation is almost the same as at the beginning of operation. In addition, as the duration of operating HSR services increases, similar to the case of ticket fares, the travel time in the temporal dimension cannot be reduced to a large extent without new technological breakthroughs in the operational speed of HSR services. Note that a national speed reduction of HSR services occurred after the D train crash in 2011, when the government decided to reduce the operational speeds of both D and G trains. Even though the travel speed of G trains decreased from 350 to 300 km/h and that of D trains from 250 to 200 km/h, the influence of travel time in the temporal dimension is still rather limited.
With regard to the control variables, a 10% increase in population corresponds to a 55% increase in air passenger flows, which is remarkable compared to European countries (Clewlow et al., 2014). This is reasonable because, with the fast urbanisation process of the last ten years in China, more and more people have migrated from rural areas into urban areas, which creates a potential market of induced air passengers; furthermore, the cities connected by HSR and air are also the major nodes in China, with fast-growing air passenger flows diverted from other low-speed transportation modes. The coefficient of the airline ticket fare is no longer significant in this model compared to the previous balanced panel model, which means that, facing competition from HSR, the strategy of lowering air ticket fares in the long run will not contribute to the improved competitiveness of airlines.
Furthermore, from the results for the between effects, most importantly, a 10% increase in the travel time of HSR services between city pairs leads to a 30% increase in air passenger flows. Our research confirms that variations of travel time between city pairs in the spatial dimension, instead of variations of travel time within a city pair in the temporal dimension, are important in explaining differences in air passenger flows between city pairs. This means that, for a given HST travel speed, decreasing the travel time by limiting the number of intermediate stops between city pairs is an efficient way to reduce air travel passengers. Interestingly, a positive relationship exists between the frequency of HSR services and airline travel passengers in the spatial dimension: city pairs with a higher frequency of HSR services tend to have more airline travel passengers. It is likely that the city pairs with a higher frequency of HSR services are normally the ones with higher GDP per capita and population, which generate more intercity travel (Dobruszkes et al., 2014). Due to the potential correlation between the frequency of HSR services and the socio-economic status of city pairs in the spatial dimension, it is the variation of frequency in the temporal dimension, rather than in the spatial dimension, that actually influences air travel passengers. In addition, we observe that the coefficient of the HSR ticket fare is not significant, as this effect may already have been captured by the travel time variable as a result of the fixed HSR ticket fare mechanism. With regard to the control variables, city pairs with higher GDP per capita and population attract more air travel passengers. The coefficient of air ticket fare is still negative, since city pairs with higher air fares have lower numbers of air travel passengers. The flight time of city pairs is positively related to air passenger flows. This is interesting because it indicates that the competitiveness of air travel relative to HSR travel increases with increasing distance (flight time) between origin and destination, as the time savings from air transportation become larger. The coefficients of both access/egress time to/from stations and airports are not significant. This is reasonable since most HSR stations in China are located in the suburbs of cities, similar to the locations of airports. Wang et al. (2016) confirm that HSR stations, located on average 23.2 km away from the city center, had a slightly shorter travel time by road transportation than airports (32.6 km). Although it is not significant in this aggregate research, the difference in access and egress time to/from terminals might be significant in disaggregate research.
Overall, the growing urbanized population in cities, with increasing demand for long-distance travel, contributes to the fast growth in air travel passengers in China. Among the HSR transportation variables, the frequency of HSR services in the temporal dimension and the travel time of HSR in the spatial dimension are the crucial factors in the competition between HSR and airlines in China.
The specific impact of HSR services according to travel distances
Table 5 shows the results only for the variables of our main interest, namely, the frequency of general HSR services in the temporal dimension and the travel time of general HSR services in the spatial dimension. We further separate the general HSR services into D and G train services.
On short-haul city-pair markets with a flight distance of less than 600 km, an increase in general HSR frequency in the temporal dimension has a negative impact on air passenger flows. Also, the travel time of HSR services in the spatial dimension is elastic with respect to air passenger flows. We also find that the coefficient of the frequency of D trains is significant, whereas the coefficients of the frequency and travel time of G trains and the coefficient of the travel time of D trains are not significant. This indicates that, for city pairs with a flight distance of less than 600 km, HSR operators are better off increasing the total number of HST frequencies, especially D trains, than increasing only the frequency of G trains.
On medium-haul city-pair markets with distances between 600 and 1100 km, the travel time of general HSR services and that of G trains in the spatial dimension are elastic with respect to air passenger flows. The coefficient of the frequency of G trains in the temporal dimension is significantly negative with respect to air passenger flows. This means that on medium-haul markets, increasing the frequency and reducing the intermediate stops of G train services will be more efficient for improving the overall competitiveness of HSR than doing so for both G and D train services.
For long-haul city-pair markets with a flight distance of over 1100 km, the sample sizes for both D and G trains are not large enough for the analysis, so here we only report the results for general HSR services. Neither the frequency in the temporal dimension nor the travel time in the spatial dimension is elastic with respect to air passenger flows. Thus, we can conclude that within this travel distance, neither a reduction of travel time in the spatial dimension nor an increase in frequency in the temporal dimension will improve the competitiveness of HSR services.
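The distance-based split used above can be encoded straightforwardly; a small hypothetical sketch using the 600 km and 1100 km thresholds from the text (route names and distances are invented):

```python
import numpy as np
import pandas as pd

# Hypothetical route-level data: flight distance in km (illustration only).
routes = pd.DataFrame({"pair": [f"pair{i}" for i in range(10)],
                       "distance_km": np.linspace(300, 1800, 10)})

# Three competitive categories: HSR-dominated (<600 km), shared (600-1100 km),
# and airline-dominated (>1100 km).
routes["category"] = pd.cut(routes["distance_km"],
                            bins=[0, 600, 1100, np.inf],
                            labels=["hsr_dominated", "shared", "air_dominated"])

for cat, sub in routes.groupby("category", observed=True):
    # In the full analysis, the within-between model would be re-estimated on
    # each subgroup; here we only show the split.
    print(cat, list(sub["pair"]))
```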
Conclusions
This study explores the ex-post intermodal relationship between HSR and air transportation at the route level in China. By means of balanced and unbalanced panel data from 2007 until 2013, we explain HSR's general potential to reduce air passenger flows and the relevant specific transportation variables influencing air passenger flows in two geographic dimensions: temporal and spatial.
First, by focusing on the impact of the entry of HSR services and the duration of operating HSR services on air passenger flows since entry, our research shows that, after controlling for socio-economic impacts on air travel demand, the entry of HSR services in general leads to only a 27% reduction in air travel passengers on routes with intermodal competition between HSR and air, which is similar to the findings of Chen (2017) and Zhang et al. (2017). This is a relatively modest negative impact compared with studies in Spain, which report a more than 50% reduction in airline seats after the entry of HSR services. However, after two years' operation of HSR services in China, the negative impact of HSR services on air passenger flows tends to increase. This reflects the typical case of China, where the substitution effect of HSR networks in a fast-growing aviation market (in contrast to the more mature European market) is not significant for the growth of air passenger travel, at least at the initial stage, until more HSR routes have been opened and travelers' awareness of the new service grows.
Second, our research confirms that variations of frequency in the temporal dimension and of travel time in the spatial dimension are significant factors in explaining air passenger flows on city-pair markets where both modes compete. The frequency component in the temporal dimension indicates that HSR can improve its competitive position with respect to airlines if the HSR frequency on the route is increased. The travel time component in the spatial dimension shows that HSR has a better competitive position on city pairs with a shorter HSR travel time, as on these routes airlines have a relatively limited travel time advantage. The frequency in the spatial dimension is merely an approximation of the economic level of city pairs, reflecting travel demand. The impact of travel time in the temporal dimension is rather limited, even though there was a travel speed reduction for HSR services during the period of analysis. In contrast to findings that HSR ticket fares are elastic with respect to market share in Europe (Adler et al., 2010; Behrens & Pels, 2012), the ticket fares of HSR in both the temporal and spatial dimensions are not strongly related to air passenger flows in China, as a result of the fixed HSR ticket fare mechanism under government control. Fares can probably play an important role in competition only when they fluctuate according to the market. Our research further confirms that short HSR routes in the spatial dimension and a high frequency of HSR services in the temporal dimension can definitely increase the competitiveness of HSR relative to airlines.
Table 5 Results of the frequency and travel time for short-, medium-, and long-haul travel.
While our research has identified the reaction of airlines to HSR services between 2007 and 2013, we note that from January 2016 HSR ticket fares were no longer under the control of the NDRC (NDRC, 2015). HSR operators acquired the right to price train seats largely based on market demand. Thus, future research could investigate how flexible ticket fares influence the intermodal relationship between HSR and airlines in both the temporal and spatial dimensions. Second, although the aggregate time cost of intra-city trips to/from terminals is not significant in this research, future research could study whether individual differences in the disaggregate time cost of intra-city trips to/from terminals influence the competitive relationship between HSR and airlines. Third, because long-distance transportation networks evolve over time and shape demand to some extent, it would be of interest to take into account the recent conditions of both HSR and airlines from the perspective of the demand side (actual O/D passenger flows), especially regarding the expansion of low-cost airlines in China after 2013. Moreover, specific subsidy schemes for operating companies, air operations, airports and air traffic control, which vary from case to case, could also influence the cooperative relationship between HSR and air travel. Detailed case study research could shed light on this.
2021-05-11T01:16:18.342Z | 2021-05-09T00:00:00.000Z | 234341589 | s2orc/train | Non-iterative Optimization Algorithm for Active Distribution Grids Considering Uncertainty of Feeder Parameters
To cope with fast-fluctuating distributed energy resources (DERs) and uncontrolled loads, this paper formulates a time-varying optimization problem for distribution grids with DERs and develops a novel non-iterative algorithm to track the optimal solutions. Different from existing methods, the proposed approach does not require iterations during the sampling interval. It only needs to perform a single one-step calculation at each interval to obtain the evolution of the optimal trajectory, which demonstrates fast calculation and online-tracking capability with an asymptotically vanishing error. Specifically, the designed approach contains two terms: a prediction term tracking the change in the optimal solution based on the time-varying nature of system power, and a correction term pushing the solution toward the optimum based on Newton's method. Moreover, the proposed algorithm can be applied in the absence of an accurate network model by leveraging voltage measurements to identify the true voltage sensitivity parameters. Simulations for an illustrative distribution network are provided to validate the approach.
I. INTRODUCTION
Distributed energy resources (DERs) are envisioned to be dispersed in future distribution networks through power electronic devices to reduce carbon footprints [1]. Under the influence of ambient conditions, some renewable energy sources, such as photovoltaic (PV) systems and wind turbines (WTs), show fast time-varying characteristics. For example, the output of PV may change dramatically within a few seconds when affected by clouds, easily causing network security and voltage quality problems [2]. Additionally, DC/AC converter and electronic modulation technologies are widely used in distributed power supplies [3], which enable them to be rapidly regulated, e.g., in 0.02 s (for 50 Hz frequency). This feature suggests that DERs' fast-responding characteristics can enhance the active operation and regulation ability of distribution networks if the optimal determination of DERs' setpoints can match their adjustment time.
The main goal of time-varying optimization methods is to track the optimal trajectories of continuously varying optimization problems (within the allowable error range) [4]. A natural approach is to sample the problems at specific times and then solve the resulting sequence of time-invariant optimization problems using iterative algorithms [5]. Offline algorithms [6,7] are used to solve problems that change slowly over time. A dual-subgradient method was proposed in [8] to seek inverter setpoints of PVs to bridge the temporal gap between long-term system optimization and real-time inverter control. In fast-changing settings, measurement-based online methods [2], [9]-[13] have been developed on a timescale of seconds. A feedback control strategy for optimal reactive power setpoints for microgenerators was proposed in [10] to provide real-time reactive power compensation. Online primal-dual-type methods were applied to develop real-time feedback algorithmic frameworks for time-varying optimal power flow problems in [11]-[13].
The critical feature shared by the above-mentioned methods is that they only utilize the current information of the problem parameters. In other words, these algorithms do not perform a "prediction" step; rather, they only carry out "correction" steps once the current information is obtained [4]. These approaches need iterations during the sampling interval to converge toward the optimum of the sampled time-invariant problem, while the actual solution drifts with time. Therefore, these approaches are likely to induce a large tracking error [14]. To reduce tracking error, discrete-time prediction-correction (PC) methods were proposed in [15]-[21], utilizing prediction information about the problem parameters to identify the change of the optimum and then correcting the prediction based on the newly acquired information. Specifically, Simonetto et al. [21] proposed a discrete-time PC approach to implement time-varying convex optimization and extended it to DER operation optimization in distribution systems, while the other algorithms in [15]-[20] have so far been applied only to simple mathematical examples. However, the discrete PC methods still cannot fully identify the dynamics within the interval, and their asymptotic error depends on the length of the sampling interval and the number of "correction" iterations. To track the optimal solution with an asymptotically vanishing error, a continuous-time PC method was designed for an unconstrained optimization problem [22]. In [14], the authors developed a PC interior-point method to solve constrained time-varying convex optimization problems and implemented it in a sparsity-promoting least squares problem and a collision-free robot navigation problem. In this paper, we will extend this method to develop a non-iterative PC algorithm to identify the optimal power setpoints of DERs and network voltages.
The exact relationship between nodal voltages and power injections in distribution networks is nonlinear, which makes the optimization model non-convex and difficult to solve. A pivotal approach in most online methods is to approximate the nonlinear power flow by a linear model, e.g., in [2], [10]-[13], [21]. However, these schemes assume that an ideal model of the distribution network is available, so they may not work properly when an accurate model is not available or feeder parameters change. For this situation, data-driven methods [23]-[26] provide some good ideas. A least squares estimator was utilized in [23] to compute voltage magnitude and power loss sensitivity coefficients in a low voltage network. Forward and inverse regression methods [24] and a recursive weighted least squares method [25] were studied for the case in which the model of the distribution system is not completely known. Different from the approaches mentioned above that estimate all the sensitivity elements (requiring a large number of measurements and a high sampling rate), the authors in [26] proposed an approach to reduce the number of parameters to be estimated by exploiting the structural characteristics of balanced radial distribution networks. In this paper, we adopt a similar strategy for distribution networks, in which voltage measurements throughout the feeder are collected to identify the true voltage sensitivity parameters based on a linear power flow model.
To better track the dynamics within the interval, a non-iterative PC algorithm for distribution grids is developed in this paper to identify the time-varying optimal power setpoints of DERs with an asymptotically vanishing error. The fast-changing nature of renewable energy resources and uncontrolled loads is taken into account in the prediction term in order to identify the change of the optimum; a correction term based on Newton's method pushes the solution toward the optimum. Moreover, in the presence of uncertainty in the feeder parameters or the absence of an exact model of the system, voltage sensitivity parameters are identified from measurement information. By doing so, the optimal trajectories of DERs' power setpoints and network voltages automatically adapt to system disturbances, such as renewable energy sources, uncontrollable loads, and system model parameters.
The main contributions of this paper follow. 1) A non-iterative algorithm with "prediction" and "correction" terms is constructed to track the optimal solutions of distribution grids' time-varying optimization with an asymptotically vanishing error. Compared with the existing methods, the proposed method is considerably faster because of its non-iterative nature, which makes the algorithm suitable to cope with the rapid changes of renewable energy resources. For example, the proposed algorithm can achieve fast calculation, so as to match the adjustment time of the inverter (i.e., 0.02 s). Moreover, the designed algorithm, based on "prediction" and "correction" terms, can obtain the evolution of the optimal trajectory, which demonstrates online-tracking capability with an asymptotically vanishing error.
2) Exploiting the structural characteristics of distribution networks, voltage sensitivity matrices are obtained from a few online measurements, in contrast to methods that estimate parameters offline and then apply them online. This makes the designed approach well suited to applications in which an accurate network model is unavailable or feeder parameters vary, without incurring extra computational burden.
The remainder of this paper is organized as follows. Section II formulates the time-varying optimization problem of distribution grids with DERs. In Section III, a non-iterative PC algorithm based on voltage measurements is proposed. Section IV presents simulation results on an illustrative system. The conclusion is provided in Section V.
II. PROBLEM FORMULATION
Consider a distribution feeder composed of n + 1 nodes collected in the set N ∪ {0} with N: = {1, …, n} and L distribution lines. Node 0 represents the point of common coupling (PCC) or substation. A time-varying minimization problem that captures the optimization objective, operation constraints, and the power flow equations is formulated as (1), which enables us to capture the variability of ambient conditions and noncontrollable energy assets.
where C_i(P_gi, Q_gi, t) = C_pi(P_gi − P_i^tar(t))² + C_qi Q_gi² represents the cost objective associated with DER i, and C_pi and C_qi are cost coefficients. Specifically, for PV systems and WTs, the real power setpoint P_gi is limited by the maximum available real power of the PV or WT at node i at time t, and the feasible set of DER i is further defined by the reactive power limitation of DER i and its power factor angle.
As for energy storage systems (ESSs), the setpoints are constrained analogously by the corresponding charging and discharging limits. The nodal voltages are approximated by the linear model (2), in which c = 1_n is an n-dimensional all-ones vector and the parameters, which are called voltage sensitivities (see, e.g., [26]), are time varying and can be estimated from a few measurements using an effective data-driven algorithm; Section III describes in detail how these time-varying parameters are estimated. The time-varying optimization problem (1) can then be rearranged into the quadratic form (3), where the decision variables collect the setpoints of the DERs at nodes i ∈ N and 0 denotes a (1 × n)-dimensional vector whose entries are all 0.
A. Non-iterative Prediction-Correction Algorithm
In this section, we develop a non-iterative PC method to solve the time-varying optimization problem. Let u*(t) be the optimal solution of problem (3). Notice that the time-varying inequality constraints in (3b) and (3c) can be compactly written as f_i(u, t) ≤ 0 for i ∈ {1, 2, …, p}, or i ∈ [p]. The following barrier function [5] is used to fold the inequality constraints into the objective of problem (3), yielding the barrier-augmented objective Φ(u, s, c, t) in (4): the barrier parameter c(t) is an increasing function satisfying lim_{t→∞} c(t) = ∞, and the slack s(t) is a decreasing function satisfying lim_{t→∞} s(t) = 0, introduced to ensure that s(t) > f_i(u, t), i ∈ [p], holds for all times t ≥ 0. In particular, choosing s(0) ≥ max_{i∈[p]} {f_i(u(0), 0)} is sufficient for this to hold, as verified in [14]. In the limit, the barrier term takes the value 0 when the inequality constraints are satisfied and +∞ otherwise.
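As a concrete illustration, the following minimal sketch implements one standard log-barrier-with-slack construction; the exact barrier of (4) is not reproduced in the text, so the function names and the specific 1/c-weighted logarithmic form are assumptions for illustration only.

import numpy as np

def barrier_objective(u, t, cost, constraints, c, s):
    """Barrier-augmented objective: cost plus a log-barrier on the
    slack-shifted inequality constraints f_i(u, t) <= 0.
    `cost` and each element of `constraints` are callables of (u, t);
    c and s are the barrier parameter and slack at time t (assumed forms)."""
    total = cost(u, t)
    for f in constraints:
        margin = s - f(u, t)          # must stay positive
        if margin <= 0.0:
            return np.inf             # slackened constraint violated
        total -= np.log(margin) / c   # vanishes as c -> infinity
    return total

As c(t) grows and s(t) shrinks, the extra term approaches 0 on the feasible set and +∞ outside it, matching the limiting behavior described above.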
Observe that the optimal solution u*(t) of (4) should satisfy the first-order optimality condition ∇_u Φ(u*(t), s(t), c(t), t) = 0, and thus its total time derivative should also vanish, giving formula (5):

∇_uu Φ · u̇*(t) + ∇_us Φ · ṡ(t) + ∇_uc Φ · ċ(t) + ∇_ut Φ = 0,    (5)

where u̇ denotes the total derivative of u with respect to t, and the mixed partial derivatives capture the dependence of the optimality condition on s, c, and t. Solving (5) for u̇*(t), we obtain

u̇*(t) = −(∇_uu Φ)⁻¹ (∇_us Φ · ṡ(t) + ∇_uc Φ · ċ(t) + ∇_ut Φ).    (6)

Equation (6) is called the "prediction" term, which is used to predict how the optimal solution changes over time by considering the time-varying characteristics of the problem parameters. However, if we cannot obtain the initial optimal solution u*(0), or any optimal solution u*(t₀) for some t₀ ≥ 0, then (6) alone cannot be relied on to track the evolution of u*(t). To push the solution toward the optimum, Newton's method is applied to Φ(u, s, c, t) so that the trajectory rapidly converges to its minimizer u*(t); the continuous-time version yields the following correction term:

u̇(t) = −(∇_uu Φ)⁻¹ ∇_u Φ.    (7)

By combining (6) and (7), a complete prediction-correction dynamic system can be obtained:

u̇(t) = −(∇_uu Φ)⁻¹ (∇_us Φ · ṡ(t) + ∇_uc Φ · ċ(t) + ∇_ut Φ + Γ ∇_u Φ),    (8)

where Γ = αI for some α > 0, and I denotes an identity matrix.
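To make the prediction-correction flow concrete, the sketch below integrates dynamics of the form (8) with forward Euler for a toy unconstrained time-varying quadratic objective (so the barrier terms drop out); the objective, gain, and step size are illustrative assumptions, not the paper's distribution-grid model.

import numpy as np

def reference(t):                      # time-varying target r(t)
    return np.array([np.sin(t), 0.5 * np.cos(2 * t)])

def reference_dot(t):                  # its analytic time derivative
    return np.array([np.cos(t), -np.sin(2 * t)])

def pc_step(u, t, alpha):
    """One prediction-correction evaluation for Phi(u, t) = 0.5*||u - r(t)||^2:
    grad = u - r(t), Hessian = I, d(grad)/dt = -r_dot(t)."""
    grad = u - reference(t)
    prediction = reference_dot(t)      # -H^{-1} * d(grad)/dt
    correction = -alpha * grad         # -Gamma * H^{-1} * grad
    return prediction + correction

tau, alpha = 0.02, 5.0                 # step size and correction gain (assumed)
u = np.zeros(2)
for k in range(500):                   # forward-Euler integration, one step per interval
    t = k * tau
    u = u + tau * pc_step(u, t, alpha)
print("tracking error:", np.linalg.norm(u - reference(500 * tau)))

Because the prediction term carries the drift of the optimum, the Euler loop needs only a single evaluation per interval, mirroring the non-iterative nature of (8).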
Notice that in the dynamic system (8), the calculation of the control increment u̇ does not require iteration; it is obtained directly from the right-hand side of (8). Another point worth mentioning is that the dynamical system (8) includes the prediction term, whose computation involves the drift of the DERs' available power and of the uncontrollable loads over time. In online applications, perhaps only limited information about these terms is available. Here, we assume knowledge of the time-varying power over the next 1 s. Hermite polynomials can then be used to approximate the continuous-time trajectory of the data sets with the desired level of accuracy [27].
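The time derivatives entering the prediction term can be approximated from such short-horizon samples; the sketch below uses a cubic spline (a stand-in for the Hermite-polynomial fit mentioned above) to interpolate 1 s of assumed available-power samples and evaluate the derivative, with the sample values and spacing chosen purely for illustration.

import numpy as np
from scipy.interpolate import CubicSpline

# Assumed 1 s of available-power forecast samples (kW), sampled every 0.1 s.
t_samples = np.linspace(0.0, 1.0, 11)
p_samples = np.array([50.0, 50.4, 51.1, 51.9, 52.2, 51.8,
                      51.0, 50.3, 49.9, 49.7, 49.6])

spline = CubicSpline(t_samples, p_samples)   # smooth trajectory through the samples
p_dot = spline.derivative()                  # callable dP/dt

t_now = 0.37
print("P(t) ~", spline(t_now), "kW, dP/dt ~", p_dot(t_now), "kW/s")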
Specifically for problem (3), the parameters in the dynamic system (8) can be expressed in closed form: let e_i be a basis vector whose i-th element is 1 and whose other elements are 0; the specific parameters in (9) then follow by direct differentiation of the barrier-augmented objective.
B. Estimation of Voltage Sensitivity Parameters
In this section, the nonlinear relationship between V, P, and Q is approximated by the linear model (2), and the time-varying sensitivity parameters A and B are estimated from measurements. Let M ∈ ℝ^((n+1)×L) represent the node-to-branch incidence matrix of the distribution network, and let M̃ denote the matrix obtained from M by removing the row associated with the PCC. In balanced radial distribution networks, the voltage sensitivity matrices can be expressed in terms of M̃ and two L × 1 vectors r and x, whose elements represent the correlation coefficients between two connected nodes and are equivalent to the line resistance and line reactance, respectively, under no-load conditions. Assuming that the network topology configuration M (which does not change frequently within a short period) and the distribution-line resistance-to-reactance ratios are known, (15) is equivalent to a classical linear regression problem whose closed-form (pseudoinverse) solution is given in (20). Notice that (20) is a closed-form formulation, related only to known parameters (e.g., network topology parameters and power injections) and a few measurements. Therefore, the proposed algorithm can be extended to applications in the absence of an accurate network model without incurring extra computational burden.
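A minimal sketch of measurement-based sensitivity estimation follows: it stacks the linear model V ≈ A·P + B·Q + c over a short window of voltage and injection snapshots and recovers the sensitivities by ordinary least squares. This generic regression does not exploit the incidence-matrix structure used above to reduce the number of unknowns; the window length and dimensions are illustrative assumptions.

import numpy as np

def estimate_sensitivities(V, P, Q):
    """Estimate A, B, c in V ~ P @ A.T + Q @ B.T + c from T snapshots.
    V, P, Q are (T, n) arrays of voltage magnitudes and net injections."""
    T, n = V.shape
    X = np.hstack([P, Q, np.ones((T, 1))])          # (T, 2n+1) regressors
    Theta, *_ = np.linalg.lstsq(X, V, rcond=None)   # (2n+1, n) coefficients
    A = Theta[:n, :].T                              # dV/dP sensitivities
    B = Theta[n:2 * n, :].T                         # dV/dQ sensitivities
    c = Theta[-1, :]                                # constant term
    return A, B, c

# Tiny synthetic check: 3-node feeder, 40 snapshots (hypothetical data).
rng = np.random.default_rng(0)
n, T = 3, 40
A_true = 0.01 * rng.random((n, n)); B_true = 0.02 * rng.random((n, n))
P = rng.random((T, n)); Q = rng.random((T, n))
V = P @ A_true.T + Q @ B_true.T + 1.0
A_hat, B_hat, c_hat = estimate_sensitivities(V, P, Q)
print(np.allclose(A_hat, A_true, atol=1e-8), np.round(c_hat, 3))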
C. Proposed Algorithm Framework
The schematic diagram of the presented algorithm framework is highlighted in Fig. 1. A dynamical system consisting of "prediction" and "correction" terms for distribution grids is developed to identify the time-varying optimal power setpoints of DERs and network voltages, which automatically adapt to system disturbance, such as renewable energy sources, uncontrollable loads, and system model parameters.
In actual execution, the output power increments of DERs are obtained based on the predictive power parameters in the ultra-short time and their gradients with respect to time, as well as the voltage measurements. Setpoints of DERs are obtained by integrating the increments over time, which will affect the system voltage through network constraints. After obtaining the new power parameters and network voltages, the new increments of DER setpoints can be calculated. That is to say, in each interval, the proposed algorithm only needs to perform a single one-step calculation instead of performing multiple iterations while guaranteeing a good tracking performance.
IV. SIMULATION
The performance of the proposed method was evaluated using several numerical experiments on a modified IEEE 33-bus distribution network [28]. The first experiment, under normal time-varying conditions, was carried out to verify the tracking performance of the proposed non-iterative algorithm against the optimal trajectories, together with comparisons with iterative algorithms and fixed-sensitivity methods. The second experiment, in which one PV suddenly halts and then resumes operation, was presented to demonstrate the robustness of our approach under abnormal scenarios. The third experiment tested a network reconfiguration scenario to show the benefits of the method's adaptivity to changes in the system model. The dynamic optimization system illustrated in Fig. 1 was established in a MATLAB/Simulink platform. We numerically solve the dynamical system in (8) with step size τ = 0.02 s. The computer was equipped with an Intel(R) Core(TM) i7-4800MQ processor with 16 GB of RAM. Fig. 2 shows the topology of the IEEE 33-bus distribution network, where PVs were located at nodes {2, 6, 11, 16, 19, 24, 28}, WTs were located at nodes {13, 26}, and an ESS was located at node {31}. The available real power of DERs [29] and real power loads [30] are given in Fig. 3. All loads are assumed to follow the daily real power profiles shown in Fig. 3.
A. Dynamic Trajectories under Normal Time-Varying Conditions
First, a test under normal time-varying conditions was carried out at 12:00-12:01 p.m. Gaussian-distributed perturbations were added to the rated line reactance parameters specified in [28] to simulate uncertain feeder parameters.
1) Results comparison with exact optimal solution: To verify the efficiency and tracking performance of the proposed algorithm, exact optimal trajectories obtained by sampling the problem every 0.02 s and solved using the solver of the "yalmip" toolbox in MATLAB are used for comparison. Notice that here we assume that the solver has sufficient time to obtain the optimal solution within each sampling interval, whose real computing time will be analyzed later. The trajectories of power outputs of DERs and voltage magnitudes of some example nodes are depicted in Fig. 4. The dashed lines represent the track trajectories calculated by the proposed algorithm, while the solid lines represent the exact optimal trajectories.
As shown in Fig. 4, starting from the initial values, trajectories of the control variables (i.e., the real and reactive power setpoints of DERs) and voltage magnitudes calculated by the proposed method gradually approached the optimal solution, tracked the exact optimal trajectories after about 1.5 s, and stayed on that optimum in the subsequent time-varying process. This means that as long as we start the algorithm, we will be able to identify the exact optimum within a short period of time (e.g., 1.5 s) compared to the entire tracking time, which can be set according to the specific situation and is set to 60 s in this case. The algorithm will remain on the optimal trajectory regardless of whether the parameters change over time. To demonstrate the accuracy of our proposed method, we plot the L2-norm of the errors of the control variables and the objective value versus t in Fig. 5. It can be seen that the tracking errors drop rapidly and remain within a small range, such as 10⁻³-10⁻⁴ for the control variables and 10⁻⁴-10⁻⁵ for the objective value. 2) Results comparison with iterative algorithms: To evaluate the advantages of the proposed non-iterative algorithm, we compared the proposed algorithm with some iterative algorithms, including the primal-dual algorithm in [12] (without a "prediction" step) and the discrete-time PC algorithm in [19]. For the iterative algorithms, we sample the time-varying optimization problem every 1 s and iterate multiple times in the interval until convergence. The exact optimal trajectories, obtained in the same way as in 1), were used for comparison. The results for the real power setpoint of WT13 are shown in Fig. 6. In terms of tracking performance, it can be seen that the trajectory obtained by the proposed algorithm (the green dotted line) tracks the exact optimal trajectory (the green solid line) well; the tracking error is so small that it is almost negligible. The result obtained by the primal-dual algorithm (the orange solid line) always lags behind the exact optimal trajectory, since it tends to converge toward the optimal solution of the sampled problem while the actual optimum drifts with time, leading to a steady-state deviation. The discrete-time PC algorithm incorporates the prediction of time-varying parameters, so its result (the blue solid line) shows little deviation at the sampling points. However, it still cannot fully capture the dynamics in the interval, leading to a large error within the sampling interval.
We compare the computation time of each algorithm in Table I. As can be seen, it took the primal-dual and discrete-time PC algorithms 0.79 s and 0.661 s, respectively, to produce solutions with tracking error. It took 0.0117 s for the proposed algorithm to iterate one step, while the optimal solution can be obtained in each iteration. This means that it can be applied in real time at intervals of 0.02 s to match the DER's regulation time. According to our tests, the real computing time for the solver to solve the sampling problem is 0.419 s, which means that it is difficult for the solver to obtain the optimal solution in the application interval of 0.02 s in practice. Here, we obtain the exact optimal solution just for the sake of comparison, regardless of the time cost.
3) Results comparison with fixed sensitivity method:
To evaluate the advantages of the sensitivity estimation method, we compared in Fig. 7 the tracking errors of the proposed algorithm with estimated sensitivities and of a simplified version of the proposed algorithm with fixed sensitivities (calculated from the rated line parameters [32]). As shown, the errors of both the control variables and the objective value for the fixed-sensitivity method are larger and more oscillatory than those of the proposed estimated-sensitivity method.
B. Dynamic Trajectories when One PV Is Abnormal
To test the robustness performance of the proposed method in an abnormal scenario, we assumed that PV28 halted operation at time T = 12:00:25 and resumed operation after 10 s, and showed the test results in Fig. 8.
As shown in Fig. 8, when PV28 suddenly halts operation, the other DERs react and reach new optima within a short time. In particular, the ESS deployed at node 31 switches from the original charging state to the discharging state to make up the power deficiency of the system (shown in the upper panel of Fig. 8). When PV28 resumes operation, ESS31 returns to the charging state.

Fig. 9. IEEE 33-bus distribution network reconfiguration topology.
A network reconfiguration scenario was tested at time T = 12:00:30 where bus 8 was connected to bus 29, and the line between bus 5 and bus 25 was disconnected, as shown in Fig. 9. Fig. 10 shows results of power outputs of DERs and voltage magnitudes, from which we can see that the trajectories obtained by the proposed algorithm can return quickly to the optimal trajectories when the network topology changes, taking about 0.5 s.
V. CONCLUSION
In the present study, a non-iterative algorithm was designed to track the time-varying optimal power setpoints of DERs and network voltages for distribution grids with an asymptotically vanishing error. Considering the fast-changing nature of renewable energy resources and uncontrolled loads, we derive a prediction term to identify the change of the optimum; in addition, a correction term is formed based on Newton's method to push the solution toward the optimum. The proposed algorithm can be applied online in the absence of an accurate network model. Simulation results demonstrate that the proposed method is applicable for fast-changing loads and power generations, and variable feeder parameters. | v2 |
2022-07-18T02:20:34.865Z | 2010-01-01T00:00:00.000Z | 250899619 | s2orc/train | THE LUMINOSITY AND MASS FUNCTIONS OF LOW-MASS STARS IN THE GALACTIC DISK. II. THE FIELD
We report on new measurements of the luminosity function (LF) and mass function (MF) of field low-mass dwarfs derived from Sloan Digital Sky Survey Data Release 6 photometry. The analysis incorporates ∼15 million low-mass stars (0.1 M⊙ < M < 0.8 M⊙), spread over 8400 deg². Stellar distances are estimated using new photometric parallax relations, constructed from ugriz photometry of nearby low-mass stars with trigonometric parallaxes. We use a technique that simultaneously measures Galactic structure and the stellar LF over 7 < Mr < 16. We compare the LF to previous studies and convert to an MF using the mass-luminosity relations of Delfosse et al. The system MF, measured over −1.0 < log(M/M⊙) < −0.1, is well described by a lognormal distribution with M0 = 0.25 M⊙. We stress that our results should not be extrapolated to other mass regimes. Our work generally agrees with prior low-mass stellar MFs and places strong constraints on future theoretical star formation studies.
INTRODUCTION
Low-mass dwarfs (0.1 M⊙ < M < 0.8 M⊙) are, by number, the dominant stellar population of the Milky Way. These long-lived (Laughlin et al. 1997) and ubiquitous objects comprise ∼70% of all stars, yet their diminutive luminosities (L ≲ 0.05 L⊙) have traditionally prohibited their study in large numbers. However, in recent years, the development of large-format CCDs has enabled accurate photometric surveys over wide solid angles on the sky, such as the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) and the Sloan Digital Sky Survey (SDSS; York et al. 2000). These projects obtained precise (σ ≲ 5%) and deep (r ∼ 22, J ∼ 16.5) photometry of large solid angles (≳10⁴ deg²). The resulting photometric data sets contain millions of low-mass stars, enabling statistical investigations of their properties. In particular, 2MASS photometry led to the discovery of two new spectral classes, L and T (Kirkpatrick et al. 1999; Burgasser et al. 2002), and was used to trace the structure of the Sagittarius dwarf galaxy (Majewski et al. 2003) with M giants. SDSS data led to the discovery of the first field methane brown dwarf, which was the coolest substellar object known at the time of its discovery (Strauss et al. 1999). Other notable SDSS results include the discovery of new stellar streams in the halo (e.g., Yanny et al. 2003; Belokurov et al. 2006) and new Milky Way companions (e.g., Willman et al. 2005; Belokurov et al. 2007), as well as unprecedented in situ mapping of the stellar density (Jurić et al. 2008) and metallicity (Ivezić et al. 2008) distributions of the Milky Way and confirmation of the dual-halo structure of the Milky Way (Carollo et al. 2007). SDSS has proven to be a valuable resource for statistical investigations of the properties of low-mass stars, including their magnetic activity and chromospheric properties (West et al. 2004, 2008), flare characteristics (Kowalski et al. 2009), and their use as tracers of Galactic structure (GS) and kinematics (Bochanski et al. 2007a; Fuchs et al. 2009).
Despite the advances made in other cool-star topics, two fundamental properties, the luminosity and mass functions, remain uncertain. The luminosity function (LF) describes the number density of stars as a function of absolute magnitude (Φ(M) = dN/dM). The mass function (MF), typically inferred from the LF, is defined as the number density per unit mass (ψ(M) = dN/dM). For low-mass stars, with lifetimes much greater than the Hubble time, the observed present-day mass function (PDMF) in the field should trace the initial mass function (IMF). Following Salpeter (1955), the IMF has usually been characterized by a power law ψ(M) = dN/dM ∝ M −α , with the exponent α varying over a wide range, from 0.5 to 2.5. However, some studies have preferred a lognormal distribution. Previous investigations are summarized in Table 1 (MF) and Table 2 (LF), which show the total number of stars included and solid angle surveyed in each study.
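For concreteness, the two functional forms referred to above can be written, in a generic Chabrier-style parameterization whose characteristic mass M0, logarithmic width σ, and normalization A are placeholders rather than the values adopted later in this work, as

ψ(M) = dN/dM ∝ M^(−α)  (power law),

ψ(log M) = A exp[ −(log M − log M0)² / (2σ²) ]  (lognormal).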
In Section 2, we describe the SDSS photometry used to measure the field LF and MF. The color-absolute magnitude calibration is discussed in Section 3. In Section 4, we introduce a new technique for measuring the LF of large, deep photometric data sets and compare to previous analyses. The resulting "raw" LF is corrected for systematic effects such as unresolved binarity, metallicity gradients, and changes in GS in Section 5. The final LF and our MF are presented in Sections 6 and 7. Our conclusions follow in Section 8.
SDSS Photometry
The SDSS (York et al. 2000; Stoughton et al. 2002) employed a 2.5 m telescope (Gunn et al. 2006) at Apache Point Observatory (APO) to conduct a photometric survey in the optical ugriz filters (Fukugita et al. 1996; Ivezić et al. 2007). The sky was imaged using a time-delayed integration technique. Great circles on the sky were scanned along six camera columns, each consisting of five 2048 × 2048 SITe/Tektronix CCDs, with an exposure time of ∼54 s (Gunn et al. 1998). A custom photometric pipeline (Photo; Lupton et al. 2001) was constructed to analyze each image and perform photometry. Calibration onto a standard star network (Smith et al. 2002) was accomplished using observations from the "Photometric Telescope" (PT; Hogg et al. 2001; Tucker et al. 2006). Further discussion of PT calibrations for low-mass stars can be found in Davenport et al. (2007). Absolute astrometric accuracy is better than 0.1″ (Pier et al. 2003). Centered on the northern Galactic cap, the imaging data span ∼10,000 deg², and are 95% complete to r ∼ 22.2 (Stoughton et al. 2002; Adelman-McCarthy et al. 2008). When the north Galactic pole was not visible from APO, ∼300 deg² were scanned along the δ = 0 region known as "Stripe 82" to empirically quantify completeness and photometric precision (Ivezić et al. 2007). Over 357 million unique photometric objects have been identified in the latest public data release (DR7; Abazajian et al. 2009). The photometric precision of SDSS is unrivaled for a survey of this size, with typical errors ≲0.02 mag (Ivezić et al. 2004, 2007).
Sample Selection
We queried the SDSS Catalog Archive Server (CAS) through the casjobs Web site (O'Mullane et al. 2005) for point sources satisfying the following criteria.
1. The PRIMARY flag was set. This flag serves two purposes. First, it implies that the GOOD flag has been set, where GOOD ≡ !BRIGHT AND (!BLENDED OR NODEBLEND OR N_CHILD = 0). BRIGHT refers to duplicate detections of bright objects, and the other set of flags ensures that stars were not deblended and counted twice. The PRIMARY flag also indicates that objects imaged multiple times are counted only once.
2. The object was classified morphologically as a star (TYPE = 6).
3. The photometric objects fell within brightness and color limits chosen to select red point sources. The brightness cuts extend past the 95% completeness limits of the survey (i < 21.3, z < 20.5; Stoughton et al. 2002), but more conservative completeness cuts are enforced below. The color cuts ensure that the stars have red colors typical of M dwarfs (Bochanski et al. 2007b; Covey et al. 2007; West et al. 2008).
This query produced 32,816,619 matches. To ensure complete photometry, we required 16 < r < 22. These cuts conservatively account for the bright end of SDSS photometry, since the detectors saturate near the 15th magnitude (Stoughton et al. 2002). At the faint end, the r < 22 limit is slightly brighter than the formal 95% completeness limits. 23,323,453 stars remain after these brightness cuts. SDSS provides many photometric flags that assess the quality of each measurement. These flags are described in detail by Stoughton et al. (2002) and in the SDSS Web documentation. 8 With the following series of flag cuts, the ∼23 million photometric objects were cleaned to a complete, accurate sample. Since only the r, i, and z filters were used in this analysis, all of the following flags were only applied to those filters. The r-band distribution of sources is shown in Figure 2, along with the subset eliminated by each flag cut described below. The color-color diagrams for each of these subsets are shown in Figure 3. Saturated photometry was removed by selecting against objects with the SATURATED flag set. As seen in Figure 2, this cut removes mostly objects with r < 15. However, there were some fainter stars within the footprint of bright, saturated stars. These stars are also marked as SATURATED and not included in our sample. NOTCHECKED was used to further clean saturated stars from the photometry. This flag marks areas on the sky where Photo did not search for local maxima, such as the cores of saturated stars. Similarly, we eliminated sources with the PEAKCENTER set, where the center of a photometric object is identified by the peak pixel and not a more sophisticated centroiding algorithm. As seen in Figure 2, both of these flags composed a small fraction of the total number of observations and are more common near the bright and faint ends of the photometry. Saturated objects and very low signalto-noise observations will fail many of these tests.
The last set of flags examines the structure of the point-spread function (PSF) after it has been measured. The PSF_FLUX_INTERP flag is set when over 20% of the star's PSF is interpolated. While Stoughton et al. (2002) claim that this procedure generally provides trustworthy photometry, they warn of cases where this may not be true. Visual inspection of the (r − i, i − z) color-color diagram in Figure 3 confirmed the latter, showing a wider locus than other flag cuts. The INTERP_CENTER flag is set when a pixel within three pixels of the center of a star is interpolated. The (r − i, i − z) color-color diagram of objects with INTERP_CENTER set is also wide, and the fit to the PSF could be significantly affected by an interpolated pixel near its center (Stoughton et al. 2002). Thus, stars with these flags set were removed. Finally, BAD_COUNTS_ERROR is set when a significant fraction of the star's PSF is interpolated over, and the photometric error estimates should not be trusted. Table 3 lists the number of stars in the sample with each flag set. For the final "clean" sample, we defined the following metaflag:

clean = (!SATURATED_{r,i,z} AND !PEAKCENTER_{r,i,z} AND !NOTCHECKED_{r,i,z} AND !PSF_FLUX_INTERP_{r,i,z} AND !INTERP_CENTER_{r,i,z} AND !BAD_COUNTS_ERROR_{r,i,z} AND (16 < psfmag_r < 22)).
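The metaflag above can be applied programmatically once the individual flag bits have been expanded into boolean columns; the sketch below shows the equivalent mask with NumPy, where the flags dictionary and column names are hypothetical stand-ins for however the SDSS flag bits are decoded.

import numpy as np

def clean_mask(flags, r_psf):
    """Boolean 'clean' selection: no bad r, i, z flags and 16 < r < 22.
    `flags` maps names like 'SATURATED_r' to boolean arrays; `r_psf`
    is the array of r-band PSF magnitudes."""
    bad = np.zeros_like(r_psf, dtype=bool)
    for name in ("SATURATED", "PEAKCENTER", "NOTCHECKED",
                 "PSF_FLUX_INTERP", "INTERP_CENTER", "BAD_COUNTS_ERROR"):
        for band in "riz":
            bad |= flags[f"{name}_{band}"]
    return (~bad) & (r_psf > 16.0) & (r_psf < 22.0)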
After the flag cuts, the stellar sample was composed of 21,418,445 stars.
The final cut applied to the stellar sample was based on distance. As explained in Section 4.1, stellar densities were calculated within a 4 × 4 × 4 kpc 3 cube centered on the Sun. Thus, only stars within this volume were retained, and the final number of stars in the sample is 15,340,771. 9 In Figure 4, histograms of the r − i, i − z, and r − z colors are shown. These color histograms map directly to absolute magnitude, since color-magnitude relations (CMRs) are used to estimate absolute magnitude and distance. The structure seen in the color histograms at r−i ∼ 1.5 and r−z ∼ 2.2 results from the convolution of the peak of the LF with the Galactic stellar density profile over the volume probed by SDSS. Removing the density gradients and normalizing by the volume sampled constitutes the majority of the effort needed to convert these color histograms into an LF. The (g − r, r − i) and (r − i, i − z) color-color diagrams are shown in Figure 5, along with the model predictions of Baraffe et al. (1998) and Girardi et al. (2004). It is clearly evident that the models fail to reproduce the stellar locus, with discrepancies as large as ∼1 mag. These models should not be employed as color-absolute magnitude relations for low-mass stars.
Star-Galaxy Separation
With any deep photometric survey, accurate star-galaxy separation is a requisite for many astronomical investigations. At faint magnitudes, galaxies far outnumber stars, especially at the Galactic latitudes covered by SDSS. Star-galaxy identification is done automatically in the SDSS pipeline, based on the brightness and morphology of a given source. Lupton et al. (2001) investigated the fidelity of this process, using overlap between HST observations and early SDSS photometry. They showed that star-galaxy separation is accurate for more than 95% of objects to a magnitude limit of r ∼ 21.5. Since the present sample extends to r = 22, we re-investigated the star-galaxy separation efficiency of the SDSS pipeline. We matched the SDSS pipeline photometry to the HST Advanced Camera for Surveys (ACS) images within the COSMOS (Scoville et al. 2007) footprint. The details of this analysis will be published in a later paper (J. J. Bochanski et al., 2010, in preparation). In Figure 6, we plot the colors and brightnesses of COSMOS galaxies identified as stars by the SDSS pipeline (red filled circles), along with a representative subsample of 0.02% of the stars in our sample. This figure demonstrates that for the majority of the stars in the present analysis, the SDSS morphological identifications are adequate, and contamination by galaxies is not a major systematic.
CALIBRATION: PHOTOMETRIC PARALLAX
Accurate absolute magnitude estimates are necessary to measure the stellar field LF. Trigonometric parallaxes, such as those measured by Hipparcos (ESA 1997; van Leeuwen 2007), offer the most direct method for calculating absolute magnitude. Unfortunately, trigonometric parallaxes are not available for many faint stars, including the overwhelming majority of the low-mass dwarfs observed by SDSS. Thus, other methods must be employed to estimate a star's absolute magnitude (and distance). Two common techniques, known as photometric parallax and spectroscopic parallax, use a star's color or spectral type, respectively. These methods are calibrated by sources with known absolute magnitudes (nearby trigonometric parallax stars, clusters, etc.), and mathematical relations are fitted to their color (or spectral type)-absolute magnitude locus.

Figure 6. Hess diagram for objects identified as stars in the SDSS pipeline, but as galaxies with high-resolution ACS imaging in the COSMOS footprint (red filled circles). The black points show 0.02% of the final stellar sample used in the present analysis. Note that galaxy contamination is most significant at faint, blue colors; these colors and magnitudes are not probed by our analysis, since such objects lie beyond our 4 × 4 × 4 kpc distance cut.

Thus, the color of a star can be used to estimate its absolute magnitude and, in turn, its distance via the well-known distance modulus:

m_{λ,1} − M_{λ,1} = 5 log10(d) − 5,    (1)

where d is the distance in parsecs, m_{λ,1} is the apparent magnitude in one filter, and m_{λ,1} − m_{λ,2} is the color from two filters, which is used to calculate the absolute magnitude, M_{λ,1}.
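A minimal sketch of the photometric distance estimate implied by Equation (1) follows; the polynomial CMR coefficients used here are placeholders, not the coefficients of Table 4.

import numpy as np

def photometric_distance(r_mag, color, cmr_coeffs):
    """Distance in pc from the apparent r magnitude and a color,
    using M_r = polynomial(color) and the distance modulus
    m - M = 5 log10(d) - 5."""
    M_r = np.polyval(cmr_coeffs, color)          # absolute magnitude from the CMR
    return 10.0 ** ((r_mag - M_r + 5.0) / 5.0)   # distance in parsecs

# Example with hypothetical coefficients for M_r as a cubic in (r - z).
coeffs = [0.04, -0.3, 2.0, 5.0]
print(photometric_distance(19.5, 2.1, coeffs), "pc")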
There have been multiple photometric parallax relations constructed for low-mass stars observed by SDSS (e.g., Williams et al. 2002; West et al. 2005), as shown in Figure 7. (Photometric parallax relations are often referred to as color-magnitude relations; we use both names interchangeably throughout this manuscript.) A sample of stars with well-measured trigonometric parallaxes is required to provide a reliable relation. Fortunately, an observing program led by D. A. Golimowski et al. (2010, in preparation) acquired such observations, and they kindly provided their data prior to publication. The resulting CMRs are used to estimate the absolute magnitude and distance to all the stars in our sample, as described below.
Photometric Telescope Photometry
The nearby star survey (D. A. Golimowski et al. 2010, in preparation) targeted stars with the colors of low-mass dwarfs and precise trigonometric parallaxes. The majority of targets were drawn from the Research Consortium on Nearby Stars (RECONS) catalog (e.g., Henry et al. 1994, 2004; Kirkpatrick et al. 1995). Most of the stars selected from the RECONS sample are within 10 pc, with good parallactic precision (σ_π/π ≲ 0.1). In addition to RECONS targets, the nearby sample included K dwarfs from the Luyten (1979) and Giclas et al. (1971) proper motion surveys. Parallax measurements for these additional stars were obtained from the Hipparcos (ESA 1997) or General Catalogue of Trigonometric Stellar Parallaxes (the "Yale" catalog; van Altena et al. 1995) surveys.
Near-infrared JHK_s photometry was obtained from the 2MASS Point Source Catalog (Cutri et al. 2003). Acquiring ugriz photometry proved more problematic. Since typical SDSS photometry saturates near r ∼ 15, most of the nearby stars were too bright to be directly imaged with the 2.5 m telescope. Instead, the 0.5 m PT was used to obtain photometry of these stars. The PT was active every night the 2.5 m telescope was used in imaging mode during the SDSS, observing patches of the nightly footprint to determine the photometric solution for the night and to calibrate the zero point of the 2.5 m observations (Smith et al. 2002; Tucker et al. 2006). D. A. Golimowski et al. (2010, in preparation) obtained PT photometry of the parallax sample over 20 nights for 268 low-mass stars. The transformations of Tucker et al. (2006) and the Davenport et al. (2007) corrections were applied to the nearby star photometry to transform the "primed" PT photometry to the native "unprimed" 2.5 m system (see Davenport et al. 2007 for more details).
To produce a reliable photometric parallax relation, the following criteria were imposed on the sample. First, stars with large photometric errors (σ > 0.1 mag) in the griz bands were removed. Next, high signal-to-noise 2MASS photometry was selected, by choosing stars with their ph_qual flag equal to "AAA." This flag corresponds to a signal-to-noise ratio >10 and photometric uncertainties <0.1 mag in the JHK s bands. Next, a limit on parallactic accuracy of σ π /π < 0.10 was enforced. It ensured that the bias introduced by a parallax-limited sample, described by Lutz & Kelker (1973), is minimized. Since many of the stars in the nearby star sample have precise parallaxes (σ π /π < 0.04), the Lutz-Kelker correction is essentially negligible (< −0.05; Hanson 1979). Finally, contaminants such as known subdwarfs, known binaries, suspected flares, or white dwarfs were culled from the nearby star sample.
Additional Photometry
To augment the original PT observations, we searched the literature for other low-mass stars with accurate parallaxes and ugriz and JHK_s photometry. The studies of Dahn et al. (2002) and Vrba et al. (2004) supplemented the original sample and provided accurate parallaxes (σ_π/π ≲ 0.1) of late M and L dwarfs. Several of those stars were observed with the SDSS 2.5 m telescope, obviating the need for transformations between the primed and unprimed ugriz systems. Six late M and L dwarfs were added from these catalogs, extending the parallax sample in color from r − i ∼ 2.5 to r − i ∼ 3.0 and in M_r from 16 to 20, and completing our final calibration sample.
Color-Magnitude Relations
Multiple color-absolute magnitude diagrams (CMDs) in the ugriz and JHK_s bandpasses were constructed using the photometry and parallaxes described above. The CMDs were individually inspected, fitting the main sequence with linear, second-, third-, and fourth-order polynomials. Piecewise functions were also tested, placing discontinuities by eye along the main sequence. There is an extensive discussion in the literature of a "break" in the main sequence near spectral type M4 (or V − I ∼ 2.8; see Hawley et al. 1996; Reid & Gizis 1997; Reid & Cruz 2002; Reid & Hawley 2005). Certain colors, such as V − I, show evidence of a break (Figure 10 of Reid & Cruz 2002), while other colors, such as V − K, do not (Figure 9 of Reid & Cruz 2002). We did not enforce a break in our fits. Finally, the rms scatter about the fit for each CMD was computed, and the relation that produced the smallest scatter for each color-absolute magnitude combination was retained. Note that the rms scatter was dominated by the intrinsic width of the main sequence; the adopted relations are listed in Table 4. M_r was used for the absolute magnitude, as the r band contains significant flux for all late-type stars. The r − z color has the longest wavelength baseline and small residual rms scatter (σ ∼ 0.40 mag). Other long-baseline colors (g − r, g − z) are metallicity sensitive (West et al. 2004; Lépine & Scholz 2008), but most of our sample does not have reliable g-band photometry. The adopted photometric parallax relations in these colors did not include any discontinuities, although we note a slight increase in the dispersion of the main sequence around M_r ∼ 12. The final fits are shown in Figures 7, 9, and 10, along with other published photometric parallax relations in the ugriz system.
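The fitting procedure described above can be sketched as follows: for each trial polynomial order, fit the calibration stars and record the rms of the residuals, then keep the order with the smallest scatter. The synthetic calibration data here are placeholders for the nearby-star sample, and (as noted above) the final choice was also checked by visual inspection, since higher orders never increase the in-sample rms.

import numpy as np

def best_polynomial_cmr(color, M_abs, max_order=4):
    """Fit M_abs(color) with polynomials of order 1..max_order and
    return the (order, coefficients, rms) with the smallest rms scatter."""
    best = None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(color, M_abs, order)
        rms = np.sqrt(np.mean((M_abs - np.polyval(coeffs, color)) ** 2))
        if best is None or rms < best[2]:
            best = (order, coeffs, rms)
    return best

# Placeholder calibration sample of (color, absolute magnitude) pairs.
rng = np.random.default_rng(1)
color = rng.uniform(0.5, 3.0, 200)
M_abs = 5.0 + 2.0 * color + 0.3 * color**2 + rng.normal(0.0, 0.3, 200)
order, coeffs, rms = best_polynomial_cmr(color, M_abs)
print(order, np.round(coeffs, 2), round(rms, 3))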
ANALYSIS
Our photometric sample comprises a data set 3 orders of magnitude larger (in number) than any previous LF study (see Table 2). Furthermore, it is spread over 8400 deg², nearly 300 times larger in solid angle than the sample analyzed by Covey et al. (2008). This large sky coverage represents the main challenge in measuring the LF from this sample. Most of the previous studies listed in Table 2 either assumed a uniform density distribution (for nearby stars) or calculated a Galactic density profile, ρ(r), along one line of sight. With millions of stars spread over nearly 1/4 of the sky, numerically integrating Galactic density profiles for each star is computationally prohibitive.
To address this issue, we introduced the following technique for measuring the LF. First, absolute magnitudes were assigned, and distances to each star were computed using the r − z and r − i CMRs from Table 4. Each CMR was processed separately. Next, a small range in absolute magnitude (0.5 mag) was selected, and the stellar density was measured in situ as a function of Galactic radius (R) and Galactic height (Z). This range in absolute magnitude was selected to provide high resolution in the LF, while maintaining a large number of stars (∼10 6 ) in each bin. Finally, a Galactic profile was fitted to the R, Z density maps, solving for the shape of the thin and thick disks, as well as the local density. The LF was then constructed by combining the local density of each absolute magnitude slice.
Stellar Density Maps
To assemble an (R, Z) density map, an accurate count of the number of stars in a given R, Z bin, as well as the volume spanned by each bin, was required. A cylindrical (R, Z, φ) coordinate system was taken as the natural coordinates of stellar density in the Milky Way. In this frame, the Sun's position was set at R⊙ = 8.5 kpc (Kerr & Lynden-Bell 1986) and Z⊙ = 15 pc above the plane (Cohen 1995; Ng et al. 1997; Binney et al. 1997). Azimuthal symmetry was assumed (and was recently verified by Jurić et al. 2008 and found to be appropriate for the local Galaxy). The following analysis was carried out in R and Z. We stress that we are not presenting any information on the φ = 0 plane. Rather, the density maps are summed over φ, collapsing the three-dimensional SDSS volume into a two-dimensional density map. The coordinate transformation from a spherical coordinate system (ℓ, b, and d) to the cylindrical (R, Z) system was performed with the following equations:

R = [(R⊙ − d cos b cos ℓ)² + (d cos b sin ℓ)²]^(1/2),    (2)

Z = Z⊙ + d sin b,    (3)

where d is the distance (as determined by Equation (1) and the (M_r, r − z) CMR), ℓ and b are the Galactic longitude and latitude, respectively, and R⊙ and Z⊙ are the positions of the Sun, as explained above. The density maps were binned in R and Z. The bin width needed to be large enough to contain many stars (to minimize Poisson noise) but small enough to accurately resolve the structure of the thin and thick disks. The R, Z bin size was set at 25 pc. An example of the star counts as a function of R and Z is shown in Figure 11. The volume sampled by each R, Z bin was estimated with the following numerical method. A 4 × 4 × 4 kpc³ cube of "test" points was laid down, centered on the Sun, at uniform intervals 1/10th the R, Z bin size (every 2.5 pc). This grid discretizes the volume, with each point corresponding to a fraction of the total volume. Here, the volume associated with each grid point was k = 2.5³ pc³ point⁻¹ = 15.625 pc³ point⁻¹. The volume of an arbitrary shape is straightforward to calculate: simply count the points that fall within the shape and multiply by k. The α, δ, and distance of each point were calculated and compared to the SDSS volume. The number of test points in each R, Z bin was summed and multiplied by k to obtain the final volume corresponding to that R, Z bin. This process was repeated for each absolute magnitude slice. The maximum and minimum distances were calculated for each absolute magnitude slice (corresponding to the faint and bright apparent magnitude limits of the sample), and only test points within those bounds were counted. The same volume was used for all stars within the sample. The bluer stars in our sample were found at distances beyond 4 kpc, but computing volumes at these distances would be computationally prohibitive. Furthermore, this method minimizes galaxy contamination, which is largest for bluer, faint objects (see Section 2.3). Since the volumes are fully discretized, the error associated with N points is Poisson-distributed. A fiducial example of the volume calculations is shown in Figure 12.
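A sketch of the coordinate transformation and the discretized volume estimate follows; the survey-membership test is left as a stub (a hypothetical placeholder), since the real calculation checks each test point against the SDSS footprint, the R, Z bin boundaries, and the distance limits of the absolute-magnitude slice, and the brute-force loop below is slow at the full 2.5 pc resolution.

import numpy as np

R_SUN, Z_SUN = 8500.0, 15.0               # pc, solar position adopted above

def galactic_RZ(l_deg, b_deg, d_pc):
    """Cylindrical (R, Z) in pc from Galactic (l, b) in degrees and distance in pc."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = R_SUN - d_pc * np.cos(b) * np.cos(l)
    y = d_pc * np.cos(b) * np.sin(l)
    return np.hypot(x, y), Z_SUN + d_pc * np.sin(b)

def discretized_volume(keep, grid_step=2.5, half_size=2000.0):
    """Sum k = grid_step**3 pc^3 over a Sun-centered cube of test points.
    `keep(x, y, z)` is a vectorized boolean membership test (placeholder
    for the real footprint, bin, and distance-limit checks)."""
    axis = np.arange(-half_size, half_size, grid_step)
    yy, zz = np.meshgrid(axis, axis, indexing="ij")
    k = grid_step ** 3
    n_inside = sum(np.count_nonzero(keep(x, yy, zz)) for x in axis)
    return k * n_inside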
After calculating the volume of each R, Z bin, the density (in units of stars pc⁻³) is simply

ρ(R, Z) = N(R, Z) / V(R, Z),    (4)

where N(R, Z) is the star count in each R, Z bin and V(R, Z) is the corresponding volume; the uncertainty (Equation (5)) is the quadrature sum of the fractional Poisson uncertainties in the star counts and in the number of test points used to estimate the volume. Fiducial density and error maps are shown in Figures 13 and 14. Note that the error in Equation (5) is dominated by the first term on the right-hand side (the star-count term). While a smaller k could make the second term less significant, it would be computationally prohibitive to include more test points. We discuss systematic errors which may influence the measured density in Section 5.
Galactic Model Fits
Using the method described above, (R, Z) stellar density maps were constructed for each 0.5 mag slice in M_r, from M_r = 7.25 to M_r = 15.75, roughly corresponding to spectral types M0-M8. The bin size in each map was constant, at 25 pc in the R and Z directions. For R, Z bins with density errors (Equation (5)) of <15%, the following disk density structure was fitted:

ρ(R, Z) = ρ⊙ [ f exp(−(R − R⊙)/R⊙,thin) exp(−(|Z| − Z⊙)/Z⊙,thin) + (1 − f) exp(−(R − R⊙)/R⊙,thick) exp(−(|Z| − Z⊙)/Z⊙,thick) ],    (8)

where ρ⊙ is the local density at the solar position (R⊙ = 8500 pc, Z⊙ = 15 pc), f is the fraction of the local density contributed by the thin disk, R⊙,thin and R⊙,thick are the thin and thick disk scale lengths, and Z⊙,thin and Z⊙,thick are the thin and thick disk scale heights, respectively. Since the density maps are dominated by nearby disk structure, the halo was neglected. Furthermore, Jurić et al. (2008) demonstrated that the halo structure is only important at |Z| > 3 kpc, well outside the volumes probed here. Restricting the sample to bins with density errors <15% ensures that they are well populated by stars, have precise volume measurements, and should accurately trace the underlying Milky Way stellar distribution. Approximately 50% of the R, Z bins have errors ≤15%, while containing >90% of the stars in the sample. The density maps were fitted using Equation (8) and a standard Levenberg-Marquardt algorithm (Press et al. 1992), using the following approach. First, the thin and thick disk scale heights and lengths, and their relative scaling, were measured using 10 absolute magnitude slices, from M_r = 7.25 to 11.75. These relatively more luminous stars yield the best estimates for GS parameters. Including lower-luminosity stars biases the fits, artificially shrinking the scale heights and lengths to compensate for density differences between a small number of adjacent R, Z bins. The scale lengths and heights and their relative normalization were fitted for the entire M_r = 7.25-11.75 range simultaneously. The resulting GS parameters (Z⊙,thin, Z⊙,thick, R⊙,thin, R⊙,thick, f) are listed in Table 5 as raw values, not yet corrected for systematic effects (see Section 5 and Table 6). After the relative thin/thick normalization (f) and the scale heights and lengths of each component are fixed, the local densities were fitted for each absolute magnitude slice, using a progressive sigma clipping method similar to that of Jurić et al. (2008). This clipping technique excludes obvious density anomalies from biasing the final best fit. First, a density model was computed, and the standard deviation (σ) of the residuals was calculated. The R, Z density maps were then refitted, with bins whose density residuals exceeded 50σ excluded. This process was repeated multiple times, with the clipping threshold decreasing through the series 40σ, 30σ, 20σ, 10σ, and 5σ. An example LF, constructed from the local densities of each absolute magnitude slice and derived from the (M_r, r − z) CMR, is shown in Figure 15.
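A sketch of the disk-profile fit is given below, using SciPy's least_squares routine in place of the Levenberg-Marquardt implementation of Press et al.; the model follows the double-exponential form of Equation (8), the starting guesses and bounds are purely illustrative, and the progressive sigma-clipping loop is omitted for brevity.

import numpy as np
from scipy.optimize import least_squares

R_SUN, Z_SUN = 8500.0, 15.0   # pc

def disk_model(params, R, Z):
    """Thin + thick double-exponential disk density (stars pc^-3)."""
    rho0, f, R_thin, Z_thin, R_thick, Z_thick = params
    thin = f * np.exp(-(R - R_SUN) / R_thin - (np.abs(Z) - Z_SUN) / Z_thin)
    thick = (1.0 - f) * np.exp(-(R - R_SUN) / R_thick - (np.abs(Z) - Z_SUN) / Z_thick)
    return rho0 * (thin + thick)

def fit_disk(R, Z, rho, rho_err):
    """Weighted fit of the disk model to binned densities rho(R, Z)."""
    def residuals(p):
        return (disk_model(p, R, Z) - rho) / rho_err
    p0 = [1e-3, 0.9, 2500.0, 250.0, 3500.0, 750.0]   # illustrative starting guess
    bounds = ([0, 0, 100, 50, 100, 100], [1, 1, 2e4, 5e3, 2e4, 5e3])
    return least_squares(residuals, p0, bounds=bounds, method="trf").x

Usage would be params = fit_disk(R, Z, rho, rho_err) on the flattened arrays of bin centers, densities, and errors that pass the <15% error cut.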
SYSTEMATIC CORRECTIONS
The observed LF is subject to systematics imposed by nature, such as unresolved binarity and metallicity gradients, as well as those from the observations and analysis, e.g., Malmquist bias. The systematic differences manifested in different CMRs, which vary according to stellar metallicity, interstellar extinction, and color, are isolated and discussed in Sections 5.1 and 5.2, and the results are used in Section 5.3 to estimate the systematic uncertainties in the LF and GS. Malmquist bias (Section 5.4) and unresolved binarity (Section 5.5) were quantified using Monte Carlo (MC) models. Each model was populated with synthetic stars that were consistent with the observed GS and LF. The mock stellar catalog was analyzed with the same pipeline as the actual observations and the differences between the input and "observed" GS and LF were used to correct the observed values.
Systematic CMRs: Metallicity
A star with low metallicity will have a higher luminosity and temperature compared to its solar-metallicity counterpart of the same mass, as first described by Sandage & Eggen (1959). However, at a fixed color, stars with lower metallicities have fainter absolute magnitudes. Failing to account for this effect artificially brightens low-metallicity stars, increasing their estimated distance. This inflates densities at large distances, increasing the observed scale heights (e.g., King et al. 1990).
Quantifying the effects of metallicity on low-mass dwarfs is complicated by multiple factors. First, direct metallicity measurements of these cool stars are difficult (e.g., Woolf & Wallerstein 2006; Johnson & Apps 2009), as current models do not accurately reproduce their complex spectral features. Currently, measurements of metallicity-sensitive molecular band heads (CaH and TiO) are used to estimate the metallicity of M dwarfs at the ∼1 dex level (see Gizis 1997; Lépine et al. 2003; Burgasser & Kirkpatrick 2006; West et al. 2008), but detailed measurements are only available for a few stars. The effects of metallicity on the absolute magnitudes of low-mass stars are poorly constrained. Accurate parallaxes for nearby subdwarfs do exist (Monet et al. 1992; Reid 1997; Burgasser et al. 2008), but measurements of their precise metal abundances are difficult given the extreme complexity of calculating the opacity of the molecular absorption bands that dominate the spectra of M dwarfs. Observations of clusters with known metallicities could mitigate this problem (Clem et al. 2008; An et al. 2008), but there are no comprehensive observations in the ugriz system that probe the lower main sequence.

Figure 15. Example LF derived from the (M_r, r − z) CMR. Note the smooth behavior, with a peak near M_r ∼ 11, corresponding to a spectral type of ∼M4. The error bars (many of which are smaller than the points) are the formal uncertainties from fitting the local densities in each 0.5 mag absolute magnitude slice.
To test the systematic effects of metallicity on this study, the ([Fe/H], ΔM_r) relation from Ivezić et al. (2008) was adopted. We note that this relation is appropriate for more luminous F and G stars, near the main-sequence turnoff, but it should give a rough estimate of the magnitude offset. The adopted Galactic metallicity gradient (Equation (9)) is linear in Galactic height. At small Galactic heights (Z ≲ 100 pc), this linear gradient produces a metallicity of about [Fe/H] = −0.1, appropriate for nearby, local stars (Allende Prieto et al. 2004). At a height of ∼2 kpc (the maximum height probed by this study), the metallicity is [Fe/H] ∼ −0.65, consistent with measured distributions. The actual metallicity distribution is probably more complex, but given the uncertainties associated with the effects of metallicity on M dwarfs, adopting a more complex description is not justified. The correction to the absolute magnitude, ΔM_r, measured from F and G stars in clusters of known metallicity and distance, is given by Equation (10). Substituting Equation (9) into Equation (10) yields a quadratic equation for ΔM_r in Galactic height. After initially assigning absolute magnitudes and distances with the CMRs appropriate for nearby stars, each star's estimated height above the plane, Z_ini, was computed. This is related to the star's actual height, Z_true, through the nonlinear Equation (11); a star's true height above the plane was calculated by finding the root of this equation. Since ΔM_r is a positive value, the actual distance from the Galactic plane, Z_true, is smaller than the initial estimate, Z_ini. As explained above, this effect becomes important at larger distances, moving stars inward and decreasing the density gradient. Thus, if metallicity effects are neglected, the scale heights and lengths are overestimated. In Figure 16, the systematic effects of metallicity-dependent CMRs are shown. The first is the extreme limit, shown as the red histogram, where all stars in the sample have [Fe/H] ∼ −0.65, corresponding to a ΔM_r of roughly 0.5 mag. All of the stars in the sample are shifted to smaller distances, greatly enhancing the local density. This limit is probably not realistic, as prior LF studies (e.g., Reid & Gizis 1997; Cruz et al. 2007) would have demonstrated similar behavior. The effect of the metallicity gradient given in Equation (9) is shown with the solid blue line. Note that local densities are increased, since more stars are shifted to smaller distances.
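The height correction can be implemented as a one-dimensional root find; the sketch below uses SciPy's brentq with a placeholder ΔM_r(Z) function and an assumed form of the Z_ini-Z_true relation (heights rescaled by the distance factor 10^(−ΔM_r/5)), since the actual coefficients of Equations (9)-(11) are not reproduced here.

import numpy as np
from scipy.optimize import brentq

def delta_Mr(Z_pc):
    """Hypothetical metallicity-driven absolute-magnitude offset (mag):
    0 at the plane, growing to ~0.5 mag by |Z| ~ 2 kpc."""
    return 0.25 * min(abs(Z_pc), 2000.0) / 1000.0

def true_height(Z_ini):
    """Solve Z_true = Z_ini * 10**(-delta_Mr(Z_true)/5) for Z_true,
    an assumed form of the height correction described above."""
    g = lambda Z: Z - Z_ini * 10.0 ** (-delta_Mr(Z) / 5.0)
    return brentq(g, 1.0, abs(Z_ini) + 1.0)

print(true_height(1500.0))   # somewhat smaller than the initial 1500 pc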
Systematic CMRs: Extinction
The extinction and reddening corrections applied to SDSS photometry are derived from the Schlegel et al. (1998) dust maps and an assumed dust law of R V = 3.1 (Cardelli et al. 1989). The median extinction in the sample is A r = 0.09, while 95% of the sample has A r < 0.41. Typical absolute magnitude differences due to reddening range up to ∼1 mag, producing distance corrections of ∼40 pc, enough to move stars between adjacent R, Z bins and absolute magnitude bins. This effect introduces strong covariances between adjacent luminosity bins, and implies that the final LF depends on the assumed extinction law. Most of the stars in our sample lie beyond the local dust column, and the full correction is probably appropriate (Marshall et al. 2006). To bracket the effects of extinction on our analysis, two LFs were computed. The first is the (M r , r −z) LF, which employs the entire extinction correction. The second uses the same CMR, but without correcting for extinction. The two LFs are compared in Figure 17. When the extinction correction is neglected, stellar distances are underestimated, which increases the local density. This effect is most pronounced for larger luminosities. The dominant effect in this case is not the attenuation of light due to extinction, but rather the reddening of stars, which causes the stellar absolute magnitudes to be underestimated.
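The direction of the bias can be read off from the standard photometric-distance bookkeeping; the expression below is generic distance-modulus algebra rather than a formula quoted from the study.

```latex
d \;=\; 10^{\,\left[\,m_r - A_r - M_r\!\left((r-z)_0\right)\,\right]/5\;+\;1}\ \mathrm{pc},
\qquad
(r-z)_0 \;=\; (r-z) - E(r-z).
```

If E(r−z) is ignored, the artificially red color maps onto a fainter M r through the CMR, and the fainter M r shrinks the inferred distance d, which is the reddening effect identified above as dominant over the attenuation term A_r.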
Systematic Uncertainties
The statistical error in a given LF bin is quite small, typically 0.1%, and does not represent a major source of uncertainty in this analysis. The assumed CMR dominates the systematic uncertainty, affecting the shape of the LF and resulting MF. To quantify the systematic uncertainty in the LF and GS, the following procedure was employed. The LF was computed five times using different CMRs: the (M r , r − z) and (M r , r − i) CMRs with and without metallicity corrections, and the (M r , r − z) CMR without correcting for Galactic extinction. The LFs measured by each CMR are plotted in Figure 18, along with the unweighted mean of the five LF determinations. The uncertainty in a given LF mag bin was set by the maximum and minimum of the five test cases, often resulting in asymmetric error bars. This uncertainty was propagated through the entire analysis pipeline using three LFs: the mean, the "maximum" LF, corresponding to the maximum Φ in each magnitude bin, and the "minimum" LF, corresponding to the lowest Φ value. We adopted the mean LF as the observed system LF and proceeded to correct it for the effects of Malmquist bias and binarity, as described below.
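The bracketing procedure amounts to a few lines of array arithmetic. In the sketch below the five LF determinations are assumed to be stored as rows of a NumPy array (the container name lf_variants is an assumption, not part of the described pipeline).

```python
import numpy as np

# Rows of `phi` are the five LF determinations (one per CMR/extinction variant),
# columns are the 0.5 mag absolute-magnitude bins.
phi = np.vstack(lf_variants)

phi_mean = phi.mean(axis=0)   # adopted system LF
phi_max = phi.max(axis=0)     # "maximum" LF, per magnitude bin
phi_min = phi.min(axis=0)     # "minimum" LF, per magnitude bin

err_hi = phi_max - phi_mean   # upper error bar (often asymmetric)
err_lo = phi_mean - phi_min   # lower error bar
```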
Monte Carlo Models: Malmquist Bias
Malmquist bias (Malmquist 1936) arises in flux-limited surveys (such as SDSS), when distant stars with brighter absolute magnitudes (either intrinsically, from the width of the main sequence, or artificially, due to measurement error) scatter into the survey volume. These stars have their absolute magnitudes systematically overestimated (i.e., they are assigned fainter absolute magnitudes than they actually possess), which leads to underestimated intrinsic luminosities. Thus, their distances will be systematically underestimated. This effect artificially shrinks the observed scale heights and inflates the measured LF densities. Assuming a Gaussian distribution about a "true" mean absolute magnitude M∘, classical Malmquist bias is given by M̄(m) = M∘ − σ² [d ln A(m)/dm], where σ is the spread in the main sequence (or CMR), d ln A(m)/dm is the logarithmic slope of the star counts A(m) as a function of apparent magnitude m, and M̄(m) is the observed mean absolute magnitude. Qualitatively, M̄(m) is always less than M∘ (assuming the slope is positive), meaning that the observed absolute magnitude distribution is skewed toward more luminous objects. Malmquist bias effects were quantified by including dispersions in absolute magnitude of σ M r = 0.3 and σ M r = 0.5 mag and a color dispersion of σ r−z,r−i = 0.05 mag. These values were chosen to bracket the observed scatter in the color-magnitude diagrams (see Table 4). The LF measured with the Malmquist bias model is shown in Figure 19. The correction is important for most of the stars in the sample, especially the brightest stars (M r < 10). Stars at this magnitude and color (r − i ∼ 0.5, r − z ∼ 1; see Figure 4) are very common in the SDSS sample because they span a larger volume than lower-luminosity stars. Thus, they are more susceptible to having over-luminous stars scattered into their absolute magnitude bins. However, the dominant factor that produced the differences between the raw and corrected LFs was the value of the thin disk scale height.
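For orientation, the size of the offset can be estimated in the idealized case of a uniform stellar density, where d log A(m)/dm = 0.6 and hence d ln A(m)/dm = 0.6 ln 10 ≈ 1.38. This is a textbook special case used only to set the scale, not a calculation reported in the study; with the dispersions quoted above it gives

```latex
\bar{M}(m) - M_\circ \;=\; -\,\sigma^2\,\frac{d\ln A(m)}{dm}
\;\approx\; -1.38\,\sigma^2 \;=\;
\begin{cases}
-0.12\ \mathrm{mag}, & \sigma = 0.3\ \mathrm{mag},\\[2pt]
-0.35\ \mathrm{mag}, & \sigma = 0.5\ \mathrm{mag}.
\end{cases}
```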
Monte Carlo Models: Unresolved Binarity
For all but the widest pairs, binaries in our sample will masquerade as a single star. The unresolved duo will be overluminous at a given color, leading to an underestimate of its distance. This compresses the density maps, leading to decreased scale heights and lengths, as binary systems are assigned smaller distances appropriate to single stars.
Currently, the parameter space that describes M dwarf binaries (binary fraction, mass ratio, and average separation) is not well constrained. However, there are general trends that are useful for modeling their gross properties. First, the binary fraction (f b ) seems to steadily decline from ∼50% for F and G stars (Duquennoy & Mayor 1991) to about 30% for M dwarfs (Fischer & Marcy 1992;Delfosse et al. 2004;Lada 2006;Burgasser et al. 2007). Next, the mass ratio distribution becomes increasingly peaked toward unity at lower masses. That is, F and G stars are more likely to have a companion from a wide range of masses, while M dwarfs are commonly found with a companion of nearly the same mass, when the M dwarf is the primary (warmer) star (Burgasser et al. 2007). The average separation distribution is not well known, but many companions are found with separations of ∼10-30 AU (Fischer & Marcy 1992), while very low mass stars have smaller average separations (Burgasser et al. 2007). At the typical distances probed by the SDSS sample (hundreds of pc) these binary systems would be unresolved by SDSS imaging with an average PSF width of 1.4 arcsec in r.
We introduced binaries into our simulations with four different binary fraction prescriptions. The first three (f b = 30%, 40%, and 50%) are independent of primary star mass. The fourth binary fraction follows the methodology of Covey et al. (2008) and is a primary-star mass-dependent binary fraction, given by a linear function of M p , the mass of the primary star, estimated using the Delfosse et al. (2000) mass-luminosity relations. This linear equation reflects the crude observational properties described above for stars with M p < 0.7 M☉. Near 1 M☉, the binary fraction is ∼50%, while at smaller masses, the binary fraction falls to ∼30%. Secondary stars are forced to be less massive than their primaries. This is the only constraint on the mass ratio distribution. An iterative process, similar to that described in Covey et al. (2008), is employed to estimate the binary-star population. First, the mean observed LF from Figure 18 is input as a primary-star LF (PSLF). A mock stellar catalog is drawn from the PSLF, and binary stars are generated with the prescriptions described above. Next, the flux from each pair is merged, and new colors and brightnesses are calculated for each system. Scatter is introduced in color and absolute magnitude, as described in Section 5.4. The stellar catalog is analyzed with the same pipeline as the data, and the output model LF is compared to the observed LF. The input PSLF is then tweaked according to the differences between the observed system LF and the model system LF. This loop is repeated until the artificial system LF matches the observed system LF. Note that the GS parameters are also adjusted during this process, and the bias-corrected values are given in Table 6. The thin disk scale height, which has a strong effect on the derived LF, is in very good agreement with previous values. As the measured thin disk scale height increases, the density gradients decrease, and a smaller local density is needed to explain distant structures. This change is most pronounced at the bright end, where the majority of the stars are many thin disk scale heights away from the Sun (see Figure 19). The preferred model thin disk and thick disk scale lengths were found to be similar. This is most likely due to the limited radial extent of the survey compared to their typical scale lengths. Upcoming IR surveys of disk stars, such as APOGEE (Allende Prieto et al. 2008), should provide more accurate estimates of these parameters.
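The iterative correction can be summarized in schematic Python. The flux-merging step is standard magnitude arithmetic; everything else (the helper names draw_mock_catalog, add_binaries, run_pipeline, and the multiplicative update of the PSLF) is an assumed stand-in for the steps described in the text, not the authors' actual code.

```python
import numpy as np

def merge_mags(m1, m2):
    # Combined apparent magnitude of an unresolved pair (standard flux addition).
    return -2.5 * np.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

def infer_single_star_lf(observed_system_lf, n_iter=20):
    # Schematic version of the iterative loop described in the text.
    pslf = observed_system_lf.copy()
    for _ in range(n_iter):
        catalog = draw_mock_catalog(pslf)        # sample stars from the current PSLF
        catalog = add_binaries(catalog)          # apply f_b prescription, merge_mags, add scatter
        model_system_lf = run_pipeline(catalog)  # same analysis pipeline as the data
        pslf *= observed_system_lf / model_system_lf   # one simple way to "tweak" the PSLF
    return pslf
```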
SDSS observations form a sensitive probe of the thin disk and thick disk scale heights, since the survey focused mainly on the northern Galactic cap. Our estimates suggest a larger thick disk scale height and smaller thick disk fraction than recent studies (e.g., Siegel et al. 2002;Jurić et al. 2008). However, these two parameters are highly degenerate (see Figure 1 of Siegel et al. 2002). In particular, the differences between our investigation and the Jurić et al. (2008) study highlight the sensitivity of these parameters to the assumed CMR and density profiles, as they included a halo in their study and we did not. Their study also sampled larger distances than our work, which may affect the resulting Galactic parameters. However, the smaller normalization found in our study is in agreement with recent results from a kinematic analysis of nearby M dwarfs with SDSS spectroscopy (J. S. Pineda et al. 2010, in preparation). They find a relative normalization of ∼5%, similar to the present investigation.
[Figure 20 caption: Single-star (red filled circles) and system (black filled circles) LFs. Note that the major differences between our system and single-star LFs occur at low luminosities, since low-mass stars can be companions to stars of any higher mass, including masses above those sampled here. A color version of this figure is available in the online journal.]
The discrepancy in scale height
highlights the need for additional investigations into the thick disk and suggests that future investigations should be presented in terms of the stellar mass contained in the thick disk, not scale height and normalization. The iterative process described above accounts for binary stars in the sample and allows us to compare the system LF and single-star LF in Figure 20. Most observed LFs are system LFs, except for the local volume-limited surveys. However, most theoretical investigations into the IMF predict the form of the single-star MF. Note that for all binary prescriptions, the largest differences between the two LFs are seen at the faintest M r , since the lowest-luminosity stars are most easily hidden in binary systems.
RESULTS: LUMINOSITY FUNCTION
The final adopted system and single-star M r LFs are presented in Figure 21. The LFs were corrected for unresolved binarity and Malmquist bias. The uncertainty in each bin is computed from the spread due to CMR differences, binary prescriptions, and Malmquist corrections. The mean LFs and uncertainties are listed in Tables 7 and 8. The differences between the single and system LFs are discussed below and compared to previous studies in both M r and M J .
Single-star versus System Luminosity Functions
Figure 21 demonstrates a clear difference between the single-star LF and the system LF. The single-star LF rises above the system LF near the peak at M r ∼ 11 (or a spectral type ∼M4) and maintains a density about twice that of the system LF. [Table note: densities are reported in units of (stars pc⁻³ 0.5 mag⁻¹) × 10⁻³.] This implies that lower-luminosity stars are easily hidden in binary systems, but isolated low-luminosity systems
are intrinsically rare. The agreement between the system and single-star LFs at high luminosities is a byproduct of our binary prescription, which enforced that a secondary be less massive than its primary-star counterpart. Since our LF does not extend to higher masses (G and K stars), we may be missing some secondary companions to these stars, which would inflate the single-star LF at high luminosities. However, only ∼700,000 G dwarfs are present in the volume probed by this study. Even if all of these stars harbored an M-dwarf binary companion, the resulting differences in a given bin would only be a fraction of a percent.
M r LF
Since many traditional LF studies have not employed the r band, our ability to compare our work to previous results is hampered. The most extensive study of the M r LF was conducted by Jurić et al. (2008), using 48 million photometric
SDSS observations, over different color ranges. Figure 19 compares the M r system LF determined here to the "joint fit, bright parallax" results of Jurić et al. (2008, their Table 3), assuming 10% error bars. The two raw system LFs broadly agree statistically, although the Jurić et al. (2008) work only probes to M r ∼ 11, due to their red limit of r − i ∼ 1.4. We compare system LFs, since the Jurić et al. (2008) study did not explicitly compute an SSLF and their reported LF was not corrected for Malmquist bias.
M J LF
We next converted M r to M J using relations derived from the calibration sample described above. The J filter has traditionally been used as a tracer of mass (Delfosse et al. 2000) and bolometric luminosity (Golimowski et al. 2004) in low-mass stars, since it samples the spectral energy distribution (SED) near its peak. The largest field LF investigation to date, Covey et al. (2008), determined the J-band LF from M J = 4 to M J = 12. In Figure 22, our transformed system M J LF (given in Table 9) is plotted with the M J LF from Covey et al. (2008). The shape of these two LFs agrees quite well, both peaking near M J = 8, although there appears to be a systematic offset, in that our M J LF is consistently lower than the one from the Covey et al. (2008) study. This is most likely due to the different CMRs employed by the two studies. Covey et al. (2008) used an (M i , i − J ) CMR, as opposed to the various CMRs employed in the current study. Figure 22 also compares our single-star M J LF (Table 10) to the LF of primaries and secondaries, first measured by Reid & Gizis (1997) and updated by Cruz et al. (2007). These stars are drawn from a volume-complete sample with d < 8 pc. A total of 146 stars in 103 systems are found within this limit. The distances for the stars in this volume-complete sample are primarily found from trigonometric parallaxes, with <5% of the stellar distances estimated by spectral type. There is reasonable agreement between our M J LF and the LF from the volume-complete sample, within the estimated uncertainties, indicating that the photometric and volume-complete methods now give similar results. Furthermore, it indicates that our assumed CMRs are valid for local, low-mass stars. Finally, the agreement between the single-star LFs validates our assumed corrections for unresolved binarity.
Figure 22. Left panel: M J system LF. We compare our system LF (red filled circles) to the system LF measured by Covey et al. (2008, blue triangles). Our system LF and the Covey et al. (2008) results agree reasonably well. Right panel: M J single-star LF. We compare our LF for single stars (green filled circles and dashed line) to the single-star LF measured by Reid et al. (2002, open squares). The single-star LFs also agree within the uncertainties, except for a few bins, resolving previous discrepancies between photometric and volume-complete samples. (A color version of this figure is available in the online journal.)
RESULTS: MASS FUNCTION
The MF was calculated from the M J LFs and the mass-luminosity (M J ) relation from Delfosse et al. (2000). We computed both a single-star MF and a system MF. As discussed in Covey et al. (2008), some past discrepancies between MFs are probably due to comparing analytic fits and the actual MF data. This effect is discussed below, where we compare our results to available MF data from nearby (e.g., Reid & Gizis 1997) and distant (e.g., Zheng et al. 2001) samples. We also compare our analytic fits to seminal IMF studies.
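The step from an LF to an MF is a change of variables through the mass-luminosity relation. Written generically (this is the standard Jacobian bookkeeping; the binned implementation in the paper may differ in detail, and the specific M J(M) relation is the one from Delfosse et al. 2000):

```latex
\psi(M)\,dM \;=\; \Phi\!\left(M_J\right)\,dM_J
\quad\Longrightarrow\quad
\psi(M) \;=\; \Phi\!\left(M_J(M)\right)\,\left|\frac{dM_J}{dM}\right| ,
```

where ψ(M) dM is the number density of stars in the mass interval [M, M + dM] and Φ(M J) dM J is the corresponding luminosity function.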
The single-star and system MFs are shown in Figure 23. As seen in the LFs, there is agreement between the two relations at higher masses. At masses less than 0.5 M☉, the shapes of the MFs are roughly equivalent, but the single-star density is roughly twice that of the systems. We note a small possible correction to the lowest-mass bin (log M/M☉ = −0.95). At these masses, young brown dwarfs, with ages less than 1 Gyr and masses near M ∼ 0.075 M☉, will have luminosities similar to late-type M dwarfs. Since these objects are not stellar, they should be removed from our MF. Assuming a constant star formation rate, Chabrier (2003a) estimated that brown dwarfs contribute ∼10% of the observed densities at the faintest absolute magnitudes (or lowest masses). Recent studies of nearby, young M dwarfs (e.g., Shkolnik et al. 2009) show that ∼10% have ages less than 300 Myr, further supporting the presence of young brown dwarfs in our sample. Thus, a correction of 10% to the lowest-mass bin would account for young brown dwarfs (see Figure 23). We do not apply the correction, but include its impact on the uncertainty of the last MF bin. In Figures 24 and 25, we display the lognormal and broken power-law fits to the system and single-star MFs. While the broken power law is preferred by many observers (Kroupa 2002), the lognormal formalism has been popularized by some theorists (e.g., Padoan et al. 1997;Hennebelle & Chabrier 2008). Our MF data are best fitted by a lognormal distribution, as confirmed by an F-test. We suggest using the lognormal form when comparing to previous MF fits, but stress that comparisons using the actual MF data (Tables 11 and 12) are preferred.
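For reference, the two functional families being compared are conventionally written as below. These are generic forms meant only to fix notation; the exact parameterization and the fitted parameters are those reported in Tables 13 and 14.

```latex
\psi(M) \;\propto\; \exp\!\left[-\,\frac{\left(\log_{10} M - \log_{10} M_c\right)^{2}}{2\sigma^{2}}\right]
\quad\text{(lognormal)},
\qquad
\psi(M) \;\propto\; M^{-\alpha_i}\ \text{on each mass segment}
\quad\text{(broken power law)}.
```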
[Figure 24 caption (fragment, referencing Table 13): The MF data (open squares) and power-law fit from Zheng et al. (2001, light dashed line) are also shown. The agreement between our data and the Zheng et al. (2001) data is good at lower masses, but the data diverge at higher masses.]
[Figure 25 caption (fragment, referencing Table 14): The MF data (open squares) and single power-law fit from Reid & Gizis (1997, light dashed line) are also shown. The data are in reasonable agreement, with discrepancies larger than the error bars in only two bins.]
The system MF is compared to the pencil-beam survey of Zheng et al. (2001) MF in Figure 24. Their data were acquired with the HST, and the stars in their sample are at distances similar to this study. The Zheng et al. (2001) study sparked some discussion in the literature (e.g., Chabrier 2003a, 2003b), as their results differed dramatically from the nearby star
sample (e.g., Reid & Gizis 1997). The proposed solution was unresolved binarity. Binary systems would be resolved easily at small distances, but not at the larger distances probed by the HST sample. We compare our system MF to the Zheng et al. (2001) sample and find agreement over a large range of masses (M < 0.4 M☉). At larger masses (M ∼ 0.4 M☉), the MFs diverge. This can most likely be attributed to differing CMRs, in particular differences in the corrections for stellar metallicity gradients.
Our single-star MF is compared to the nearby star sample (Reid & Gizis 1997) in Figure 25. The MFs agree remarkably well, with discrepancies in only two bins (likely the result of small numbers in the nearby star sample). This indicates that our CMRs and methodology are valid, since the output densities are in agreement. It also suggests that our binary corrections are appropriate. [Table note: densities are reported in units of (stars pc⁻³ 0.1 log M⁻¹) × 10⁻³.] We compare our single-star LF to seminal IMF analytic fits in Figure 26 (see Table 1). While we advocate the comparison of MF data whenever possible (as discussed in Covey et al. 2008), it is informative to compare our results to these studies. The Kroupa (2002) and Chabrier (2003a) studies demonstrate the best agreement with our data at masses M < 0.4 M☉, but diverge at higher masses, predicting larger space densities than we infer here. The disagreement of the Miller & Scalo (1979) MF with the other three MFs suggests an issue with their normalization.
The IMF in other Mass Regimes
A single analytic description of the IMF over a wide range in mass may not be appropriate. Figure 27 shows the derived MFs from this study, and those from the Reid & Gizis (1997) sample and the Pleiades (Moraux et al. 2004). The lognormal fit from this study is extended to higher masses, and it clearly fails to match the Pleiades MF. Therefore, it is very important to only use the analytic fits over the mass ranges where they are appropriate. Extending analytic fits beyond their quoted bounds can result in significant inaccuracies in the predicted number of stars.
Theoretical Implications of the IMF
Any successful model of star formation must accurately predict the IMF. The measured field MF traces the IMF of low-mass stars averaged over the star formation history of the Milky Way. Thus, the field MF is not a useful tool for investigating changes in the IMF due to physical conditions in the star-forming regions, such as density or metallicity. However, it does lend insight into the dominant physical processes that shape the IMF. Recent theoretical investigations (see Elmegreen 2007, and references therein) have mainly focused on three major mechanisms that would shape the low-mass IMF: turbulent fragmentation, competitive accretion and ejection, and thermal cooling arguments.
Turbulent fragmentation occurs when supersonic shocks compress the molecular gas (Larson 1981;Padoan et al. 2001). Multiple shocks produce filaments within the gas, with properties tied to the shock properties. Clumps then form along these filaments and collapse ensues. In general, the shape of the IMF depends on the Mach number and power spectrum of shock velocities (Ballesteros-Paredes et al. 2006;Goodwin et al. 2006) and the molecular cloud density (Padoan & Nordlund 2002). Turbulence readily produces a clump distribution similar to the ubiquitous Salpeter IMF at high masses (>1 M☉). However, the flattening at lower masses is reproduced if only a fraction of clumps are dense enough to form stars (Moraux et al. 2007).
An alternative model to turbulent fragmentation is accretion and ejection (Bate & Bonnell 2005). Briefly, small cores form near the opacity limit (∼0.003 M☉), which is set by cloud composition, density, and temperature. These clumps proceed to accrete nearby gas. Massive stars form near the center of the cloud's gravitational potential, thus having access to a larger gas reservoir. Accretion ends when the nascent gas is consumed or the accreting object is ejected via dynamical interactions. The characteristic mass is set by the accretion rate and the typical timescale for ejection, with more dense star-forming environments producing more low-mass stars. This method has fallen out of favor recently, as brown dwarfs have been identified in weakly bound binaries (Luhman 2004b;Luhman et al. 2009), which should be destroyed if ejection is a dominant mechanism. Furthermore, if ejection is important, the spatial density of brown dwarfs should be higher near the outskirts of a cluster compared to stars, and this is not observed in Taurus (Luhman 2004a, 2006) or Chamaeleon (Joergens 2006).
Larson (2005) suggested that thermal cooling arguments are also important in star formation. This argument has gained some popularity, as it predicts a relative insensitivity of the IMF to initial conditions, which is supported by many observations (e.g., Kroupa 2002;Moraux et al. 2007;Bastian et al. 2010;and references therein). This insensitivity is due to changes in the cooling rate with density. At low densities, cooling is controlled by atomic and molecular transitions, while at higher densities, the gas is coupled with dust grains, and these dust grains dominate the cooling. The result is an equation of state with cooling at low densities and a slight heating term at high densities. This equation of state serves as a funneling mechanism and imprints a characteristic mass on the star formation process, with little sensitivity to the initial conditions.
The general shape of the IMF has been predicted by star formation theories that account for all of these effects (Chabrier 2003a, 2005). In particular, the high-mass IMF is regulated by the power spectrum of the turbulent flows (Padoan & Nordlund 2002;Hennebelle & Chabrier 2009) and is probably affected by the coagulation of less massive cores, while the flatter, low-mass distribution can be linked to the dispersions in gas density and temperatures (Moraux et al. 2007;Bonnell et al. 2006). As the IMF reported in this study is an average over the star formation history of the Milky Way, changes in the characteristic shape of the IMF cannot be recovered. However, our observational IMF can rule out star formation theories that do not show a flattening at low masses, with a characteristic mass ∼0.2 M☉. Recent numerical simulations have shown favorable agreement with our results (Bate 2009); however, most numerical simulations of star formation are restricted in sample size and suffer significant Poisson uncertainties. Analytical investigations of the IMF (Hennebelle & Chabrier 2008) are also showing promising results, reproducing characteristic masses ∼0.3 M☉ and lognormal distributions in the low-mass regime.
CONCLUSIONS
We have assembled the largest set of photometric observations of M dwarfs to date and used it to study the low-mass stellar luminosity and mass functions. Previous studies were limited by sample size.
[Table 14: Single Mass Function Analytic Fits. Table note (fragment): ... for the lognormal fit and ψ(M) ∝ M −α for the power-law fit.]
The precise photometry of the SDSS allowed us to produce a clean, complete sample of M dwarfs, nearly 2 orders of magnitude larger than other studies. To accurately estimate the brightness and distances to these stars, we constructed new photometric parallax relations from data kindly provided to us prior to publication (D. A. Golimowski et al. 2010, in preparation). These relations were derived from ugrizJHK s photometry of nearby stars with known trigonometric parallax measurements. We compared our new relations to those previously published for SDSS observations.
We also introduced a method for measuring the LF within large surveys. Previous LF investigations either assumed a Galactic profile (for pencil-beam surveys, such as Zheng et al. 2001) or a constant density (for nearby stars, i.e., Reid & Gizis 1997). However, none of these samples have approached the solid angle or the number of the stars observed in this study. We solved for the LF and GS simultaneously, using a technique similar to Jurić et al. (2008). Our LF is measured in the r band. Using multiple CMRs, we investigated systematic errors in the LF and computed the effects of Malmquist bias, unresolved binarity, and GS changes using MC models. This allowed us to compare our results both to distant LF studies (which sampled mostly the system LF) and nearby star samples (which can resolve single stars in binary systems).
Finally, we computed MFs for single stars and systems. Low-luminosity stars are more common in the single-star MF, since they can be companions to any higher-mass star. We fitted both MFs with a broken power law, a form preferred by Kroupa (2002), and a lognormal distribution, which is favored by Chabrier (2003a). The lognormal distribution at low masses seems to be ubiquitous: it is evident both in the field (this study; Chabrier 2003a) and in open clusters, such as Blanco 1 (Moraux et al. 2007), the Pleiades (Moraux et al. 2004), and NGC 6611 (Oliveira et al. 2009). The best fits for this study are reported in Tables 13 and 14. We stress the point first made in Covey et al. (2008) that comparing MF data is preferable to comparing analytic fits, since the latter are often heavily swayed by slight discrepancies among the data. We also caution the reader against extrapolating our reported MF beyond 0.1 M☉ < M < 0.8 M☉, the masses that bound our sample. In the future, we plan to investigate the LF in other SDSS bandpasses, such as i and z. Our system and single-star MFs represent the best current values for this important quantity for low-mass stars. | v2
2018-06-12T00:50:26.270Z | 2018-01-01T00:00:00.000Z | 46992859 | s2orc/train | Determinants of persistent asthma in young adults
ABSTRACT Objective: The aim of the study was to evaluate determinants for the prognosis of asthma in a population-based cohort of young adults. Design: The study was a nine-year clinical follow up of 239 asthmatic subjects from an enriched population-based sample of 1,191 young adults, aged 20–44 years, who participated in an interviewer-administered questionnaire and clinical examination at baseline in 2003–2006. From the interview, an asthma score was generated as the simple sum of affirmative answers to five main asthma-like symptoms in order to analyse symptoms of asthma as a continuum. The clinical examination comprised spirometry, bronchial challenge or bronchodilation, and skin prick test. Results: Among the 239 individuals with asthma at baseline 164 (69%) had persistent asthma at follow up, while 68 (28%) achieved remission of asthma and seven (3%) were diagnosed with COPD solely. Determinants for persistent asthma were use of medication for breathing within the last 12 months: Short-acting beta-adrenoceptor agonists (SABA) only (OR 3.39; 95%CI: 1.47–7.82) and inhaled corticosteroids (ICS) and/or long-acting beta-adrenoceptor agonists (LABA) (8.95; 3.87–20.69). Stratified by age of onset determinants for persistence in individuals with early-onset asthma (age less than 16 years) were FEV₁ below predicted (7.12; 1.61–31.50), asthma score at baseline (2.06; 1.15–3.68) and use of ICS and/or LABA within 12 months (9.87; 1.95–49.98). In individuals with late-onset asthma the determinant was use of ICS and/or LABA within 12 months (6.84; 2.09–22.37). Conclusions: Pulmonary function below predicted, severity of disease expressed by asthma score and use of ICS and/or LABA were all determinants for persistent early-onset asthma, whereas only use of ICS and/or LABA was a determinant in late-onset asthma. A high asthma score indicated insufficient disease control in a substantial proportion of these young adults.
Introduction
Asthma is a common complex respiratory disorder with various overlapping phenotypes [1,2]. Common features include fluctuating respiratory symptoms associated with variable airflow limitation and bronchial hyperresponsiveness (BHR) due to inflammation of the airways. The age of asthma onset is an important factor for dividing the phenotypes and a major determinant of the prognosis [3][4][5], but the prognosis for adult-onset asthma is only sparsely documented [6]. In a prospective study of individuals with adult-onset asthma, higher age, higher body mass index (BMI) and low lung function were associated with greater asthma severity, while non-sensitisation and a normal lung function were predictors for remission [7]. A review has shown that adult-onset asthma has a worse prognosis and a lower response to standard asthma treatment than childhood-onset asthma [8]. In a 12-year follow-up study of adult-onset asthma, elevated BMI at baseline, smoking and current allergic or persistent rhinitis predicted uncontrolled asthma, whereas elevated blood eosinophils and good lung function (FEV₁) at baseline protected from uncontrolled asthma [9]. Remission rates vary widely due to varying definitions of asthma and observation time, but generally early-onset asthma has a substantially higher remission rate than late-onset asthma [10].
Despite reported remission of asthma, the disease is usually considered a treatable, but not curable, disease once present [11]. The understanding of determinants that affect the course of diagnosed asthma, e.g. avoidance of environmental or occupational exposures [12], is therefore important for tertiary prevention, since asthma persistence is associated with frequent and severe symptoms with development of impaired lung function [13].
In a recent publication, we have reported risk factors for incident asthma in a cohort of young adults [14]. The aim of the present study was to evaluate determinants for the prognosis of asthma in the same cohort.
Methods
The present study was a 9-year clinical follow up of 239 individuals with asthma from an enriched population-based sample of 1,191 young adults who participated in an interviewer-administered questionnaire and clinical examination at baseline in 2003-2006, the RAV-study (Risk Factors for Asthma in Adults). The protocol was based on the European Community Respiratory Health Survey II (ECRHS II) [15] and the baseline study has been reported elsewhere [16]. In brief, the baseline study population comprised a random sample of 10,000 individuals, aged 20-44 years and standardised by sex and age. Among 7,271 (73%) individuals who answered a screening questionnaire (Phase 1), a random sample corresponding to 20% of the study population plus a complementary symptom group of individuals reporting respiratory symptoms were invited to an interview and clinical examination (Phase 2). Of the 1,191 subjects who participated in the clinical examination at baseline, 424 had asthma. A total of 742 (62%) individuals were re-examined at follow up in 2012-2014, leaving 239 subjects with asthma at baseline for further analysis.
Ethical approval for the study was obtained by The Regional Scientific Ethical Committee for Southern Denmark and written informed consent was obtained from all participants.
Interview
The interview at baseline and follow up was a slightly modified electronic ECRHS main questionnaire performed in connection to the clinical examination by skilled interviewers trained in standardised interview technique. The interview comprised items on asthma history, asthma-like symptoms, medication, smoking habits, education and occupation.
Clinical examination
The clinical examination at baseline comprised a spirometry using a MicroLoop Spirometer (Micro Medical, Rochester, UK) and a skin prick test (SPT). The spirometry was followed by a methacholine challenge test using a Mefar MB3 dosimeter (Mefar, Bovezzo, Italy) or bronchodilation by inhalation of Terbutalin, 1.5 mg if forced expiratory volume in 1 s (FEV₁) was <70% of predicted or <1.5 l. The SPT comprised a panel of 13 commercially available inhalation allergens from ALK-Abelló, Gentofte, Denmark.
The spirometry at follow up was carried out by using an EasyOne Spirometer (ndd Medical Technologies, Andover, MA, USA) followed by bronchodilation by inhalation of Salbutamol, 0.2 mg from spacer (AeroChamber Plus Flow-Vu) following the ERS guidelines for standardisation of spirometry [17].
Diagnoses
Asthma at baseline was defined by an affirmative answer to the question 'Have you ever had asthma?' combined with asthma-like symptoms, use of medication for breathing within the last 12 months or airflow obstruction according to a modified definition used by de Marco et al. in a recent study [18] (Table S1). Obstruction was defined according to the lower limit of normal (LLN) [19], i.e. the 5th percentile of the FEV₁/FVC distribution, corresponding to a z-score <−1.64. At baseline, the maximum values of FEV₁ and FVC were applied without reversibility testing since a methacholine challenge test was performed. At follow up, the maximum value was the best of either the pre-bronchodilator (pre-BD) or post-bronchodilator (post-BD) value. COPD was defined according to criteria of LLN combined with symptoms consistent with COPD, modified from de Marco et al. [18] (Table S1). Transient airflow obstruction was defined by obstruction at baseline but no obstruction at follow up, while fixed obstruction was defined by having post-BD obstruction at follow up. Incident cases of asthma and COPD during the follow-up period were identified by applying the definitions used at baseline, although slightly modified since BHR was not measured at follow up (Table S1). Asthma-COPD overlap syndrome (ACOS) was defined when criteria for both asthma and for COPD were met. Early-onset asthma was defined when age of first attack of asthma was less than 16 years and late-onset when 16 years or more. Remission of asthma was defined by not fulfilling the criteria for asthma at follow up.
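The spirometric piece of these definitions reduces to a z-score threshold. The sketch below shows only that piece; it assumes FEV₁/FVC z-scores have already been computed from reference equations and omits the symptom and medication criteria of Table S1, so it is a simplified illustration rather than the study's full algorithm.

```python
LLN_Z = -1.64  # 5th percentile of the FEV1/FVC z-score distribution

def airflow_obstruction(fev1_fvc_zscore):
    # Obstruction by the lower limit of normal: z-score below the 5th percentile.
    return fev1_fvc_zscore < LLN_Z

def fixed_obstruction(post_bd_zscore_at_followup):
    # Fixed obstruction: obstruction persists after bronchodilation at follow up.
    return airflow_obstruction(post_bd_zscore_at_followup)

def transient_obstruction(zscore_baseline, best_zscore_followup):
    # Obstructed at baseline but not at follow up.
    return airflow_obstruction(zscore_baseline) and not airflow_obstruction(best_zscore_followup)
```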
Determinants
BHR at baseline was defined as a 20% fall or more in FEV₁ after a dose of 1 mg methacholine or less. The bronchodilation was positive with an increase in FEV₁ of ≥12% and ≥200 ml. FEV₁ below predicted was defined by FEV₁ <100% predicted, corresponding to a z-score <0 [19]. Atopy was defined by one or more positive SPT (mean wheal diameter ≥3 mm). The type of medication used for breathing within the last 12 months was recorded and categorised into three levels: (1) no medication, (2) only short-acting beta-adrenoceptor agonists (SABA), and (3) inhaled corticosteroids (ICS) and/or long-acting beta-adrenoceptor agonists (LABA). The applied five-item asthma score was developed by Pekkanen, Sunyer et al. [20,21] and consisted of the simple sum of affirmative answers to five main asthma-like symptoms, ranging from zero to five, not including questions regarding asthma attacks or asthma medication, in order to grade symptoms of asthma as a continuum (Table S2). Current smoking at baseline was defined in individuals who reported smoking for at least one year and were still smoking. Occupation at baseline, reported as the last held job, was coded according to the International Standard Classification of Occupations from 1988 (ISCO88) and classified into high- and low-risk jobs for asthma and COPD, respectively, using job grouping tools formerly applied in the ECRHS [22]. Furthermore, participants were categorised as white- and blue-collar workers by using ISCO88 codes <6000 and ≥6000, respectively.
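These definitions translate directly into a few classification rules. The sketch below encodes them with hypothetical variable names of my choosing; the thresholds (five-item sum, ≥12% and ≥200 ml bronchodilator response, three medication levels) are taken from the text.

```python
def asthma_score(answers):
    # Simple sum of affirmative answers to the five symptom questions (Table S2).
    assert len(answers) == 5
    return sum(bool(a) for a in answers)

def positive_bronchodilation(fev1_pre_ml, fev1_post_ml):
    # Positive response: FEV1 increase of at least 12% and at least 200 ml.
    gain = fev1_post_ml - fev1_pre_ml
    return gain >= 200 and gain >= 0.12 * fev1_pre_ml

def medication_level(used_saba, used_ics_or_laba):
    # 0 = no medication, 1 = SABA only, 2 = ICS and/or LABA (last 12 months).
    if used_ics_or_laba:
        return 2
    return 1 if used_saba else 0
```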
Statistical analyses
Univariate and multivariate analyses were conducted by logistic regression models calculating odds ratios (OR) with 95% confidence intervals (CI) for the association between the dependent outcome (asthma or COPD) and the independent determinants, with mutual adjustment for potential confounders. Univariate analyses were performed on a comprehensive set of potential determinants. For the multivariate analyses, a reduced set of determinants was selected based on clinical relevance and specific interest, i.e. sex, FEV₁ below predicted, BHR, ACOS, asthma score, medication for breathing, current smoking, and high-risk occupation. The analyses of asthma score as an outcome were performed using the score as a continuous variable. As a supplementary analysis, we analysed the asthma score as a categorical variable. Asthma score was analysed by ordered logistic regression calculating ORs. FEV₁ at baseline and at follow up was analysed by linear regression. Results were considered statistically significant at p < 0.05. Analyses were carried out using Stata, version 13.1 (StataCorp, College Station, Texas, USA).
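The analyses were run in Stata; purely as an illustration of the same model structure, the sketch below fits one such adjusted logistic regression in Python with statsmodels and converts the coefficients to odds ratios with 95% confidence intervals. The data frame df and the variable names (persistent, fev1_below_pred, medication, and so on) are hypothetical, not the study's dataset.

```python
import numpy as np
import statsmodels.formula.api as smf

# `df` is an assumed data frame with one row per participant and the baseline covariates.
model = smf.logit(
    "persistent ~ sex + fev1_below_pred + bhr + acos + asthma_score"
    " + C(medication) + current_smoking + high_risk_job",
    data=df,
).fit()

odds_ratios = np.exp(model.params)       # adjusted ORs per unit (or per category) change
or_conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
```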
Results
Among the 1,191 participants of the baseline study, 449 individuals were lost to follow up. Withdrawal analyses showed that the proportion of cases of asthma at baseline did not differ between participants and non-responders (32.2% vs. 27.8%, p = 0.120). Compared to non-responders a larger proportion of the participants were older than 35 years old (53.6% vs. 41.4%; p = 0.000) and reported nasal allergy (42.1% vs. 32.3%; p = 0.001) whereas a smaller proportion were female (53.6% vs. 60.6%; p = 0.022) and current smoker (27.0% vs. 33.0%; p = 0.030). Participants and non-responders did not differ significantly concerning the other variables analysed.
Characteristics of the study population and associations with determinants at baseline by diagnosis at follow up are shown in Table 1. There was an almost equal distribution of individuals with early- and late-onset asthma. The risk of persistent asthma increased with increasing asthma score at baseline, both when the score was analysed as a continuous variable and when analysed as a categorical variable (data not shown). Current smoking at baseline was associated with a reduced risk of persistent asthma at follow up. Of 68 individuals who were smoking at baseline, 27 (40%) ceased smoking during follow up, of whom 15 had persistent asthma and 12 were in remission at follow up (p = 0.88). Figure 1 shows that the average five-item asthma score decreased from baseline to follow up (mean 2.00 vs. 1.47, p < 0.001), which was also the case for the group of individuals with persistent asthma (mean 2.25 vs. 1.86, p = 0.004). The percentage of individuals reporting use of medication for breathing increased with increasing asthma score, but there was no change in the proportion of use of medication from baseline to follow up. All of the five questions defining the asthma score predicted risk of persistent asthma in unadjusted analyses (Table S2), but only an affirmative answer to the question 'shortness of breath while wheezing or whistling in the last 12 months' revealed an increased risk of persistent asthma in the mutually adjusted analysis.
In the adjusted analyses, determinants associated with persistence of asthma at follow up ( Table 2) were use of SABA and use of ICS and/or LABA, whereas current smoking showed a reduced risk. Age of onset of asthma showed heterogeneity. In individuals with early-onset asthma FEV₁ below predicted, asthma score and use of ICS and/or LABA at baseline determined an increased risk, while use of ICS and/or LABA was the only determinant of persistent late-onset asthma.
The adjusted analyses of the association between baseline determinants and asthma score at follow up and FEV₁ at baseline and follow up are shown in Table 3. At follow up, all determinants except current smoking were positively associated with asthma score although not significantly, which was independent of the diagnostic criteria applied.
FEV₁ decreased on average by 196 ml from baseline to follow up, corresponding to 21.3 ml/year (19.4 ml/year in individuals with early-onset asthma and 22.5 ml/year in late-onset asthma; data not shown). At baseline, the presence of ACOS was associated with a reduced FEV₁. At follow up, only medication with SABA predicted a lower decline in FEV₁ compared to those without medication. If the initial FEV₁ at follow up was analysed, the average annual fall was 37.6 ml/year, with no difference related to age at onset.
Discussion
In the present population-based cohort study of young adults, a considerable remission rate was found and during the 9-year follow up the average asthma score based on frequency of asthma-like symptoms declined even in individuals with persistent asthma. Using medication for breathing within 12 months before baseline predicted overall persistent asthma nine years later. Age at onset of asthma showed different determinants of persistent asthma, as FEV₁ below predicted, asthma score and use of ICS and/or LABA were all determinants in individuals with early-onset asthma, while use of ICS and/or LABA was the only determinant in individuals with late-onset asthma. However, there was indication of insufficient asthma control in the present cohort since a considerable fraction of individuals with persistent asthma had several symptoms at follow up.
Table 2. Determinants for persistent asthma and COPD at follow up by logistic regression in 239 individuals with asthma at baseline. The table shows odds ratios (OR) with 95% confidence intervals (95% CI). Bold values denote significant associations (p < 0.05). a: n (total) = 155 due to missing data on age at first asthma attack. b: n (total) = 31 due to missing data on age at first asthma attack. NA: not applicable.
The different baseline determinants associated with an increased risk of persistent asthma in the two groups described by the age of onset may reflect characteristics of two distinct phenotypes. We demonstrated an impact of FEV₁ below predicted on the risk of persistent asthma in individuals with early-onset asthma, which is in accordance with a recent study in which the impact on lung function of early-onset asthma was considerably greater than for late-onset asthma [23]. We were not able to show an impact of atopy on the persistence of asthma, but in a recent review comparing studies on early-and late-onset of current asthma, the findings showed that adults with early-onset disease were more likely to be atopic and had a higher frequency of asthma attacks, whereas adults with late-onset disease were more likely to be female and had greater degrees of fixed airflow obstruction [24].
The use of medication at baseline was a determinant for persistence of disease in the present study. The use of ICS and/or LABA was a stronger determinant than the use of SABA only. This may reflect more severe disease when using long-term controllers than shortterm relievers. The strongest association was seen in individuals with early-onset asthma which may likewise indicate more severe disease. Use of asthma medication in early-and late-onset asthma has been reported, but details vary between studies [24].
In the present study, the applied asthma score demonstrated its usability since increasing asthma score at baseline predicted an increased risk of persistent asthma in individuals with early-onset asthma, while there was no increased risk of later COPD. A substantial proportion of individuals had asthma score equal to zero at baseline (18.8%) and at follow up (36.8%). This may be due to mild cases diagnosed by the diagnostic criteria or due to individuals who remitted during follow up. However, results concerning asthma score in the adjusted models must be interpreted with caution since the five questions comprising the asthma score are part of the diagnostic criteria, which additionally includes supplementary questions on asthma attacks and medication as well as BHR and airway obstruction. When the asthma score was analysed independently of the diagnostic criteria (Table 3), a positive association was still demonstrated in the majority of determinants analysed thus supporting the applicability of the score.
The decreased asthma score during follow up may reflect the individuals who achieved remission during follow up as well as regression towards the mean since all individuals had asthma at baseline. Alternatively, it may imply some effect of asthma treatment during follow up or that the asthma score actually does not fully cover the spectrum of symptoms. However, at follow up a substantial part of individuals who reported use of medication for breathing had a considerable number of symptoms expressed by the asthma score indicating partly controlled or uncontrolled asthma. This emphasises the need for further medication even though some may have treatment resistant asthma.
We showed no overall change in use of medication for breathing from baseline to follow up. A considerable proportion of individuals were not medicated at all regardless of symptoms; even among individuals with asthma scores of 4-5, more than 10% were not currently medicated and only about half used controller medication, indicating a need for treatment, even though poor adherence may also play a role [25]. These findings are in accordance with previous studies showing that insufficient symptom control of asthma remains frequent among individuals with asthma [26,27].
The lung function data showed an overall decline in FEV₁ of 21.3 ml per year in individuals with current asthma at baseline.
Table 3. Association between baseline determinants and asthma score at follow up (OR per step increase), FEV₁ at baseline and FEV₁ at follow up (regression coefficients). The table shows odds ratios (OR) and regression coefficients with 95% confidence intervals (95% CI). Bold values denote significant associations (p < 0.05). Regarding FEV₁, negative numbers report a larger decrease. a: n = 236; 3 participants had missing data in the variable 'high-risk occupation (asthma)'. b: n = 236; 3 participants had missing data in the variable 'high-risk occupation (COPD)'.
This is not different from predicted
in non-asthmatic subjects but the results may be evaluated with caution since pre-BD FEV₁ was used at baseline and post-BD at follow up which may tend to underestimate the value although best FEV₁ was used at both occasions. Analyses using pre-BD values showed larger decline per year, but did not influence on the role of the other determinants in the multivariate analyses. Previous studies have suggested accelerated lung function declines in asthma [28,29] and in a review of adult-onset asthma decline in FEV₁ varied between 25 and 95 ml per year [6]. Still, recent studies have found decline in FEV₁ in asthmatics of 25.3 [18] and 25.6 ml [30] per year, respectively. When adjusted for FEV₁ at baseline, determinants for change in lung function during follow up showed a smaller decrease of 50 ml (corresponding to 5.5 ml per year) in females than in males which may reflect the more than one liter lower FEV₁ in females overall compared to males at baseline. Equal to this, individuals with ACOS and individuals who were current smokers showed a reduced lung function of nearly 0.40 and 0.14 l, respectively, at baseline in comparison with their references i.e. healthy individuals and non-smokers, respectively. These determinants may have played a role even before baseline which may be the reason for the lowered decline in lung function than their references during follow up. The issue of asthma and smoking remains controversial. In the present study, smoking was associated with a reduced risk of persistent asthma, which could be due to a 'healthy smoker effect', i.e. individuals with asthma at baseline had quit smoking earlier or had never started smoking due to airway symptoms since no difference between persistent asthma and remission was found among the individuals who ceased smoking during follow up. Previous studies have suggested that smoking may have a negative effect on longitudinal changes in lung function in individuals with asthma [31]. A recent large study on asthma in the general population aged 20-100 years showed that smoking was the main explanation of poor prognosis and comorbidities in individuals with asthma during 4.5 years of follow up [32].
We were not able to confirm the findings in other studies showing poor prognosis of asthma in individuals with high-risk occupation [33]. The analyses showed no significant correlation between high-risk occupation and persisting asthma, which could be due to the young age of the study population, the relatively short duration of follow up or a 'healthy worker effect', i.e. that subjects with asthma before baseline had chosen an occupation that would not provoke or exacerbate their airway symptoms.
We found that 29% of the individuals with current asthma at baseline achieved remission during follow up, which is slightly higher than the recent comparable longitudinal study using similar follow-up period and diagnostic criteria in which the remission of current adult asthma was 22.2% [18]. Nearly the same range of remission was reported in a study of individuals with self-reported current asthma sampled from the Italian population in which 30% recovered from their asthma after about 10 years [34]. The prevalence of remission of asthma in adults has been reported in the range from 5 to 40%, and usually limited to individuals with mild disease [10,35,36]. However, remission rates can be difficult to compare since there is no golden standard to define remission [6].
Individuals with ACOS at baseline had a higher, although not statistically significant, risk of persistent asthma than individuals with asthma solely. This is in line with a recent study of young adults in the same age range [18] suggesting that ACOS may represent a phenotype with severe asthma, which progresses to fixed airflow obstruction, possibly due to structural changes in the airways. This is supported by the observation that the determinants for persistent asthma, i.e. use of medication, were not associated with COPD at follow up. A recent study of different phenotypes of chronic airway diseases in the general population confirmed the poor prognosis of individuals with ACOS, especially in a subgroup with late-onset asthma defined by current self-reported asthma with onset after 40 years of age [30].
The present study is population-based which constitutes a strength when evaluating determinants for prognosis of asthma in the general population. The longitudinal study design and the use of validated and internationally applied questionnaires and clinical examinations including measurements of pulmonary function of all participants are further strengths [37,38]. By use of multivariate logistic regression models in the analysis, we believe to have controlled for confounding factors potentially able to have an impact on the outcome, i.e. persistent asthma.
Although the study was population based, a risk of selection bias exists since asthmatic individuals with more severe airway symptoms may be more prone to participate than individuals with minor symptoms. Misclassification of disease status may have occurred due to the choice of diagnostic criteria. By use of self-reported information, individuals in remission who reported ever asthma at baseline may have been mild cases without symptoms at follow up. Furthermore, we may have overestimated the number of cases of COPD by using the LLN at the 5th percentile of FEV₁/FVC, corresponding to a z-score <−1.64, and the relatively low dose of beta-agonist chosen due to the study setting in a general population [39]. The use of two different kinds of spirometers, one with a turbine (MicroLoop) at baseline and one with ultrasound transit-time measurement (ndd EasyOne) at follow up, may have affected the lung function data, although a calibration check was performed daily on both spirometers to minimise this bias. Furthermore, definitions of the age limit between early- and late-onset asthma vary widely in the literature [6,8,24]. However, the applied criteria for asthma and COPD are in line with recent research [18]. The cut-off at 16 years was chosen at baseline in order to evaluate risk of occupational factors in a relevant group.
Limitations of the study include the size of the study population, with a limited number of individuals having asthma, leading to low power of the analyses even though we used an enriched sample; furthermore, subgroup analyses to evaluate the impact of potential effect modification were not performed.
Conclusion
The present study showed that determinants for persistent asthma in young adults differed according to age at onset of disease. Pulmonary function below predicted, increased asthma score and use of medication for breathing within 12 months before baseline determined persistence of asthma in individuals with early-onset disease, while use of medication was the only determinant for persistence in individuals with late-onset asthma. Use of controller medication for breathing showed stronger association with persistent asthma than only use of reliever medication in both groups.
Evaluation of asthma score and use of asthma medication indicated insufficiently treated asthma underlining the importance of regular monitoring of symptoms, pulmonary function, and treatment of adult asthma. Furthermore, in comparison with asthma, the diagnosis of ACOS at baseline was associated with an increased yet not significant risk of persisting asthma at follow up, which in line with other studies indicates that ACOS represents a phenotype of severe asthma.
Disclosure statement
No potential conflict of interest was reported by the authors. | v2 |
2020-12-01T15:15:27.720Z | 2020-12-01T00:00:00.000Z | 227235999 | s2orc/train | Potato consumption and risk of cardio-metabolic diseases: evidence mapping of observational studies
Background A recent systematic review of clinical trials concluded that there was no convincing evidence to suggest an association between potatoes and risk of cardio-metabolic diseases. Objective Summarize observational study data related to potato intake and cardio-metabolic health outcomes in adults using evidence mapping to assess the need for a future systematic review. Methods We searched MEDLINE®, Commonwealth Agricultural Bureau, and bibliographies for eligible observational studies published between 1946 and July 2020. Included studies evaluated potato intake in any form or as part of a dietary pattern with risk for cardio-metabolic diseases. Outcomes of interest included cardiovascular disease (CVD), cerebrovascular diseases, diabetes, hypertension, blood lipids, and body composition. Results Of 121 eligible studies, 51 reported two different methods to quantify potato intake (30 studies quantified intake as either grams or serving; 20 studies reported times per week; one reported both methods) and 70 reported potato as part of a dietary pattern and compared higher vs. lower intake, linear change, or difference in potato intake among cases and controls. Studies that quantified potato intake as either grams or serving reported the following outcomes: diabetes (8 studies); cerebrovascular stroke (6 studies); five studies each for CVD, systolic and diastolic blood pressure, and hypertension; three studies each for body mass index, body weight, and CVD mortality; two studies for myocardial infarction; and one study each for blood glucose, HOMA-IR, and blood lipids. Higher potato intake was associated with increased blood pressure and body weight, while no association was observed for the other outcomes. Studies of potato consumption as part of a dietary pattern reported a negative association between the fried form of potato and all or most cardio-metabolic risk factors and diseases. Conclusion Evidence mapping found sufficient data on the association between potato intake and cardio-metabolic disease risk factors to warrant a systematic review/meta-analysis of observational studies. Supplementary Information The online version contains supplementary material available at 10.1186/s13643-020-01519-y.
Background
Potatoes, a predominant food staple in the USA [1], contain a variety of nutrients and phytochemicals that include potassium, vitamin C, phosphorus, magnesium, B vitamins, dietary fiber, and polyphenols [2]. Potatoes contribute the third-highest total phenolic content to the diet among fruits and vegetables, after oranges and apples [2]. They are carbohydrate-rich, providing little fat, and many of the compounds found in potatoes have been shown to be beneficial to health through antioxidant, anti-inflammatory, and anti-hyperlipidemic actions [3]. In contrast, they are also considered to be a high-glycemic-index food [4], and their consumption has been suggested to increase the risk of cardio-metabolic conditions such as type 2 diabetes, obesity, and cardiovascular disease (CVD). However, a recent systematic review of clinical trials concluded that there was no convincing evidence to suggest an association between intake of potatoes and risk of these diseases [5]. Furthermore, results from single-meal test studies have found that intake of boiled potatoes increased satiety compared with intake of other iso-caloric preparations of rice, bread, and pasta [6].
The purpose of this evidence map is to summarize observational and epidemiologic studies examining potato intake and biomarkers of cardio-metabolic health with an objective to identify a comprehensive evidence base for conduct of further systematic review/meta-analysis.
Description of evidence maps
An evidence map is a systematic search of a broad field to organize, summarize, and synthesize current scientific evidence into a visual representation, often a tabular format or a searchable database [7]. Evidence mapping helps identify not only the areas rich in studies for the conduct of further systematic review or meta-analysis but also can identify gaps in knowledge for future research needs [8]. However, evidence mapping does not assess risk of bias or meta-analyze included studies. Our evidence map depicts a summary of literature on the relationship between potato intake and cardio-metabolic health outcomes and biomarkers of cardio-metabolic risk. Using the methodology outlined in the standard systematic review methods [9][10][11], we followed these steps: (1) identify the scope of search and the guiding question; (2) organize a team and assign each person's roles and responsibilities; (3) develop a strategy for a systematic and comprehensive search; (4) define scientific criteria and approach to the selection of studies; (5) screen potentially eligible abstracts; (6) extract data during full-text screening; and (7) categorize the outcomes and summarize the characteristics of included studies. We constructed a study flow diagram to describe our flow of study screening and inclusion from the retrieved data. This is a review of published literature and therefore, it is not necessary to include a statement regarding adherence to the guidelines of the Declaration of Helsinki and Institutional Review Board approval.
Identification of the scope
Our goal was to identify and summarize the extent and distribution of current evidence on the guiding key question that assessed whether higher potato intake, compared to lower intake, was associated with cardio-metabolic health outcomes and biomarkers of cardio-metabolic risk.
Strategy of systematic search and study selection
We conducted an electronic search for studies that evaluated potato intake and cardio-metabolic health outcomes, published from 1946 to July 2020 in MEDLINE® and Commonwealth Agricultural Bureau (CAB) databases. We also searched the bibliography of prior systematic reviews and eligible studies for relevant studies. In electronic searches, we combined the National Library of Medicine's Medical Subject Headings (MeSH) terms, keyword nomenclature, or text words for potatoes in combination with health-related terms (Additional Table S1). Searches were limited to observational and epidemiological studies conducted among adults (≥ 18 years of age). Additional eligibility criteria are detailed in Table 1. The titles and abstracts identified in the literature searches were screened independently by two team members and any conflicts were reviewed by all team members and resolved in weekly team meetings. Full-text publications for citations that met the inclusion criteria were retrieved and were also screened independently by two team members using the predefined eligibility criteria (Table 1). Any conflicts during full-text screening were also reviewed and resolved as a team.
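The full search strategy is reproduced in Additional Table S1. Purely as an illustration of how MeSH headings and free-text potato terms can be combined with cardio-metabolic outcome terms (the term lists below are invented placeholders, not the authors' actual search strings), such a query could be assembled programmatically:

# Illustrative only: the published strategy is in Additional Table S1;
# these term lists are placeholders, not the authors' actual search strings.
potato_terms = ['"Solanum tuberosum"[MeSH]', 'potato*[tiab]', '"french fries"[tiab]']
outcome_terms = ['"Cardiovascular Diseases"[MeSH]', '"Diabetes Mellitus, Type 2"[MeSH]',
                 'hypertension[tiab]', '"blood pressure"[tiab]', 'obesity[tiab]']

def or_block(terms):
    # Join a list of terms into a single parenthesised OR block.
    return "(" + " OR ".join(terms) + ")"

query = or_block(potato_terms) + " AND " + or_block(outcome_terms)
print(query)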
Data extraction and synthesis
We collected pertinent data from eligible studies into the Systemic Review Data Repository (SRDR TM ), a publicly available web-based database application. The basic components of data extraction included (1) population; (2) potato source; (3) study design (observational studies, including prospective cohorts and case-control design); (4) outcome; (5) duration of follow-up; (6) number of participants; (7) number of studies per outcome and exposure; and (8) funding source. We extracted data when analyses stratified data by potato type and/or by sex.
One team member extracted pertinent data from the studies that met the inclusion criteria and a second team member verified the data entries. Any conflicts during the extraction phase were discussed by the assigned extractor and reviewer, and updated by the extractor. Extracted data was analyzed using Microsoft Excel 2007©, and was summarized in narrative form, tables, and figures.
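The counts per outcome and direction of association that underlie the summary figures can be tabulated directly from the extraction sheet; a minimal sketch is shown below, where the column names and example rows are hypothetical stand-ins for the SRDR export fields:

import pandas as pd

# Hypothetical extraction table; the real field names come from the SRDR export.
records = pd.DataFrame({
    "outcome":     ["type 2 diabetes", "type 2 diabetes", "CVD", "SBP", "BMI"],
    "association": ["increased risk", "no difference", "no difference",
                    "increased risk", "increased risk"],
})

# Number of studies per outcome and per direction of association
# (the same tallies shown as stacked bars in Figs. 2 and 3).
summary = (records
           .groupby(["outcome", "association"])
           .size()
           .unstack(fill_value=0))
print(summary)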
Results
Our search identified 3581 abstracts. After the title and abstract screening in duplicate, 193 citations were identified for full-text retrieval and review against eligibility criteria. After full-text screening, a total of 121 articles were eligible for inclusion. The full list of included and excluded articles is listed in Additional Tables S2-S4, respectively. The study flow diagram is depicted in Fig. 1.
Study design characteristics
We categorized eligible studies depending on the details of potato intake: 25 unique studies (in 31 publications) quantified potato intake as either grams or servings per day; 21 studies reported frequency (e.g., times per week) of potato intake, but did not provide data on grams or servings per day, with the exception of one study that reported both intake as frequency and servings per day; and the remaining 69 unique studies (in 70 publications) included potato intake as part of a dietary pattern or score.
Studies that quantified potato intake in grams or servings per day
Studies that quantified potato intake either as grams or servings per day included 15 cohort studies, 8 cross-sectional studies, and 3 case-control studies, while the Nurses' Health Study [NHS] contributed to both a longitudinal study and a case-control study (Additional Table S5).
Seven of the 8 cross-sectional studies were cross-sectional evaluations of potato intake within cohort studies.
Study and population characteristics
Findings from the 25 included observational studies were reported between 1993 and 2019 (Table 2). The 15 cohort studies enrolled between 1,981 and 410,701 participants (89,716 to 5,942,912 person-years). The 8 cross-sectional studies included between 110 and 41,391 subjects, and the 3 case-control studies enrolled between 390 and 2,658 subjects. Funding sources included government only (14 studies, 56%), academia only (2 studies, 8%), and multiple funding sources (7 studies, 28%), and 2 studies (8%) did not report funding sources. The average age of study populations at baseline ranged between 36 and 73.7 years (Table 2). Four studies
Potato intake assessment
The studies listed in Additional Table S5 examined potato intake using validated questionnaires at various average doses ranging from 5.3 g per day to greater than 286 g per day, and from less than 1 serving per month to 1 serving per day. Studies utilized different types of comparisons: 13 studies (52%) compared the highest quantile to the lowest quantile of potato intake, 16 studies (64%) compared the outcomes based on a linear change of potato intake, and 3 studies (12%) compared potato intake between those who had a cardio-metabolic disease and control subjects. Of these, 2 studies reported results from all three types of comparisons and 3 studies reported results by both quantiles and linear change.
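For the first type of comparison (highest vs. lowest quantile), intake in grams per day can be binned into quantiles and the crude outcome frequency compared between the extreme bins; the toy data and column names below are invented for illustration, and real cohort analyses would additionally adjust for confounders in regression models:

import pandas as pd

# Invented example: daily potato intake (g/day) and an incident-outcome flag (0/1).
df = pd.DataFrame({
    "intake_g_per_day": [20, 45, 60, 95, 120, 160, 210, 290],
    "incident_case":    [1,  0,  0,  1,  0,   1,   1,   1],
})

# Assign intake quartiles (Q1 = lowest, Q4 = highest).
df["quartile"] = pd.qcut(df["intake_g_per_day"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Crude incidence in the highest vs. lowest quartile.
rate = df.groupby("quartile", observed=True)["incident_case"].mean()
print("crude risk ratio, Q4 vs Q1:", rate["Q4"] / rate["Q1"])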
Outcome descriptions
Outcomes reported in the evaluated studies are listed in Additional Table S5. All included studies adjusted their models for potential confounders, including sex and age, diet, or other risk factors of cardio-metabolic disorders.
Cardiovascular disease
The CVD-related outcomes reported were incidence of overall CVD (4 studies, 16%), stroke (4 studies, 16%), myocardial infarction (2 studies, 8%), and CVD deaths (3 studies, 12%). Most studies observed no difference in the outcomes except for one study that reported decreased CVD mortality with higher intake of potatoes.
Type 2 diabetes and glucose homeostasis
The incidence of type 2 diabetes was reported by 8 studies (32%), and blood glucose concentrations and homeostatic model assessment-insulin resistance (HOMA-IR) were reported by 1 study (4%) each. Of 8 studies that assessed type 2 diabetes incidence, 4 (50%) studies reported increased risk associated with potato intake, whereas 2 (25%) reported no difference and 2 (25%) reported a decreased risk. Potato intake was found to be associated with a higher glucose concentration but not with HOMA-IR.
Hypertension and blood pressure measures
Five studies (20%) assessed hypertension as an outcome, reporting no difference in incidence in 3 studies and an increased incidence in 2 studies. Systolic blood pressure (SBP) was measured in 5 studies (20%); 3 reported an increased SBP, 1 reported no difference, and 1 found decreased SBP with potato intake. Diastolic blood pressure (DBP) was also assessed in 5 studies (20%); 3 reported no difference, 1 reported an increase, and 1 reported a decrease in DBP with higher intake of potatoes.
Blood lipids
One study (4%) each measured total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and triglycerides (TG) and found no difference between higher and lower intake of potatoes.
Body mass index and body weight
Six studies (24%) assessed either BMI (3 studies) or body weight (3 studies); of these, five studies reported an increased BMI or body weight with potato intake and one study found no association between potato intake and BMI.
Other outcomes
One study (4%) measured the volume of visceral adipose tissue (VAT) and subcutaneous abdominal adipose tissue (SAAT) as outcomes, reporting no association with potato intake. One study (4%) assessed malondialdehyde (MDA) as a marker of oxidative stress and reported that higher intake of potatoes was associated with an increased level of MDA. One study (4%) reported change in hs-CRP, oxidized low-density lipoprotein, and Interleukin-6, and found that higher intake of deep-fried potatoes was associated with higher concentrations of hs-CRP, but not with oxidized low-density lipoprotein and IL-6 ( Fig. 2).
Studies with frequency information of potato intake
Among the 21 studies that reported potato intake by frequency of consumption (e.g., times per week), 10 were cohort studies with follow-up durations ranging from 4 to 20 years and 11 were cross-sectional studies. The total number of participants enrolled was between 338 and 410,701 in the cohort studies and between 210 and 50,339 in the cross-sectional studies. Four studies (19%) included only male subjects, while five studies (24%) enrolled only female subjects, and the remaining studies included both males and females. The most frequently measured outcomes were the incidence of type 2 diabetes (5 studies, 24%), obesity (4 studies, 19%), and blood concentrations of glucose (3 studies, 14%) (Fig. 3).
Outcomes in studies with potato intake frequency information
Potato intake was associated with an increased risk of the following outcomes: CVD (1 of 2 studies); CVD death (1 of 3 studies); incident diabetes (2 of 5 studies); impaired glucose tolerance (1 study); increased blood pressure among men (1 study); increased LDL-C levels (1 of 2 studies); obesity (2 of 4 studies); and an increase in waist circumference among women (1 of 2 studies). Favorable associations with potato intake were also found for the following outcomes: a decreased risk of diabetes (1 study); a reduction in 2-h plasma glucose levels (1 study); and a decrease in body weight with boiled potatoes (1 study). All remaining studies found no association between potato intake and outcomes.
Studies reporting potato intake as dietary patterns
Fig. 2 Association by type of analysis and per outcome in studies that quantified potato intake. This bar graph shows the number of studies reported for each outcome and the breakdown of statistical significance (green color indicates favorable association/decreased risk; yellow color indicates no difference; red color indicates unfavorable association/increased risk for cardio-metabolic outcomes). CVD, cardiovascular disease; DBP, diastolic blood pressure; SBP, systolic blood pressure; CHD, coronary heart disease; HOMA-IR, homeostatic model assessment-insulin resistance; TC, total cholesterol; HDL, high-density lipoprotein; LDL, low-density lipoprotein; TG, triglycerides; BMI, body mass index.
Fig. 3 Associations per outcome in studies reporting potato intake frequency (times per week). This bar graph shows the number of frequency studies reported for each outcome and the breakdown of statistical significance by color (green color indicates favorable association/decreased risk; yellow color indicates no difference; red color indicates unfavorable association/increased risk for cardio-metabolic outcomes). CVD, cardiovascular disease; HDL, high-density lipoprotein; LDL, low-density lipoprotein; BMI, body mass index; hs-CRP, high-sensitivity C-reactive protein.
Of 69 studies in 70 publications that reported potato intake as a component of a dietary pattern or pre-defined dietary score, 57 studies reported on potatoes as part of a dietary pattern, 11 studies reported potatoes as part of a dietary score, and 1 study reported potato intake as both part of a dietary pattern and a dietary score. Forty-six studies assessed the overall intake of potato regardless of potato preparation type or failed to specify the preparation type. Of these 46 studies that included potato in a dietary pattern, 26 studies reported a higher risk of cardio-metabolic disease, 11 studies reported a lower disease risk, 1 study that included potatoes in a prudent diet found a lower risk but a higher risk in the western-diet group along with French fries, and the remaining 8 studies reported no association. The other 26 studies examined dietary patterns that specifically contained French fries or potato chips; all but one of these (which found no association) reported associations with incident cardio-metabolic disease and/or its risk factors. Of note, among all 69 studies, 3 studies reported results stratified by type of potato preparation; all 3 found an increase in cardio-metabolic risk factors (e.g., blood insulin, BMI) with dietary patterns that included French fries or potato chips, but not with dietary patterns that included baked or boiled potatoes.
Research-dense areas and evidence gaps
The most extensively evaluated outcomes were the incidence of CVD, type 2 diabetes, and obesity. However, we found less evidence for the associations between potato intake and biomarkers of cardio-metabolic risk (e.g., inflammation, oxidative stress, and blood lipids). For example, we identified only two studies that assessed inflammation with C-reactive protein (CRP).
Regarding the assessment of potato intake, we found considerable heterogeneity in the way potatoes were prepared (boiled, fried, or both). However, the majority of the studies did not perform subgroup analyses, resulting in a lack of evidence on the influence of specific types of potato preparation on cardio-metabolic risk.
Discussions
This evidence map has identified a large body of epidemiological evidence that evaluated potato intake with cardio-metabolic disease outcomes and categorized outcome data according to the quantified methods of potato intake, as reported in individual studies. Among studies that quantified potato intake, there was an increased risk of type 2 diabetes, weight gain, and SBP, but not for clinical endpoints of CVD and CVD mortality. However, among studies that provided frequency information on potato intake, without specifying the quantity of potatoes consumed, the associations with overall CVD risk factors or outcomes were less conclusive. In studies of dietary pattern or score, dietary patterns that included French fries or potato chips were associated with an increased cardio-metabolic risk or CVD mortality. Notably, such associations were not observed for dietary patterns that contained non-fried types of potatoes (baked or boiled), which emphasizes that the fried types of potato preparations along with associated types of foods consumed with potatoes may increase the cardio-metabolic risk.
The most consistent results from studies that quantified potato intake found that an increased consumption of potatoes was generally related to an increased risk of weight gain and type 2 diabetes. For body weight, two cohorts that reported results by potato subgroup (such as baked/boiled, fries, or chips) showed unfavorable outcomes irrespective of the type of potato preparation, and one cohort that reported results for any type of potato also found an unfavorable outcome. This may be attributed to the high content of starches or refined carbohydrates in potatoes, as it has been reported that the amount of starches or refined carbohydrates in foods is more strongly associated with weight gain than other commonly considered dietary metrics such as fat content or energy density [12]. Also, potatoes have glycemic index (GI) values in a relatively high range regardless of the cooking method [13]. For diabetes, most cohorts conducted in the USA reported unfavorable outcomes associated with total potato intake, whereas one study conducted in China and another conducted in Iran reported a favorable outcome for type 2 diabetes. This suggests that the dietary pattern may be of particular importance in the evaluation of potato intake and health outcomes, considering that different types of potatoes are consumed in different cultures (e.g., mashed or fried potatoes in the Western diet; boiled potatoes in Asian and Mediterranean diets) [14]. Although potatoes have been condemned as unhealthy due to high carbohydrate content and GI, they are rich in potassium, magnesium, vitamin C, B vitamins, fiber, and polyphenols [13][14][15], each of which is associated with a decreased risk of chronic disease. However, potatoes are most often not consumed alone [4]; therefore, it may be more important to assess the effects of potatoes together with the type of diet or dietary pattern rather than assessing the effects of potatoes on outcomes of interest in isolation. The different types of potato preparations are often co-consumed with different types of diet [16,17]. For example, French fries are commonly served with fast foods (such as burgers and sodas), potato chips as a snack, and baked/boiled potatoes as part of a meal. The differences in co-consumed diets may explain the differences in our evidence mapping results. Further careful evaluations of dietary patterns may help to understand the inconsistencies in the results across studies that quantified potato intake.
Description of existing literature
Two meta-analyses have been published on the association of potato intake with risk of chronic disease and mortality [14,15]. A previous systematic review of 20 prospective cohort studies, which focused on various types of mortality as outcomes, reported no association between total potato intake and the risk of all-cause and cancer mortality, and found insufficient evidence for CVD mortality [18]. However, another meta-analysis of 28 reports from prospective studies showed that a one daily serving (150 g/d) increase in total potato intake was associated with an 18% (95% CI, 10-27%) increase in risk for type 2 diabetes and a 12% (95% CI, 1-23%) increase in risk for hypertension, while reporting no association with risk of all-cause mortality, CHD, stroke, and colorectal cancer [19]. French fry consumption, which was specified in only a smaller subset of the included studies, showed stronger positive associations with type 2 diabetes (RR, 1.66; 95% CI, 1.43-1.94) and hypertension (RR, 1.37; 95% CI, 1.15-1.63) risk. A meta-analysis of six cohort studies showed that an increase of one daily serving of total potato intake was associated with a 20% (95% CI, 13-27%) increase in risk of type 2 diabetes [20]. More recently, one large-scale meta-analysis of 185 prospective studies and 58 RCTs was published on the relationship between carbohydrate quality (i.e., dietary fiber, glycemic index/load, and whole-grain intake) and chronic disease outcomes; however, no specific data were available on potato intake [21].
Strengths and limitations
Evidence mapping methods are increasingly used to identify gaps and topics for future systematic reviews. Our review identified a variety of epidemiological methods used in the evaluation of potato intake and chronic disease outcomes. The strengths of our approach include the evaluation of different types of observational designs as well as different types of evaluations (quantity of potato intake versus dietary pattern studies). Our review also identified a sufficient number of studies for conducting a future systematic review.
The limitations of evidence mapping include the lack of critical appraisal of individual study quality. Observational studies using food frequency questionnaires are often limited by the participants' recall bias. Moreover, among eligible studies there was considerable heterogeneity regarding potato intake; potatoes were consumed boiled, fried, or both, and studies often failed to report subgroup analyses by type of potato preparation. Among studies that quantified potato intake, only a few cohorts contributed the majority of the data, which may limit the generalizability to larger populations. Studies that provided intake data only as frequency per week had incomplete information on the total intake per week or per day, precluding their use in a future meta-analysis to assess a dose-response relationship. Although we included only observational studies, there was heterogeneity in the way outcome data were reported, with some studies reporting results for longitudinal data while others reported cross-sectional or case-control results.
Conclusion
Our qualitative gap analysis using evidence mapping identified 121 observational studies, including prospective cohort, case-control, and cross-sectional studies, that examined whether higher potato intake is associated with an increased risk of developing cardio-metabolic disease or with higher CVD risk factors. Our findings demonstrate sufficient evidence on the relationship between potato intake and risk factors associated with CVD, in particular type 2 diabetes, weight gain, and high blood pressure. This evidence map also identifies ample evidence for a future systematic review and meta-analysis of these outcomes, addressing the need for a thorough evaluation of different types of potato preparations as well as the accompanying diet.
Additional file 1: Table S1. Search strategy. Table S4. List of 75 studies identified from abstract screening but excluded after full-text screening. List of studies excluded during full-text screening. Table S5. Baseline characteristics of included studies. Table listing the cohort names, study design, enrollment years, country, funding, N analyzed, reported age of participants, percent of male participants, baseline health status of participants, type of potato consumed, type of analysis, potato intake amounts, and list of study outcomes reported for each study. | v2
2021-04-20T05:14:45.376Z | 2021-04-19T00:00:00.000Z | 233297299 | s2orc/train | 254 COVID-19 related outcomes in psoriasis and psoriasis arthritis patients
Psoriasis is a systemic chronic inflammatory disorder that affects the skin and is associated with other disorders. There is scant literature on outcomes of COVID-19 patients with Psoriasis (Pso) and Psoriasis Arthritis (PsoA), especially from multicenter data. Therefore, the aim was to investigate the risk of COVID complications in these two groups. A retrospective cohort study was done using TriNetX, a federated real-time database of 63 million records. COVID patient cohorts were identified by validated ICD-10/serology codes per CDC guidelines. A 1:1 matched propensity score analysis was conducted, adjusting for comorbidities and demographics, to calculate adjusted Risk Ratios (aRR) with 95% CI. 45-day COVID complications were examined, with severe COVID defined as a composite of mortality and ventilation. Subgroup analyses were also performed for Pso and PsoA patients on systemic immunosuppressants. In a matched sample of 2288 patients in each cohort, there were no differences between Pso-COVID patients and non-Pso COVID patients in hospitalization (0.90 [0.78-1.03]), sepsis (0.78 [0.54-1.14]), mortality (0.82 [0.57-1.19]), and severe COVID (0.77 [0.58-1.03]). Pso-COVID patients had a statistically significantly lower risk of acute respiratory distress syndrome (0.51 [0.30-0.90]) and mechanical ventilation (0.65 [0.45-0.95]). In a matched sample of 502 patients in each cohort, PsoA-COVID patients had no differences in any of the listed outcomes. A subgroup analysis revealed that Pso-COVID and PsoA-COVID patients with a one-year history of systemic immunosuppressant use also had no differences in COVID outcomes compared to Pso-COVID patients and PsoA-COVID patients without immunosuppressants, respectively. Pso-COVID and PsoA-COVID patients were not at higher risk for severe COVID complications. A history of immunosuppressant use in both cohorts also revealed no higher risk of COVID complications. Additional studies are warranted to assess the longer-term impacts of COVID on Pso and PsoA patients
250
Direct healthcare cost of atopic dermatitis in the Swedish population I Lindberg 1 , A de Geer 2 , G Ortsäter 1 , A Rieem Dun 1 , K Geale 1,3 , JP Thyssen 4 , L Von Kobyletzki 5 , A Metsini 6 , D Henrohn 2 , P Neregard 2 , A Cha 7 , JC Cappelleri 8 , W Romero 9 and MP Neary 10 1 Quantify Research, Sthlm, Sweden, 2 Pfizer AB, Sollentuna, Sweden, 3 Public Health & Clinical Medicine, Umea University, Umea, Sweden, 4 Dermatology & Venerology, Bispebjerg Hospital, Cph, Denmark, 5 Dermatology, Skåne University Hospital, Lund, Sweden, 6 Inst Medical Sciences, Orebro University, Orebro, Sweden, 7 Pfizer Inc, NY, New York, United States, 8 Pfizer Inc, Groton, Connecticut, United States, 9 Pfizer Ltd, London, United Kingdom and 10 Pfizer Inc, Collegeville, Pennsylvania, United States Data quantifying population-based direct healthcare costs (DHCC) for atopic dermatitis (AD) by severity are limited. This study was designed to provide estimates for these costs. Patients were identified at first AD diagnosis in the National Patient Registry (secondary care) or in primary care (national coverage: 31%) (International Classification of Diseases-10 L20) or first dispensation of topical calcineurin inhibitor or topical corticosteroid (Anatomical Therapeutic Chemical code D11AH01/02 once; D07 twice in a year) in the Prescribed Drug Registry in 2007-17 (index) and followed until death, emigration, 31 Dec 2018 or adulthood. Patients without AD diagnosis with a record of diagnoses/treatment for other non-AD skin conditions were excluded. Patients were matched 1:1 on age, gender and region to controls. 1-year DHCC for secondary and primary care visits and filled prescriptions were compared with controls (2020 €). Disease severity (mild-to-moderate [M2M] vs severe) using AD treatment and visits as proxies was assessed between index to 30 days after. 187,338 M2M (48% female; mean age 4) and 46,754 severe children (51%; 8), while 445,317 M2M (55%; 55) and 11,640 severe adults (57%; 53) were included. In children vs. controls, 1-year DHCC for secondary care, primary care and medications were respectively €72, €23, €33 million (mn) higher in M2M and €26, €4, €13 mn higher in severe; in adults vs. controls, €353, €68, €182 mn higher in M2M and €21, €2, €17 mn higher in severe (all comparisons significant, p<0.05). On population level, AD is associated with substantial economic burden, which is higher in M2M vs severe AD partially due to higher prevalence of M2M.
251
Dermatologist preferences regarding implementation strategies to improve statin use among patients with psoriasis JS Barbieri 1 , R Beidas 2 , G Gondo 4 , N Williams 6 , A Armstrong 5 , A Ogdie 2 , N Mehta 3 and J Gelfand 1 1 Dermatology, University of Pennsylvania, Philadelphia, Pennsylvania, United States, 2 University of Pennsylvania, Philadelphia, Pennsylvania, United States, 3 National Institutes of Health, Bethesda, Maryland, United States, 4 NPF, Alexandria, Virginia, United States, 5 University of Southern California Keck School of Medicine, Los Angeles, California, United States and 6 Boise State University, Boise, Idaho, United States Patients with psoriasis are at increased risk of cardiovascular (CV) disease, but are less likely to have high cholesterol identified and treated with statins. Since many patients with psoriasis do not routinely see primary care, involving dermatologists to screen for cholesterol and potentially prescribe statins has promise to improve CV outcomes. To evaluate dermatologist preferences for strategies to improve statin use among psoriasis patients, a survey consisting of a best-worst scaling choice experiment of 8 implementation strategies and items on willingness to screen and manage CV risk factors was fielded among dermatologists recruited through the National Psoriasis Foundation from Oct-Dec 2020. Ratio-scaled preference scores for each strategy were generated using hierarchical Bayes analysis in Lighthouse Studio. In these preliminary results among 69 dermatologists, 44% were male and 25% practiced in an academic setting. Overall, 64% agreed that checking a lipid panel and calculating a CV risk score seems doable and 32% agreed that prescribing statins seems doable. Additionally, 68% agreed that they would consider changing their practice if a trial demonstrated that psoriasis patients achieved better CV prevention when their dermatologists screened for high cholesterol and prescribed statins. In the best-worst scaling experiment, the highest ranked strategies included clinical decision support (preference score, 23.2), patient educational materials (15.7), and physician educational outreach (15.4). Our results highlight that dermatologists are willing to consider lipid screening and prescribing statins in those with psoriasis. These findings will guide the design of a future trial to evaluate strategies to improve lipid screening and statin use among psoriasis patients. Background: Dermatology patients and practitioners use social media for rapid dissemination of health information. TikTok, a short-form video sharing platform, is the fastest growing social media network and represents a novel, unsupervised source of medical information. Methods: Top dermatologic diagnoses and procedures from publicly available survey data were queried as TikTok hashtags. Content of the first 40 videos for each hashtag were analyzed from July 10 to 13, 2020, and classified by creator (healthcare professional, personal, business, professional organization), content (education, promotional, patient experience, entertainment), and impact (views, likes). Results: A total of 544 videos were analyzed. Laypeople created the most videos (45%), followed by healthcare professionals (HCPs) (39%). Board-certified dermatologists (BCDs) accounted for a minority of total posts (15.1%). BCDs accounted for the most videos made by a HCP (33%). Predominant content was educational (40.8%), followed by entertainment (26.7%). 
Videos from laypeople received the largest percentage of views (50.68%). The most-liked (66.9 million) and most-viewed (378 million) posts were both related to #skincare, but only 2.5% of analyzed #skincare videos were produced by BCDs. Conclusion: A majority of dermatology-related videos on TikTok are produced by laypeople. However, the top 5 dermatologists on TikTok have a combined following of 5.2 million (M), with over 600M views and 80M likes, illustrating the wide reach and potential opportunity for education using this novel platform. TikTok has a large audience interested in skin-related education. This highlights a potential role for BCDs to engage in this space as educators and viewers, both to provide accurate information and to be aware of skincare trends our patients are exposed to.
253
Racial differences in cutaneous sarcoidosis J Lai 1 , Y Semenov 2 , N Sutaria 1 , S Roh 1 , J Choi 1 , Z Bordeaux 1 , N Kim 1 , J Alhariri 1 and S Kwatra 1 1 Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States and 2 Dermatology, Massachusetts General Hospital, Boston, Massachusetts, United States A substantial percentage of sarcoidosis patients experience cutaneous symptoms. Racial differences in systemic involvement, cutaneous presentation, and prognosis remain understudied. We conducted a retrospective chart review from a population-based sample of 240 patients diagnosed with cutaneous sarcoidosis at the Johns Hopkins Hospital aged 18+ from 2015-2020. Multiple logistic regressions were conducted to assess differences in disease characteristics by race after adjusting for insurance type, age at diagnosis, and sex. More black patients than white patients had cutaneous sarcoidosis (B=183, W=47). Compared to white patients, black patients were more likely to be female (F=75.4%, M=24.6%, p=.005), were diagnosed earlier (B=41.7, W=49.8, p<.0001), and had longer follow-up time (B=14.2 yrs, W=8.3 yrs, p=.004). Although black patients had more progressive disease (p=.033), this association was not significant when controlling for age of diagnosis (p=.37). Thus, earlier age of diagnosis was associated with worse prognosis (p<.001). Blacks with cutaneous sarcoidosis were less likely to also have ocular sarcoid involvement (OR .108, 95% CI .014-.836; p=.033). Among patients with pulmonary involvement, blacks were more likely to have restrictive lung disease (OR 3.15, CI 1.15-8.7; p=.007) and decreased diffusing capacity (OR 3.12, CI 1.15-8.67; p=.026). Black patients were more likely than white patients to have cutaneous manifestations of plaques (OR 3.94, CI 1.69-9.17; p=.002) and lupus pernio (p=.016). Here we demonstrate racial differences in sarcoid prognosis, systemic, and cutaneous involvement. Racial differences in systemic and cutaneous involvement may be related to differences in differential disease prognosis, potentially serving as indications for more extensive therapy to combat progressive sarcoid.
254
COVID-19 related outcomes in psoriasis and psoriasis arthritis patients R Raiker 2 , H Pakhchanian 1 and VA Patel 1 1 The George Washington University School of Medicine and Health Sciences, Washington, District of Columbia, United States and 2 West Virginia University School of Medicine, Morgantown, West Virginia, United States Psoriasis is a systemic chronic inflammatory disorder that affects the skin and is associated with other disorders. There is scant literature on outcomes of COVID-19 patients with Psoriasis (Pso) and Psoriasis Arthritis (PsoA), especially from multicenter data. Therefore, the aim was to investigate the risk of COVID complications in these two groups. A retrospective cohort study was done using TriNetX, a federated real-time database of 63 million records. COVID patient cohorts were identified by validated ICD-10/serology codes per CDC guidelines. A 1:1 matched propensity score analysis was conducted, adjusting for comorbidities and demographics, to calculate adjusted Risk Ratios (aRR) with 95% CI. 45-day COVID complications were examined, with severe COVID defined as a composite of mortality and ventilation. Subgroup analyses were also performed for Pso and PsoA patients on systemic immunosuppressants. In a matched sample of 2288 patients in each cohort, there were no differences between Pso-COVID patients and non-Pso COVID patients in hospitalization (0.90 [0.78-1.03]), sepsis (0.78 [0.54-1.14]), mortality (0.82 [0.57-1.19]), and severe COVID (0.77 [0.58-1.03]); Pso-COVID patients had a statistically significantly lower risk of acute respiratory distress syndrome (0.51 [0.30-0.90]) and mechanical ventilation (0.65 [0.45-0.95]). In a matched sample of 502 patients in each cohort, PsoA-COVID patients had no differences in any of the listed outcomes. A subgroup analysis revealed that Pso-COVID and PsoA-COVID patients with a one-year history of systemic immunosuppressant use also had no differences in COVID outcomes compared to Pso-COVID patients and PsoA-COVID patients without immunosuppressants, respectively. Pso-COVID and PsoA-COVID patients were not at higher risk for severe COVID complications. A history of immunosuppressant use in both cohorts also revealed no higher risk of COVID complications. Additional studies are warranted to assess the longer-term impacts of COVID on Pso and PsoA patients.
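The matching and effect estimation described in these abstracts were performed inside the TriNetX platform; as a rough, hedged sketch of the same idea (a logistic propensity model, greedy 1:1 nearest-neighbour matching without replacement, then a risk ratio with a log-normal 95% CI), the toy simulation below illustrates the mechanics. All variable names and data are invented for illustration and are not drawn from the actual study.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy cohort: exposure = psoriasis, covariates = age and a comorbidity flag,
# outcome = severe COVID; everything here is simulated for illustration.
n = 2000
age = rng.normal(55, 12, n)
comorb = rng.binomial(1, 0.3, n)
exposed = rng.binomial(1, 1 / (1 + np.exp(-(0.02 * (age - 55) + 0.5 * comorb - 1.5))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (age - 55) + 0.8 * comorb - 3.0))))

# 1) Propensity score: P(exposed | covariates).
X = np.column_stack([age, comorb])
ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated = np.where(exposed == 1)[0]
controls = list(np.where(exposed == 0)[0])
pairs = []
for t in treated:
    j = min(controls, key=lambda k: abs(ps[k] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)

# 3) Risk ratio in the matched sample with a log-normal 95% confidence interval.
t_idx = np.array([p[0] for p in pairs])
c_idx = np.array([p[1] for p in pairs])
a, n1 = outcome[t_idx].sum(), len(t_idx)   # events / total among exposed
c, n2 = outcome[c_idx].sum(), len(c_idx)   # events / total among matched controls
rr = (a / n1) / (c / n2)
se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
lo, hi = np.exp(np.log(rr) - 1.96 * se), np.exp(np.log(rr) + 1.96 * se)
print(f"aRR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")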
255
The risk of contracting COVID-19 after dermatological procedures compared with other medical procedures R Raiker 2 , H Pakhchanian 1 , A Baghdjian 3 and VA Patel 1 1 The George Washington University School of Medicine and Health Sciences, Washington, District of Columbia, United States, 2 West Virginia University School of Medicine, Morgantown, West Virginia, United States and 3 Pasadena City College, Pasadena, California, United States During the COVID-19 pandemic, research has shown that many patients have decided to delay elective procedures, even if available, to reduce COVID exposure. There is scant literature demonstrating the risk of COVID after dermatological procedures and whether these risks are higher compared to other medical procedures. This study aims to investigate these risks. A retrospective cohort study was done using TriNetX, a federated real-time database of 63 million patient records. Patients undergoing any procedure were identified by CPT codes from Jan 2020-Nov 2020. ICD-10 and serology codes were used to identify 30-day risk of post-procedural COVID diagnosis per CDC guidelines. A 1:1 matched propensity score analysis was conducted, adjusting for comorbidities and demographics, to calculate adjusted Risk Ratios (aRR) with 95% CI. 224,536 dermatological procedures were conducted during the timeframe. Overall, there was a 2% risk of 30-day COVID diagnosis after a dermatological procedure. After matching, patients had a lower risk of contracting COVID after undergoing dermatological procedures when compared to urinary procedures ( | v2
2012-03-27T15:49:01.000Z | 2012-03-14T00:00:00.000Z | 73555919 | s2orc/train | Barium abundance in red giants of NGC 6752. Non-local thermodynamic equilibrium and three-dimensional effects
(Abridged) Aims: We study the effects related to departures from non-local thermodynamic equilibrium (NLTE) and homogeneity in the atmospheres of red giant stars in Galactic globular cluster NGC 6752, to assess their influence on the formation of Ba II lines. Methods: One-dimensional (1D) local thermodynamic equilibrium (LTE) and 1D NLTE barium abundances were derived using classical 1D ATLAS stellar model atmospheres. The three-dimensional (3D) LTE abundances were obtained for 8 red giants on the lower RGB, by adjusting their 1D LTE abundances using 3D-1D abundance corrections, i.e., the differences between the abundances obtained from the same spectral line using the 3D hydrodynamical (CO5BOLD) and classical 1D (LHD) stellar model atmospheres. Results: The mean 1D barium-to-iron abundance ratios derived for 20 giants are<[Ba/Fe]>_{1D NLTE} = 0.05 \pm0.06 (stat.) \pm0.08 (sys.). The 3D-1D abundance correction obtained for 8 giants is small (~+0.05 dex), thus leads to only minor adjustment when applied to the mean 1D NLTE barium-to-iron abundance ratio for the 20 giants,<[Ba/Fe]>_{3D+NLTE} = 0.10 \pm0.06(stat.) \pm0.10(sys.). The intrinsic abundance spread between the individual cluster stars is small and can be explained in terms of uncertainties in the abundance determinations. Conclusions: Deviations from LTE play an important role in the formation of barium lines in the atmospheres of red giants studied here. The role of 3D hydrodynamical effects should not be dismissed either, even if the obtained 3D-1D abundance corrections are small. This result is a consequence of subtle fine-tuning of individual contributions from horizontal temperature fluctuations and differences between the average temperature profiles in the 3D and 1D model atmospheres: owing to the comparable size and opposite sign, their contributions nearly cancel each other.
Introduction
Red giants in Galactic globular clusters (GGCs) carry a wealth of important information about the chemical evolution of individual stars and their harboring populations. Owing to their intrinsic brightness, they are relatively easily accessible to high-resolution spectroscopy, and thus are particularly suitable for tracing the chemical evolution histories of intermediate-age and old stellar populations. Unsurprisingly, a large amount of work has been done in this direction in the past few decades (for a review see, e.g., Gratton et al. 2004; Carretta et al. 2010), which has resulted, for example, in the discoveries of abundance anti-correlations for Na-O (Kraft 1994; Gratton et al. 2001; Carretta et al. 2009a), Mg-Al (see, e.g., Carretta et al. 2009b), Li-Na (Pasquini et al. 2005; Bonifacio et al. 2007), and a correlation for Li-O (Pasquini et al. 2005; Shen et al. 2010).
Although the GGC stars display a scatter in their light element abundances, there is generally no spread in the abundances of iron-peak and heavier elements larger than the typical measurement errors (≈0.1 dex). The only known exceptions are ω Cen (Suntzeff & Kraft 1996; Norris et al. 1996) and M 54 (Carretta et al. 2010b), which do show noticeable star-to-star variations in the iron abundance. However, it is generally accepted that they are not genuine GGCs but instead remnants of dwarf galaxies. The first cluster where significant star-to-star variation in heavy element abundances was detected was M 15 (Sneden et al. 1997, 2000; Otsuki et al. 2006; Sobeck et al. 2011). Roederer & Sneden (2011) found that the abundances of the heavy elements La, Eu, and Ho in 19 red giants of M 92 also indicate significant star-to-star variations. The latter claim, however, was questioned by Cohen (2011), who found no heavy element abundance spread larger than ∼ 0.07 dex in 12 red giants belonging to M 92. The primary formation channels of the s-process elements are the low- and intermediate-mass asymptotic giant branch (AGB) stars, thus information about the variations in heavy element abundances may shed light on the importance of AGB stars to the chemical evolution of GGCs.
Chemical inhomogeneities involving the light elements in GGCs are the result of the products of a previous generation of stars. The nature of the stars producing these elements, or 'polluters' as they are often called, remains unclear. The main contenders are rapidly rotating massive stars that pollute the cluster through their winds (Decressin et al. 2007) or AGB stars (D'Ercole et al. 2011, and references therein). A second-order issue is whether the "polluted" stars are coeval and only their photospheres are polluted, or whether they are true second-generation stars formed from the polluted material. The evidence of multiple main sequences and sub-giant branches in GGCs (see Piotto 2008, 2009, for reviews) strongly supports the latter hypothesis, although some contamination of the photospheres may still be possible.
Most of the abundance studies in GGCs have made the assumption of local thermodynamic equilibrium (LTE). Nonequilibrium effects may become especially important at low metallicity owing to the lower opacities (e.g., overionization by UV photons; see, e.g., Asplund 2005;Mashonkina et al. 2011, for more details). Deviations from LTE also occur because of the lower electron number density in the lower metallicity stellar atmospheres, which in turn decreases the electron collision rates with atoms and ions. Since most GGCs have metallicities that are significantly lower than solar, it is clearly desirable to derive abundances using the non-LTE (NLTE) approach.
Nevertheless, real stars are neither stationary nor one-dimensional (1D), as assumed in the classical 1D atmosphere models that are routinely used in stellar abundance work. A step beyond these limitations can be made by using three-dimensional (3D) hydrodynamical atmosphere models that account for the three-dimensionality and non-stationarity of stellar atmospheres from first principles. Recent work has shown that significant differences may be expected between stellar abundances derived using 3D hydrodynamical and classical 1D model atmospheres (Collet et al. 2007, 2009; González Hernández et al. 2009; Ramírez et al. 2009; Behara et al. 2010; Dobrovolskas et al. 2010; Ivanauskas et al. 2010; see also Asplund 2005 for a review of earlier work). These differences become larger at lower metallicities and at their extremes may reach 1 dex (!).
It is thus timely to re-analyze in a systematic and homogeneous way the abundances of various chemical elements in the GGCs, employing for this purpose state-of-the-art 3D hydrodynamical atmosphere models together with NLTE analysis techniques. A step towards this was made in our previous work, where we derived 1D NLTE abundances of Na, Mg, and Ba in the atmospheres of red giants belonging to the GGCs M10 and M71 (Mishenina et al. 2009). We found that, in the case of the red giant N30 in M71, the 3D-1D abundance corrections for Na, Mg, and Ba were minor and did not exceed 0.02 dex.
In this study, we extend our previous work and derive 1D NLTE abundances of barium in the atmospheres of 20 red giants that belong to the Galactic globular cluster NGC 6752. The analysis is done using the same techniques as in Mishenina et al. (2009). We also derive the 3D-1D LTE abundance corrections for the barium lines in 8 red giants and apply them to correct the 1D barium abundances for the 3D effects. Finally, we quantify the influence of both NLTE and 3D-related effects on the formation of barium lines. The paper is organized as follows. In Sect. 2, we describe the observational material used in the abundance analysis. The procedure of barium abundance determinations is outlined in Sect. 3, where we also provide the details of the LTE/NLTE analysis and the determination of the 3D-1D abundance corrections. A discussion of our derived results is presented in Sect. 4 and the conclusions are given in Sect. 5.
Observational data
We used reduced spectra of 20 red giants in NGC 6752 available from the ESO Science Archive. The high-resolution (R = 60 000) spectral material was acquired with the UVES spectrograph at the VLT-UT2 (programme 65.L-0165(A), PI: F. Grundahl). Spectra obtained during the three individual exposures were co-added to achieve the average signal-to-noise ratio S/N ≈ 130 at 600.0 nm. Observations were taken in the standard Dic 346+580 nm setting that does not include the Ba II 455.403 nm resonance line. The other three Ba II lines at 585.369, 614.173, and 649.691 nm (see Table 2) are all found in the upper CCD of the red arm covering the range 583-680 nm. More details of the spectra acquisition and reduction procedure are provided by Yong et al. (2005). All the red giants studied in this work are located at or below the red giant branch (RGB) bump.
Atmospheric parameters and iron abundances
Continuum normalization of the observed spectra and equivalent width (EW) measurements were made using the DECH20T software package (Galazutdinov 1992), where the EWs were determined using a Gaussian fit to the observed line profiles.
Stellar model atmospheres used in the abundance determinations were calculated with the Linux port version (Sbordone et al. 2004; Sbordone 2005) of the ATLAS9 code (Kurucz 1993), using the ODFNEW opacity distribution tables from Castelli & Kurucz (2003). Models were computed using the mixing length parameter α MLT = 1.25 and a microturbulence velocity of 1 km s −1 , with the overshooting option switched off. The LTE abundances were derived using the Linux port version (Sbordone et al. 2004; Sbordone 2005) of the Kurucz WIDTH9 package (Kurucz 1993; Kurucz 2005; Castelli 2005).
The effective temperature, T eff , was determined under the assumption of excitation equilibrium, i.e., by requiring that the derived iron abundance should be independent of the excitation potential, χ (Fig. 1, upper panel). To obtain the value of surface gravity, log g, we required that the iron abundances determined from the Fe I and Fe II lines would be equal. The microturbulence velocity, ξ t , was determined by requiring that Fe I lines of different EWs would provide the same iron abundance (Fig. 1, lower panel). The derived effective temperatures, gravities, and microturbulence velocities of individual stars agreed to within 60 K, 0.2 dex, and 0.16 km s −1 , respectively, with those determined by Yong et al. (2005).
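The three spectroscopic conditions used here (a zero slope of A(Fe I) against excitation potential for T eff, Fe I/Fe II ionization balance for log g, and a zero slope of A(Fe I) against EW for ξ t) are easy to check numerically once line-by-line abundances are available; the arrays below are placeholders for the per-line WIDTH9 output of one trial model, not actual measurements.

import numpy as np

# Placeholder per-line Fe I results for one trial (Teff, log g, xi_t) ATLAS9 model:
# excitation potential chi [eV], equivalent width EW [pm], derived abundance A(Fe I).
chi   = np.array([2.2, 2.8, 3.3, 3.6, 4.2, 4.6])
ew    = np.array([9.5, 7.0, 6.1, 5.2, 4.0, 3.1])
a_fe1 = np.array([5.91, 5.90, 5.89, 5.90, 5.88, 5.90])
a_fe2_mean = 5.895   # mean abundance from the Fe II lines (placeholder)

slope_chi = np.polyfit(chi, a_fe1, 1)[0]   # ~0 when Teff is correct
slope_ew  = np.polyfit(ew,  a_fe1, 1)[0]   # ~0 when xi_t is correct
ion_diff  = a_fe1.mean() - a_fe2_mean      # ~0 when log g is correct

print(f"dA/dchi = {slope_chi:+.4f} dex/eV, dA/dEW = {slope_ew:+.4f} dex/pm, "
      f"FeI - FeII = {ion_diff:+.4f} dex")
# The trial parameters are adjusted iteratively until all three diagnostics vanish.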
The LTE iron abundances for all stars in our sample were derived using 50-60 neutral iron lines (Table A.1; note that the iron abundance derived from the ionized lines was required to match that of neutral iron, i.e., to obtain the estimate of surface gravity, thus it is not an independent iron abundance measurement).
To minimize the impact of NLTE effects on the iron abundance determinations, we avoided neutral iron lines with the excitation potential χ < 2.0 eV. Oscillator strengths and damping constants for all iron lines were retrieved from the VALD database (Kupka et al. 2000). The obtained iron abundances are provided in Table 1. The contents of the table are as follows: the star identification and its coordinates are given in Cols. 1-3, effective temperatures and iron abundance derivatives relative to the excitation potential are in columns 4 and 5, respectively, the adopted microturbulence velocity and iron abundance derivative relative to the equivalent width are in columns 6 and 7, respectively, the adopted values of log g are in column 8, iron abundances obtained from Fe I and Fe II lines are in columns 9 and 10, respectively, and the difference between them is in column 11. The mean iron abundance obtained for the 20 stars is [Fe/H] = −1.60 ± 0.05, which is in excellent agreement with [Fe/H] = −1.62 ± 0.02 obtained by Yong et al. (2005).
One-dimensional LTE abundances of barium
One-dimensional (1D) LTE barium abundances were derived from the three Ba II lines centered at 585.3688 nm, 614.1730 nm, and 649.6910 nm. Damping constants and other atomic parameters of the three barium lines are provided in Table 2. The line equivalent widths were measured with the DECH20T software (Table 3, columns 2-4). Hyperfine splitting of the 649.6910 nm line was not taken into account in the 1D LTE analysis. The derived barium abundances and barium-to-iron abundance ratios are given in Table 3, columns 5 and 7, respectively.
Notes to Table 2: (a) from Wiese & Martin (1980); (b) natural broadening constant, from Mashonkina & Bikmaev (1996); (c) Stark broadening constant, from Kupka et al. (2000); (d) van der Waals broadening constant, from Korotin et al. (2011).
We note that the barium line at 614.1730 nm is blended with a neutral iron line located at 614.1713 nm. To estimate how this affects the accuracy of the abundance determination, we synthesized the barium 614.1730 nm line with and without the blending iron line, for all stars in our sample. The comparison of the equivalent widths of blended and non-blended lines reveals that the contribution of the iron blend never exceeds ∼ 2.4 %, or ≤ 0.05 dex in terms of the barium abundance. The contribution of the iron blend to the EW of the 614.1730 nm line was thus taken into account by reducing the measured equivalent widths of this barium line by 2.4 % for all stars. We would like to point out, however, that in the 1D NLTE analysis the barium abundances were derived by fitting the synthetic spectrum to the observed line profile, thus the influence of the iron blend at 614.1713 nm was properly taken into account. Assessment of the sensitivity of the derived abundances to the atmospheric parameters yields the following results:
- a change in the effective temperature by ±80 K leads to a change in the barium abundance measured from the three Ba II lines by ∓0.03 dex;
- a change in the surface gravity by ±0.1 dex changes the barium abundance by ∓0.02 dex;
- a change in the microturbulence velocity, ξ t , by ±0.1 km s −1 changes the barium abundance by ∓0.07 dex.
Since barium lines in the target stars are strong and situated in the saturated part of the curve of growth, it is unsurprising that the uncertainty in the microturbulence velocity is the largest contributor to the uncertainty in the derived barium abundance. The total contribution from the individual uncertainties in T eff , log g, and ξ t leads to the systematic uncertainty in the barium abundance determinations of ∼ 0.08 dex. We note, however, that the latter number does not account for the uncertainty in the equivalent width determination and thus only provides the lower limit to the systematic uncertainty (e.g., 5 percent in the equivalent width determination leads to the barium abundance uncertainty of ∼ 0.1 dex).
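Assuming the three contributions listed above are independent and combine in quadrature (the combination rule is not stated explicitly in the text, but quadrature reproduces the quoted total):
σ_sys ≈ √(0.03² + 0.02² + 0.07²) ≈ 0.08 dex.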
The obtained mean 1D LTE barium abundance for the sample of 20 stars in NGC 6752 is A(Ba) 1D LTE = 0.80±0.09±0.08 and the barium-to-iron ratio is [Ba/Fe] 1D LTE = 0.24 ± 0.05 ± 0.08. In both cases, the first error is a square root of the variance calculated for the ensemble of individual abundance estimates of 20 stars. The second error is the systematic uncertainty in the atmospheric parameter determination. The difference between the individual barium abundances derived in a given star using the three barium lines is always below ∼ 0.1 dex.
One-dimensional NLTE abundances of barium
The one dimensional (1D) NLTE abundances of barium were determined using the version of the 1D NLTE spectral synthesis code MULTI (Carlsson 1986) modified by Korotin et al. (1999). The model atom of barium used in the NLTE spectral synthesis calculations was taken from Andrievsky et al. (2009). To summarize briefly, it consisted of 31 levels of Ba I, 101 levels of Ba II (n < 50) and the ground level of Ba III. In total, 91 bound-bound transitions were taken into account between the first 28 levels of Ba II (n < 12, l < 5). Fine structure was taken into account for the levels 5d 2 D and 6p 2 P 0 , according to the prescription given in Andrievsky et al. (2009). We also accounted for the hyperfine splitting of the barium 649.6910 nm line. Isotopic splitting of the barium lines was not taken into account. Owing to the low ionization potential of neutral barium (∼ 5.2 eV), Ba II is the dominant ionization stage in the line-forming regions of investigated stars, with n(Ba I)/n(Ba II) 10 −4 . It is therefore safe to assume that none of the Ba I transitions may noticeably change the level populations of Ba II (cf. Mashonkina et al. 1999). Further details about the barium model atom, the assumptions used, and implications involved can be found in Andrievsky et al. (2009) and Korotin et al. (2011).
The solar abundances of iron and barium were assumed to be log A(Fe) ⊙ = 7.50 and log A(Ba) ⊙ = 2.17 respectively, on the scale where log A(H) ⊙ = 12. These abundances were determined using the Kurucz Solar Flux Atlas (Kurucz et al. 1984) and the same NLTE approach as applied in this study.
A typical fit of the synthetic line profiles to the observed spectrum is shown in Fig. 2, where we plot synthetic and observed profiles of all three barium lines used in the analysis. The elemental abundances and barium-to-iron abundance ratios derived for the individual cluster giants are provided in Table 3 (columns 6 and 8, respectively).
The mean derived 1D NLTE barium-to-iron ratio for the 20 cluster red giants is [Ba/Fe] 1D NLTE = 0.05 ± 0.06 ± 0.08. The first error is the square root of the variance in [Ba/Fe] 1D NLTE estimates obtained for the ensemble of 20 stars, thus measures the star-to-star variation in the barium-to-iron ratio. The second error is the systematic uncertainty resulting from the stellar parameter determination (see Section 3.1). The individual line-toline barium abundance scatter was always significantly smaller than 0.1 dex.
We find that barium lines generally appear stronger in NLTE than in LTE, which leads to lower NLTE barium abundances. This is in accord with the results obtained by Short & Hauschildt (2006) for the metallicity of NGC 6752, and similar to the trends obtained for cool dwarfs by Mashonkina et al. (1999). The NLTE-LTE abundance corrections for the three individual barium lines are always very similar, with the differences being within a few hundredths of a dex.
3D-1D barium abundance corrections
We have used the CO 5 BOLD 3D hydrodynamical (Freytag et al. 2012) and LHD 1D hydrostatic (Caffau & Ludwig 2007) stellar atmosphere models to investigate how strongly the formation of barium lines may be affected by convective motions in the stellar atmosphere. The CO 5 BOLD code solves the 3D equations of radiation hydrodynamics under the assumption of LTE. The model assumes a cartesian coordinate grid. For a detailed description of the CO 5 BOLD code and its applications, we refer to Freytag et al. (2012).
Since we did not have CO 5 BOLD models available for the entire atmospheric parameter range covered by the red giants in NGC 6752, we estimated the importance of 3D hydrodynamical effects only for stars on the lower RGB. For this purpose, we used a set of 3D hydrodynamical CO 5 BOLD models with T eff = 5000 K and log g = 2.5, at four different metallicities, [M/H] = 0.0, -1.0, -2.0, and -3.0. Allowing for error margins of ∼ 100 K in the effective temperature and ∼ 0.25 dex in gravity, we assumed that the effective temperature and gravity of this model set are representative of the atmospheric parameters of the stars NGC 6752-08 and NGC 6752-19 to NGC 6752-30 (8 objects, see Table 1). For these stars, the extreme deviations from the parameters of the 3D model are ∆T eff ∼ 110 K and ∆ log g ∼ 0.26. These differences would only have a marginal effect on the uncertainty in the abundance estimates, i.e., the systematic uncertainty for the 3D barium abundance derivations would only increase from the ±0.08 dex quoted in Sect. 3.2 to ±0.10 dex.
The 3D hydrodynamical models were taken from the CIFIST 3D model atmosphere grid (Ludwig et al. 2009). To illustrate the differences between the 3D hydrodynamical and 1D classical stellar model atmospheres, we show their temperature stratifications at the metallicity of [M/H] = −2.0, which is the closest to that of NGC 6752 (Fig. 3, upper panel). In the same figure, we also indicate the typical formation depths of the three barium lines. It is obvious that at these depths the temperature of the 3D hydrodynamical model fluctuates very strongly, especially in the outer atmosphere, as indicated by the RMS value of horizontal temperature fluctuations, ∆T_RMS = ⟨(T − T_0)²⟩^{1/2}_{x,y,t}, where T_0 is the temporal and horizontal temperature average obtained on surfaces of equal optical depth. As we show below, differences in the atmospheric structures lead to differences in the line formation properties and hence to differences in the barium abundances obtained with the 3D hydrodynamical and 1D classical model atmospheres.
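A minimal sketch of the ∆T_RMS diagnostic defined above, computed on placeholder data (the array shapes and temperature values are assumptions for illustration):

```python
import numpy as np

# T has shape (n_snapshots, n_tau, nx, ny): temperature interpolated onto
# surfaces of equal optical depth for each 3D snapshot (placeholder data).
rng = np.random.default_rng(0)
n_snapshots, n_tau, nx, ny = 5, 40, 60, 60
T = 5000.0 + 300.0 * rng.standard_normal((n_snapshots, n_tau, nx, ny))

# T0: temporal and horizontal average on each iso-tau surface.
T0 = T.mean(axis=(0, 2, 3))                                   # shape (n_tau,)

# RMS of the fluctuations about T0, averaged over x, y and time.
dT_rms = np.sqrt(((T - T0[None, :, None, None]) ** 2).mean(axis=(0, 2, 3)))

print(dT_rms.shape)   # one RMS value per optical-depth layer
```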
Twenty 3D snapshots (i.e., 3D model structures at different instants in time) were selected to calculate the Ba II line profiles. The snapshots were chosen in such a way that the statistical properties of the snapshot sample (average effective temperature and its r.m.s value, mean velocity at the optical depth unity, etc.) would match as close as possible those of the entire ensemble of the 3D model run. The 3D line spectral synthesis was performed for each individual snapshot and the resulting line profiles were averaged to yield the final 3D spectral line profile.
The influence of convection on the spectral line formation was estimated by means of 3D-1D abundance corrections. The 3D-1D abundance correction is defined as the difference between the abundance A(Y) derived for a given element Y from the same observed spectral line using the 3D hydrodynamical and classical 1D model atmospheres, i.e., ∆_3D−1D = A(Y)_3D − A(Y)_1D (Caffau et al. 2011). This abundance correction can be separated into two constituents: (a) the correction owing to the horizontal temperature inhomogeneities in the 3D model, ∆_3D−⟨3D⟩, and (b) the correction owing to the differences between the temperature profiles of the average ⟨3D⟩ and 1D models, ∆_⟨3D⟩−1D. Abundances corresponding to the subscript ⟨3D⟩ were derived using the average ⟨3D⟩ models, which were obtained by horizontally averaging 3D model snapshots on surfaces of equal optical depth. Spectral line profiles were calculated for each average ⟨3D⟩ structure corresponding to an individual 3D model snapshot. These line profiles were averaged to yield the final ⟨3D⟩ profile, which was used to derive the ∆_⟨3D⟩−1D abundance corrections. The full abundance correction was then ∆_3D−1D ≡ ∆_3D−⟨3D⟩ + ∆_⟨3D⟩−1D. Spectral line synthesis for all three types of models, i.e., 3D, ⟨3D⟩, and 1D, was performed with the Linfor3D code.
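A minimal sketch of this bookkeeping; the abundance values used below are illustrative placeholders, not results from the paper:

```python
# Decomposition of the 3D-1D correction into its two constituents.
def abundance_corrections(a_3d, a_avg3d, a_1d):
    """Return (full, granulation, profile) corrections in dex.

    a_3d     -- abundance from the full 3D hydrodynamical model
    a_avg3d  -- abundance from the horizontally averaged <3D> model
    a_1d     -- abundance from the classical 1D model
    """
    d_3d_avg3d = a_3d - a_avg3d      # effect of horizontal inhomogeneities
    d_avg3d_1d = a_avg3d - a_1d      # effect of differing mean T structure
    d_3d_1d = d_3d_avg3d + d_avg3d_1d
    return d_3d_1d, d_3d_avg3d, d_avg3d_1d

# Example with placeholder numbers: two sizeable terms of opposite sign
# nearly cancel, leaving a small net correction.
full, granulation, profile = abundance_corrections(0.95, 1.05, 0.93)
print(granulation, profile, full)   # approximately -0.10, +0.12, +0.02
```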
The barium lines in the target stars are strong (cf. Table 3) and thus the derived 3D-1D abundance corrections are sensitive to the microturbulence velocity, ξ t , of the comparison 1D model. The 3D-1D abundance corrections were therefore calculated using the equivalent widths and microturbulence velocities of the target stars derived in Sect. 3.1 and 3.2. Furthermore, cubic interpolation between the 3D-1D abundance corrections derived at the four different metallicities was used to obtain their values at the metallicity of the cluster, [Fe/H] = −1.6. Cubic interpolation between the four metallicity values was chosen because of the nonlinear dependence of the 3D-1D abundance corrections on metallicity. The results are provided in Table 5, which contains the ∆_3D−1D and ∆_3D−⟨3D⟩ abundance corrections for the three individual Ba II lines (columns 2-4), the 3D-1D abundance correction for each star (i.e., averaged over the three barium lines, column 5), the microturbulence velocity used with the 1D comparison model (column 6, from Sect. 3.1), the 3D LTE barium abundances (column 7), the 3D LTE barium-to-iron ratio (column 8), and finally both the 1D NLTE barium-to-iron ratio before (column 9, from Sect. 3.3) and after correction for the 3D effects (column 10).
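A minimal sketch of the metallicity interpolation step; the correction values at the four model metallicities below are assumed placeholders:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder Delta_3D-1D corrections at the four CO5BOLD model metallicities.
metallicities = np.array([-3.0, -2.0, -1.0, 0.0])
corrections   = np.array([0.12, 0.06, 0.01, -0.02])   # dex (illustrative)

spline = CubicSpline(metallicities, corrections)
print(f"Delta_3D-1D at [Fe/H] = -1.6: {spline(-1.6):+.3f} dex")
```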
Abundance corrections are sensitive to the choice of the 1D microturbulence velocity and line strength; therefore, stars with very similar atmospheric parameters may have different abundance corrections. This is, for example, the case for NGC 6752-19 and NGC 6752-30. These two stars have the largest and smallest microturbulence velocities in the entire sample, respectively, and NGC 6752-19 has slightly stronger barium lines than NGC 6752-30 (Table 3). This leads to noticeably different abundance corrections, despite both stars having very similar effective temperatures and gravities (Table 5).

The difference between the barium abundances derived by Yong et al. (2005) and those obtained here is somewhat concerning, especially since both studies were based on the same set of UVES spectra, while the atmospheric parameters and iron abundances of individual stars employed by us and Yong et al. (2005) agree very well (Sect. 3.1). Moreover, the comparison of the equivalent width measurements obtained by us and Yong et al. (2005) also shows good agreement. One would thus also expect good agreement in the derived barium abundances, which is unfortunately not the case. We therefore felt it was important to look into the possible causes of this discrepancy.
To this end, we first obtained the 1D LTE barium abundances using the MULTI code. This independent abundance estimate was made using the same procedure as for the 1D NLTE abundance derivations, i.e., by fitting the observed and synthetic line profiles of the three Ba II lines, with the difference that in this case the line profile calculations performed with MULTI were done under the assumption of LTE. The mean barium-to-iron abundance ratio obtained in this way, [Ba/Fe] = 0.22 ± 0.06 ± 0.08, agrees well with the value derived in Section 3.2 ( [Ba/Fe] = 0.24 ± 0.05 ± 0.08).
In their abundance determinations, Yong et al. (2005) used an older version of the ATLAS models (Kurucz 1993). The differences between those ATLAS models and the ones used in our work are that (a) different opacity tables were used in the two cases (i.e., ODFNEW from Castelli & Kurucz 2003 with our models), and (b) the ATLAS models of Kurucz (1993) were calculated with the overshooting parameter switched on, while in our case the overshooting was switched off. To check the influence of these differences on the abundance derivations, we obtained the 1D LTE barium abundance using the older atmosphere models of Kurucz (1993), with the atmospheric parameters and iron abundances derived in Sect. 3.1. In this case, the mean derived barium-to-iron abundance ratio was [Ba/Fe] 1D LTE = 0.23 ± 0.05 ± 0.08, i.e., the effect of differences in the model atmospheres was only ∼ 0.01 dex. The change in the barium abundances owing to differences in the atomic parameters (line broadening constants, oscillator strengths) used in the two studies was more significant, i.e., the abundances derived by us using the atomic parameters of Yong et al. (2005) were ∼ 0.1 dex lower. However, this still leaves a rather large discrepancy, ∼ 0.15 dex, between the barium-to-iron ratios obtained by us and Yong et al. (2005), for which we unfortunately cannot find a plausible explanation.
As in the case of the 1D LTE abundances, the extent of the star-to-star variations in the derived 1D NLTE barium-to-iron ratio, [Ba/Fe] 1D NLTE = 0.05 ± 0.06 ± 0.08, is small and can be fully explained by the uncertainties in the abundance determination. The 1D NLTE barium-to-iron ratio derived here is similar to the value [Ba/Fe] 1D NLTE = 0.09 ± 0.20 obtained for two red giants in M10 by Mishenina et al. (2009). The elemental ratios obtained in the two studies are thus very similar, although one should keep in mind that the estimate of Mishenina et al. (2009) is based on only two stars. The metallicities of the two clusters are very similar too, [Fe/H] = −1.56 in the case of M10 (Harris 1996, 2010) and [Fe/H] = −1.60 for NGC 6752 (Sect. 3.1). Galactic field stars typically show no pronounced dependence of [Ba/Fe] on metallicity, although the scatter at any given metallicity is large (Sneden et al. 2008). One may therefore conclude that, taking into account the large [Ba/Fe] spread in field stars, the [Ba/Fe] ratio derived here is comparable to those seen in Galactic field stars and other globular clusters of similar metallicity.
The 3D-corrected 1D NLTE barium abundance in NGC 6752
The 3D-1D barium abundance corrections obtained for the eight stars in NGC 6752 (see Section 3.4 above) provide a hint of the net extent to which the 3D hydrodynamical effects may influence spectral line formation (and thus the abundance determinations) in their atmospheres (Table 5). In the case of all red giants investigated, the corrections are small, -0.03 to +0.15 dex, and the mean abundance correction for the eight stars is ∆_3D−1D = 0.05. We note, though, that the individual contributions to the abundance correction, ∆_3D−⟨3D⟩ and ∆_⟨3D⟩−1D, are substantial (∼ ±0.1 dex) but, because they are often of opposite sign, nearly cancel, so that the resulting abundance correction is significantly smaller (Table 5). This clearly indicates that the role of convection-related effects in the spectral line formation in these red giants cannot be neglected, even if the final 3D-1D abundance correction, ∆_3D−1D, is seemingly very small. The mean 3D LTE barium-to-iron abundance ratio obtained for the eight red giants is [Ba/Fe] 3D LTE = 0.28 ± 0.07 ± 0.10. The 3D LTE barium abundance measurements made for a given star from the three barium lines always agree to within ≈ 0.03 dex. In the case of all twenty giants studied here, the mean 1D NLTE barium-to-iron ratio corrected for the 3D-related effects is [Ba/Fe] 3D+NLTE = 0.10 ± 0.08 ± 0.10 and is therefore only slightly different from the 1D NLTE value obtained in Sect. 3.3. However, the positive sign of the 3D-1D abundance differences indicates that in the spectra of red giants in NGC 6752 the three studied Ba II lines will be weaker in 3D than in 1D, in contrast to what is generally seen in red giants at this metallicity (cf. Collet et al. 2007; Dobrovolskas et al. 2010).
For the Ba II lines, the 3D-1D abundance corrections are sensitive to the choice of microturbulence velocities in the 1D models: an increase in the microturbulence velocity by 0.10 km s −1 leads to an increase of 0.07 dex in the 3D-1D abundance correction. At the same time, the 1D abundance itself decreases by roughly the same amount. The result is that although the 3D correction is sensitive to microturbulence, the 3D corrected abundance is not.
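A minimal sketch of this cancellation, using illustrative numbers (the baseline abundance and correction below are placeholders; only the ±0.07 dex response per 0.10 km s⁻¹ is taken from the text):

```python
# Increasing the 1D microturbulence lowers the 1D abundance but raises the
# 3D-1D correction by roughly the same amount, so the 3D-corrected value
# A_3D = A_1D + Delta_3D-1D is nearly unchanged.
a_1d, delta_3d_1d = 0.80, 0.05          # baseline (placeholder values)
for d_xi in (0.0, 0.1, 0.2):            # change in microturbulence, km/s
    a_1d_new = a_1d - 0.07 * (d_xi / 0.1)
    delta_new = delta_3d_1d + 0.07 * (d_xi / 0.1)
    print(f"d_xi={d_xi:.1f} km/s  A_1D={a_1d_new:.2f}  "
          f"Delta={delta_new:+.2f}  A_3D={a_1d_new + delta_new:.2f}")
```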
Conclusions
We have derived the 1D LTE and 1D NLTE abundances of barium for 20 red giant stars in the globular cluster NGC 6752. The mean barium-to-iron abundance ratios are [Ba/Fe] 1D LTE = 0.24 ± 0.05 ± 0.08 and [Ba/Fe] 1D NLTE = 0.05 ± 0.06 ± 0.08 (the first error measures the star-to-star variation in the abundance ratio and the second is the systematic uncertainty in the atmospheric parameter determination, see Sect. 3.1). Individual barium-to-iron abundance ratios show little star-to-star variation, which leads us to conclude that there is no intrinsic barium abundance spread in the RGB stars at or slightly below the RGB bump in NGC 6752. This conclusion is in line with the results obtained in other studies, for stars in both this and other GGCs (Norris & Da Costa 1995;James et al. 2004;Yong et al. 2005).
The derived 1D NLTE barium-to-iron abundance ratio is comparable to the one observed in Galactic halo stars of the same metallicity (Sneden et al. 2008). It is also similar to the mean barium-to-iron abundance ratio obtained by Mishenina et al. (2009) for 2 red giants in the Galactic globular cluster M10. We therefore conclude that the barium-to-iron abundance ratios obtained here generally agree with those seen in the oldest Galactic populations and are not very different from those observed in halo stars.
We have also obtained 3D LTE barium abundances for 8 red giants on the lower RGB in NGC 6752. The mean 3D LTE barium abundance, [Ba/Fe] 3D LTE = 0.28 ± 0.07 ± 0.10, is only 0.05 dex higher than that obtained for these stars in 1D LTE. This small 3D-1D correction leads to very minor adjustment of the mean 1D NLTE barium-to-iron ratio for the 20 investigated giants, [Ba/Fe] 3D+NLTE = 0.10 ± 0.08 ± 0.10.
It would be misleading, however, to conclude that the role of the 3D effects in the formation of the barium lines in the atmospheres of red giants in NGC 6752 is minor. As a matter of fact, we have found that the 3D-1D abundance corrections owing to horizontal temperature inhomogeneities in the 3D model (i.e., ∆ 3D− 3D correction) and differences in the temperature profiles between the average 3D and 1D models (∆ 3D −1D correction) are substantial and may reach ∼ ±0.1 dex (Table 5). However, their sign depends on the line strength, and owing to this subtle fine-tuning their sum is significantly smaller, from -0.03 to 0.02 dex, which for this given set of atmospheric and atomic line parameters maintains the size of the 3D-1D abundance corrections at the level of the errors in the abundance determination. | v2 |
2021-03-30T05:11:25.594Z | 2021-03-01T00:00:00.000Z | 232405269 | s2orc/train | Consumer-Led Adaptation of the EsSense Profile® for Herbal Infusions
This work aimed to adapt the EsSense Profile® emotion list to the discrimination of herbal infusions and to evaluate the effect of harvesting conditions on the emotional profile. A panel of 100 consumers evaluated eight organic infusions: lemon verbena, peppermint, lemon thyme, lemongrass, chamomile, lemon balm, globe amaranth and tutsan, using a check-all-that-apply (CATA) ballot with the original EsSense Profile®. A set of criteria was applied to obtain a discriminating list. First, the terms with low discriminant power and with a frequency of mention below 35% were removed. Two focus groups were also performed to evaluate the applicability of the questionnaire. The content analysis of the focus groups suggested the removal of the terms good and pleasant, recognized as sensory attributes. Six additional terms were removed, considered too similar to other existing emotion terms. The resulting list of 24 emotion terms for the evaluation of the selected herbal infusions was able to discriminate samples beyond overall liking. When comparing finer differences between plants harvested under different conditions, differences were identified for lemon verbena infusions, with the mechanical cut of plant tips leading to the most appealing evoked-emotion profile.
Introduction
Although the term "tea" (chá in Portuguese) refers to infusions made from leaves of Camellia sinensis (L.) Kuntze, in Portuguese colloquial language, it also refers to the wide variety of infusions prepared from dried aromatic plants or parts of plants, such as roots, root-stocks, shoots, leaves, flowers, barks, fruits or seeds other than the leaves of C. sinensis. The popularity of these herbal and fruit beverages prepared as infusions reflects the increasing consumer appreciation for the wide range of natural and refreshing tastes and other sensory properties they offer [1].
This reinforces their general social and/or recreational value [2], as well as their beneficial health properties: they are rich in polyphenols and other functional constituents that possess relatively high antioxidant activities [3]. In addition to water and tea, herbal infusion beverages help to ensure proper hydration, which is essential for maintaining the body's water balance, and they are part of the Mediterranean diet [4]. They also contribute to a well-balanced diet as they contain no sugar and have almost no calories [5].
Moreover, the marketing of herbal infusions has become increasingly sophisticated and diverse. In Europe, more than 400 different plant parts are used as single or blended ingredients for the preparation of herbal and fruit infusions. Since 2010, the volume of herbal infusion sales has grown in most European countries, by almost 17% [6]. In terms of total volume, Germany is by far the largest market, with sales of herbal infusions amounting to 39,455 tonnes in 2016, contrasting with 19,220 tonnes of tea sold.
Experimental Design
To achieve the proposed research goals, two different phases were carried out, with a three-month interval between them, following the Jiang, King and Prinyawiwatkul [9] approach for the emotion lexicon process. In the first phase, the Portuguese version [35] of the 39-item EsSense Profile ® was used as a starting point for the definition of the emotion term list, aiming for a complete description of herbal infusions. Consumers were asked to evaluate samples of eight different herbal infusions using a CATA ballot with the full EsSense Profile ® . Different types of loose-leaf herbal infusions were evaluated to achieve a representative sample of this product category, with a wide range of sensory properties: a vast array of colours, aromas and tastes. After this, two focus groups were performed to assess the applicability of the questionnaire to discriminate herbal infusions with different harvesting conditions. In the second phase, to assess the impact of harvesting conditions (the type of cut and part of the plant) on the emotional profile, a large panel of consumers evaluated the infusions using the adapted emotion list.

The samples used in this research were provided by a Portuguese producer of aromatic plants. The production of this farmer is solely based on organic farming, certified in 2005 by Ecocert Portugal. The samples were dried according to the producer's regular commercial procedure (dried in a professional dryer, at an optimal temperature of 35-45 °C, for 72 h) and stored in hermetic bags before being processed and analysed. All infusions were prepared using 4.5 g of whole dried leaves picked from plants and infused in 1.5 L of natural mineral water (Continente, Portugal), following the procedure developed by Cardoso [36].
For the preparation of infusions, different steeping times and temperatures were used (Table 1), following two criteria: (i) commercial samples of lemon balm, chamomile, globe amaranth, and tutsan were prepared following producer instructions; (ii) samples of lemon verbena, lemongrass, lemon thyme, and peppermint were prepared following the results of previous work, in which steeping time and temperature were optimized for the consumers' maximum liking [37,38]. When the selected times were reached, leaves were removed by taking the strainer out of the teapot. The resulting infusions were left to cool down to 65 °C and then placed in thermally insulated flasks until serving. All samples were served in white porcelain teacups (approximately 100 mL) coded with three-digit random codes, according to the infused plant, in individual booths, under normal white lighting.
Tasters were provided with a porcelain spittoon, a glass of bottled natural water and unsalted crackers (Continente, Portugal), for palate rinse between samples. To guarantee a full appreciation of the herbal infusions, participants were allowed to add castor sugar cubes (2.5 g) and instructed to use the same amount for all samples under evaluation, according to their regular consumption habits.
Sensory Panel
A non-trained panel of 100 naïve tasters (60% female; 40% male; 35.4 ± 11.1 years old) was recruited for the EsSense Profile ® validation, all of them regular consumers of herbal infusions (at least once a week). Panellists were recruited from Sense Test's consumer database, selected from among residents of the Oporto metropolitan area, North of Portugal, and received a small financial compensation for their participation. The company Sense Test ensures the protection and confidentiality of data through registration with the National Data Protection Commission and its internal code of conduct. Moreover, informed consent was obtained, and participants were free to quit the evaluation at any time.
Ballot and Questionnaire Format
Paper ballots included two questions: overall liking and elicited emotions evaluation. For each of the eight infusions, consumers were asked to score their overall liking, using a 9-point hedonic scale ranging from 1-"dislike extremely" to 9-"like extremely", before rating their emotional responses using the 39-term original EsSense Profile ® lexicon, as a measure to avoid potential hedonic score bias [24]. Emotion terms were selected immediately after the scoring of liking, to evaluate short-term emotional responses [30]. A CATA response format was chosen, with forced "Yes/No" questions, with the intent of inducing a higher attention level while keeping the advantages of a faster and simpler task [39]. Emotion terms were presented in alphabetical order for one half of the panel and in the inverse order for the other half, according to previous authors [8,16], to avoid an effect of the list presentation order.
All samples were presented monadically, in one test session and following a balanced order of presentation, according to a Latin square design, to counterbalance carryover effects [40].
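A minimal sketch of one way to generate such balanced presentation orders; a Williams-type Latin square is assumed here (the study only states that a Latin square design was used), and the construction below works for an even number of samples:

```python
# Balanced (Williams) Latin square for counterbalancing first-order
# carryover effects across serving positions.
def williams_square(n):
    first_row, low, high = [0], 1, n - 1
    while len(first_row) < n:
        first_row.append(low)
        low += 1
        if len(first_row) < n:
            first_row.append(high)
            high -= 1
    return [[(item + shift) % n for item in first_row] for shift in range(n)]

samples = ["lemon verbena", "peppermint", "lemon thyme", "lemongrass",
           "chamomile", "lemon balm", "globe amaranth", "tutsan"]
for order in williams_square(len(samples)):
    print([samples[i] for i in order])   # one serving order per group of tasters
```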
Focus Groups
Consumers from the initial panel were recruited for the focus groups. Considering that the researchers wanted at least 15% of the initial panel to participate, two focus groups were performed, with the aim of having no more than eight participants per group: focus group 1, n = 8 (75% female; 25% male; 36.9 ± 5.4 years old); and focus group 2, n = 7 (86% female; 14% male; 33.6 ± 4.5 years old) [41]. Both focus group discussions took place in Sense Test's focus group room, in the Portuguese language, and had a duration of approximately 60 min. Both were conducted by the same moderator, the first author, to ensure consistency in interviewing style [42]. The moderator was assisted by other co-authors in dealing with video recording. After an initial icebreaker introduction, participants were invited to taste all eight herbal infusions and to mark on a tabular ballot (with all 39 emotions of the EsSense Profile ® questionnaire on the rows and all eight herbal infusions on the columns) all the emotions that were evoked by the different herbal infusions. Then, they were also invited to consider and pay closer attention to the emotion term list from the EsSense Profile ® . Subsequently, the focus group discussion began, and the moderator guided the discussion considering the most relevant topics for the purpose of this research: (i) the capacity of the terms to describe herbal infusions; (ii) the presence of redundant terms; (iii) the evaluation of terms that should be grouped or removed [13,28].
The focus group sessions were video-recorded for accuracy of transcription and analysis, following participants' informed consent, and the recordings were anonymously transcribed verbatim.
Data Analysis
Simple descriptive statistics of overall liking data were performed and two nonparametric tests, the Friedman and Wilcoxon [43] tests, were applied. All statistical tests were applied at a 95% confidence level (p < 0.05).
For each emotional term, a Cochran's test was applied to identify its ability to discriminate the samples.
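A minimal sketch of these two checks on placeholder data (scipy provides the Friedman test; Cochran's Q is implemented directly here, and the response arrays are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import chi2, friedmanchisquare

# 'liking' holds 9-point hedonic scores (consumers x samples); 'cata' holds
# the binary CATA responses for one emotion term (consumers x samples).
rng = np.random.default_rng(1)
liking = rng.integers(5, 10, size=(100, 8))
cata = rng.integers(0, 2, size=(100, 8))

# Friedman test on the related liking scores (one column per sample).
stat, p = friedmanchisquare(*liking.T)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.3f}")

def cochrans_q(x):
    """Cochran's Q test for k related binary samples (rows = consumers)."""
    k = x.shape[1]
    col_sums = x.sum(axis=0)
    row_sums = x.sum(axis=1)   # rows that are all 0 or all 1 add no information
    q = (k - 1) * (k * (col_sums**2).sum() - row_sums.sum()**2) \
        / (k * row_sums.sum() - (row_sums**2).sum())
    return q, chi2.sf(q, df=k - 1)

q, p_q = cochrans_q(cata)
print(f"Cochran's Q = {q:.2f}, p = {p_q:.3f}")
```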
The focus group transcriptions were analysed and themes were developed by the researchers, based upon the core themes of the focus group guide, considering similarities and differences of participants' responses [44]. To illustrate the analysis, direct quotes by the participants were transcribed, serving as a description of the topic explored. The quotes used in this text were translated into English. A new list of terms for the emotional profile of the eight infusions was determined crossing focus group analysis and the results generated by the previous task (CATA).
Data analysis was performed using the XL-STAT 2020 ® software [45].

The impact of the harvesting conditions on the elicited emotional profile associated with four different infusions was evaluated: lemon verbena (Aloysia triphylla), lemongrass (Cymbopogon citratus), lemon thyme (Thymus x citriodorus) and peppermint (Mentha x piperita). Leaves from the different plants were collected from the same farm as the samples used in Experiment I. However, these were harvested following a 2^k factorial plan with two factors at two levels each: the type of cut (manual or mechanical) and the part of the plant (tips or 2nd-half leaves, referred to as 2nd half). All plants came from the same plantation lot and were harvested between the spring and summer months, totalling four different batches for each plant.
All the samples were prepared following the steeping time and temperatures described in Table 1.
Sensory Evaluation
Using the previously established ballot, 300 naïve tasters (61% female; 39% male; 37.4 ± 12.4 years old) evaluated one of the four types of infusion, with each taster evaluating only samples from one plant. This option aims to avoid systematic interaction between the tasters and the emotions presented in the list, minimizing rational use of the list. These participants were selected based on their regular consumption of loose-leaf herbal infusions and were recruited from the same database as for Experiment I. Consumers were invited to score their overall liking for each sample, and then their emotional profile, following the new emotion-list questionnaire previously built to evaluate infusions. The procedure for sample presentation and the format of the CATA ballot were the same as described in Section 2.2.3.
Data Analysis
To analyse CATA questions, initially, a Chi-square test was used to identify significant differences perceived by consumers between samples for each of the terms [47]. After checking the statistical relation between emotions and samples, the frequency of use of each term was determined, by counting the number of consumers who have used each attribute to describe the samples.
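A minimal sketch of this step on placeholder CATA data (the sample and term names below are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Binary CATA responses (consumers x samples) for each emotion term.
rng = np.random.default_rng(2)
samples = ["2ndHalf-Manual", "2ndHalf-Mechanical", "Tips-Manual", "Tips-Mechanical"]
terms = ["calm", "energetic", "glad", "joyful"]
responses = {t: rng.integers(0, 2, size=(300, len(samples))) for t in terms}

# Frequency-of-mention table (consumers citing each term, per sample).
counts = pd.DataFrame({t: responses[t].sum(axis=0) for t in terms}, index=samples)
print(counts)

# Chi-square test of independence between samples and citation of each term.
for t in terms:
    cited = responses[t].sum(axis=0)
    not_cited = responses[t].shape[0] - cited
    chi2_stat, p, dof, _ = chi2_contingency(np.vstack([cited, not_cited]))
    print(f"{t}: chi2 = {chi2_stat:.2f}, p = {p:.3f}")
```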
To obtain a two-dimensional representation of the samples, a correspondence analysis (CA) was applied to the previously determined contingency table. This analysis provides a sensory map of the samples, allowing the determination of similarities and differences between samples as well as the attributes that characterize them [48]. A multidimensional alignment (MDA) was applied to assess the degree of association between products and attributes on the perceptual map [49].

Pre-Selection of Terms from the EsSense Profile ®

Table 2 shows the mean results of overall liking for the eight samples evaluated. It is possible to observe a high level of liking for all the samples; the least liked were the tutsan, chamomile and globe amaranth infusions. The EsSense Profile ® consumer test results (n = 100) revealed that consumers did not differentiate samples regarding the following emotion terms: nostalgic (p = 0.102, average citation of 32%), wild (p = 0.085, average citation of 9%) and guilty (p = 0.978, average citation of 5%) (Figure 1). These terms were removed because they did not contribute to the differentiation of herbal infusions. Different authors have discussed removing terms only if quoted by less than 50% of the participants [20], or removing terms with less than 20% citation in a checklist questionnaire [16]. In this study, using a forced-choice (yes/no) CATA ballot, the authors decided to apply an intermediate value (minimum citation of 35%); therefore, the terms aggressive (citation range 3-15%), bored (citation range 7-24%), disgusted (citation range 1-20%) and worried (citation range 6-27%), although discriminating between samples, were also removed.
Additionally, according to the focus groups' content analysis, different emotional terms were suggested for removal from the original list. This was done because consumers considered the terms good (bem) and pleasant (agradável) as sensory/hedonic attributes (e.g., "I interpret that the pleasant is whether the infusion is pleasant or not, and pleased means that if I feel pleased or not with the infusion", G2P3).
A few emotion terms were considered as very similar to other emotion terms presented in the EsSense Profile ® questionnaire, and therefore removed [9], to obtain a simpler ballot (retained term signalled in bold): happy (feliz) and glad (contente) (e.g., "I think that glad and happy is also very similar", G2P5); steady (firme) and secure (seguro) (e.g., " . . . and for example secure and steady I think the two terms turn out to be the same thing", G1P1); mild (meigo), tame (dócil) and tender (terno) (e.g., "are very similar, and make the list very long", G1P8 and G1P4); warm (caloroso) and affectionate (carinhoso) (e.g., " . . . and I would do the same with the affectionate and warm", G1P8); whole (completo) and satisfied (satisfeito), (e.g., "For example satisfied with complete because I think they are very identical", G1P4).
Using all the previous information, the authors have compiled an emotion list with 24 terms for the discrimination of loose-leaf herbal infusions (see Table 3).

Table 4 shows the mean values of overall liking for each infusion and each treatment (plant part × type of cut) and the aggregate liking for each infusion. From the results, one can observe that all samples have an average value of overall liking higher than 7, with no significant differences between the type of cut (manual and mechanical) and the plant part (tips and 2nd half). From the comparison of the aggregate data for each herbal infusion, one can see that the average values are close to each other.
Herbal Infusions Comparison
On the emotional profile analyses, the authors started with an overview, analysing first the aggregate data (differences between herbal infusions) and then the individual treatment data. Figure 2 shows the configurations of samples and elicited emotion terms in the first and second dimensions of the correspondence analysis applied to the CATA counts for the four herbal infusions: lemongrass, lemon thyme, lemon verbena and peppermint. This configuration explains 90.7% of the total variance of the experimental data. From the analysis of Figure 2, one can perceive that different types of herbal infusions evoke different emotions. Lemongrass samples evoked the emotions joyful, quiet, active, loving and understanding, which are related to affection, liveliness and understanding, and also strongly evoked daring, which is related to energy and adventure. Lemon thyme samples evoked the emotions polite, affectionate, secure, tender and good-natured, related to security, affection and respect. Lemon verbena and peppermint samples evoked the emotions calm, peaceful, friendly and relaxed, related to peace and friendship. Peppermint also evoked the emotions daring and energetic, which are related to happiness, satisfaction, energy and adventure.
[Table 3. Kept and removed emotion terms, EN (PT); the (a) at the end of some Portuguese terms represents the masculine/feminine gender-specific variation (e.g., Ativo(a) reads as Ativo/Ativa). Table 4 notes: overall liking rated between 1-"dislike extremely" and 9-"like extremely"; no significant difference between treatments for each herbal infusion, according to the Friedman test, at a 95% confidence level.]

Figure 3 shows the results of the cosines of the angles between the herbal infusions and the significant food-elicited emotion terms, resulting from the MDA analysis [49]. This analysis depicts in a more detailed way the differences between samples, namely through their correlation with the different elicited emotion terms.
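A minimal sketch of how such a product–attribute map and the associated angles can be computed from a term-by-sample count table; the counts, and the use of cosines between CA principal coordinates as a stand-in for the MDA procedure of [49], are assumptions for illustration:

```python
import numpy as np

# Illustrative term-by-sample CATA count table (rows: emotion terms,
# columns: herbal infusions).
counts = np.array([[40, 25, 18, 30],
                   [22, 45, 35, 20],
                   [15, 30, 50, 28],
                   [33, 20, 26, 44]], dtype=float)

P = counts / counts.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of rows (terms) and columns (samples) in 2D.
term_coords = (U * sv)[:, :2] / np.sqrt(r)[:, None]
sample_coords = (Vt.T * sv)[:, :2] / np.sqrt(c)[:, None]

# Cosine of the angle between each sample vector and each term vector:
# values near +1 (-1) indicate a strong positive (negative) association.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for j in range(sample_coords.shape[0]):
    cosines = [cosine(sample_coords[j], term_coords[i])
               for i in range(term_coords.shape[0])]
    print(j, np.round(cosines, 2))
```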
From Figure 3, one can conclude that the four samples differ in the evoked emotions. The lemongrass infusions are strongly related to the understanding, joyful, quiet and loving emotion terms and negatively related to energetic, friendly, peaceful and calm. For both the peppermint and the lemon verbena herbal infusions, one may observe further insights into the elicited emotions. The peppermint infusions are strongly positively related to energetic, daring and calm and negatively related to understanding, secure, quiet, tender and polite. The lemon verbena infusions, in turn, are only positively related to the friendly and peaceful emotion terms and negatively related to good-natured, loving and affectionate. The lemon thyme infusions are strongly related to the secure and good-natured emotion terms and negatively related to active and daring.
A closer look into the impact of the type of cut and plant part of each herbal infusion on the elicited emotional profile reveals differences between treatments of the lemon verbena infusion. Figure 4 shows the only four emotion terms that yielded significant differences when describing the emotion-related profile of the samples: glad, joyful, adventurous and energetic. The 2nd Half and the Tips manually harvested products were strongly related to the glad emotion and negatively related to adventurous and energetic, while for the tips from the mechanical harvest the opposite occurred. For the remaining herbal infusions (lemongrass, peppermint and lemon thyme), no significant differences in the elicited emotional profile between treatments, within each herbal infusion, were found.

Comparison of the Samples Emotional Profile from the Original and the Adapted List

Figure 5 compares the emotional profile of the samples from the different treatments (type of cut × plant part) with the emotional profile of the corresponding sample from Experiment I. Only the emotion terms common to both lists are presented. Generally, there is a similar characterization of the four treatment samples, with few significant differences between them. When comparing with the equivalent sample from Experiment I, one can observe that for lemongrass and lemon verbena the emotional profiles are similar, while for peppermint and lemon thyme there are some minor changes. For peppermint, some emotion terms such as quiet, peaceful, calm, affectionate and good-natured received a higher frequency of mention when the treatment samples were evaluated, while other terms like joyful, energetic and active were mentioned more frequently in Experiment I. For lemon thyme, a similar behaviour was observed: tender, quiet, pleased, peaceful, good-natured, calm and affectionate were more elicited in the treatment samples' evaluation, while energetic and active were more elicited in the Experiment I evaluation. These differences can be explained by the samples' nature and also by the fact that the samples' evaluation in the two experiments was performed by different groups of consumers.
Discussion
One of the purposes of this research was to develop a shorter version of the EsSense Profile ® , applied to a new food category-herbal infusions, following an emotional consumer lexicon adaptation. This approach has the advantage of balancing the cost of time and resources when compared to the pre-determined lexicons. For this, authors have combined the consumer voice into a specific product development process, benefiting from the emotional list available from literature, as consumers may not be able to articulate all their emotions during the experiment [8].
In fact, during Experiment I, authors found, through a consumer test (n = 100), that some of the emotion terms from the 39-emotion list of the EsSense Profile ® were not relatable, nor did they contribute to the discrimination of the herbal infusion products. Other studies by Bhumiratana et al. [51], Chaya, Eaton, Hewson, Vázquez, Fernández-Ruiz, Smart and Hort [13], Silva, Jager, van Bommel, van Zyl, Voss, Hogg, Pintado and de Graaf [14], Talavera and Sasse [52] reinforce the fact that the focus group methodology may be useful to define the final list of emotions. In fact, in this research, during the focus group sessions, participants mentioned that the EsSense Profile ® list was too extensive for herbal infusion products. Moreover, this approach allowed for the exclusion of irrelevant terms, thus shortening the list and removing potential consumer confusion [26] while maintaining relevant terms. Indeed, when the emotion terms were translated into Portuguese, some of them were perceived as synonyms (e.g., tender and mild), representing a certain redundancy upon consumer evaluation, thus leading to some degree of consumer fatigue. As a result, this approach allowed for an increasing discriminative ability of the lexicon [8], even considering that in this experiment consumers were not allowed to create from the beginning an emotional lexicon in their own words and discuss them [8,13].
Indeed, Jiang, King and Prinyawiwatkul [9], summarized some of the common criteria for emotion lexicon development that are in line with the decisions applied in the present research for the definition of the emotion lexicon for herbal infusions, such as, terms should: (a) be discriminating (exclusion of nostalgic, wild and guilty), (b) have high usage frequency (aggressive, bored, disgusted and worried removed since the frequency of usage was very low), (c) belong to the domain of emotion or to have no misunderstanding and no vague meaning (pleasant and good removed because they misled as sensory attributes) (d) not be redundant (for the Portuguese language happy is similar to glad; steady to secure; mild to tame, warm to tender).
The final consumer-validated list to evaluate herbal infusions contains 24 emotional terms, a shorter list with more distinct words. Despite some concerns that this new list may yield a lower performance, with terms that do not guarantee the differentiation of the products, just through the effect of having a reduction in the number of list terms, results have proven the ability of the shorter list to still discriminate between different herbal infusions. This customized list consists mostly of positive emotions, which is in agreement with the results obtained in other studies on product emotions, considering that food experiences are mainly positive [53,54]. This could also be explained by the fact that our panellists were all consumers of herbal infusions, with a tendency to have a positive emotional profile within this product category [16,55]. This means that the presence of regular consumers ensures the likelihood of a positive effect being evoked, because, as previously referred, consumers use positive emotions when describing foods [10], particularly for those that are more familiar with the product category [55,56]. Moreover, the hedonic evaluation of the herbal teas yielded very positive average liking, supporting that there was no clear need to include additional negative terms as in the work by Kuesten et al. [57], where the PANAS questionnaire was used to evaluate the emotional response to aromas of phytonutrient supplements.
The second purpose of this study was to evaluate the impact of the harvest conditions (the type of cut and plant part) on the evoked emotional profile. Despite the identification of no significant differences in the overall liking of the samples from the different harvest conditions, researchers found differences in the treatments related to the emotional profile of lemon verbena infusions, which later helped with the definition of the premium lot. This is in line with the works on food-evoked emotions by King and Meiselman [16] and Ng, Chaya and Hort [8], showing that the measurement of overall liking is not a sufficient benchmark to predict product success. The premium lot combination chosen was the Tips-Mechanical sample, which is the one with a more differentiated and intense emotional profile. Moreover, as presented by Rocha et al. [58], this premium lot was also significantly differentiated from other commercial samples of lemon verbena, particularly by its positive correlation with emotion-related terms adventurous and energetic.
Conclusions
The adapted version of the EsSense Profile ® presented a good potential to discriminate herbal infusions. The dynamic nature of the EsSense Profile ® emotion list was validated, meaning that it is not a static list of emotions, but one that can be adapted to the product category under evaluation. The importance of the consumers' voice regarding the definition of the emotion list was emphasized, particularly regarding the meaning of the terms and the length of the list. Indeed, the emotional profiles evoked by the chosen herbal infusions gave an additional dimension to liking, in the sense that the herbal infusions evaluated in this research were equally liked, from a sensory point of view, but differed substantially in their emotional profile. It was shown that for different commercially available lemon verbena samples, yielding similar liking scores upon blind tasting, there were significant differences in the infusion-evoked emotional profile, with the premium lot being the one with stronger evoked emotions, such as adventurous and energetic. For this purpose, the consumers' voice was combined with the EsSense Profile ® to adapt the latter to the herbal infusions category. This adaptation to the specific product category was achieved by gathering terms from consumers' product-evoked emotions, elicited when they were thinking about or experiencing the food product.
Small differences in outcomes of the evaluation of the type of cut and plant part can be justified by the organic production method or by the high quality of the plant used for the preparation of this infusion. On the other hand, the fact that people are forced to rationalize food-related emotions may condition their answers.
These results give an in-depth knowledge about the consumers' emotional perception of herbal infusions. This information is useful for producers and markets, who may use this information to improve their communication strategies. | v2 |
2019-07-22T06:02:22.389Z | 2019-05-01T00:00:00.000Z | 197878319 | s2orc/train | Reducing coal consumption by people empowerment using local waste processing unit
Until the next following decades, energy mixed in Indonesia will be dominated by coal. Many studies assert that biomass can be used as coal substitution but it is not the case in the real world because the cost of biomass is still higher than the coal price. This study proposes the cheaper way of making biomass by using special method of local waste processing unit that has a patented name, TOSS. This kind of biomass, which was invented by STT PLN Jakarta, school of technology, is more economical than other biomass because waste as raw materials is much cheaper than other commonly used biomass like wooden forestry or agroplantations. Many cities in the world are solving their municipal waste problem by using large scale and high tech approach, which process is conducted in the landfill area. TOSS is using small scale and simple technology that can convert waste to become pellet by local people in its source. The pilot project at Klungkung showed that the pellet of TOSS can be used not only for cooking but also for diesel fuel substitution. This study will use that finding to show that TOSS pellet can also reduce coal consumption by mixing it with normal coal. The simulation is conducted by calculating the equivalent energy and capacity of waste energy from TOSS in term of coal equivalent under the context of Indonesia.
Introduction
Currently, Indonesia becomes oil and gas importer after several decades buoyed by its abundant oil and gas reserve. The only remaining fossil reserve is coal but it will be ended in 72 years, if the annual coal consumption remained as high as 400 million tons. The coal for power plant consumption until the year of 2023 is still more than 50 percent of energy mix. As shown in Figure 1, this number is not so much reduced from the current portion of 58.3%. If there is no special intention to utilize alternative energy, mainly renewable energy, it can be predicted that for the next following decades, more than half of national energy mix is composed by coal.
Figure 1. Indonesia Energy Mix Policy
Many studies assert that biomass can be used as coal substitution but in the real world there are very limited countries that have policy to use biomass as coal substitution or boiler cofiring. The reason is that the cost of biomass is still higher than that of coal. To overcome those challenge, School of Technology Jakarta, STT PLN has conducted research and pilot project to convert fresh waste into pellet by empowering local people and local small enterprise. This model has a copy right logo and name of TOSS. The waste pellet processed by TOSS model is more economical than those of other biomass because waste is much cheaper than other biomass made from wooden forestry or agroplantations. Indeed, many cities in the world have applied various technology to solve their municipal waste problem by using large scale and expensive approach , which process is conducted in the landfill area.
TOSS converts waste into pellet in its source using small scale and simple technology that can be easily proceeded by local people. In less than ten days TOSS could produce pellet as biocoal with approximately 3000 kcal/kg. The pilot project at Klungkung showed that the pellet of TOSS can reduce 80 % of diesel consumption. This study will use that finding to show that TOSS pellet can reduce coal consumption by mixing it with normal coal. The simulation is conducted by calculating the equivalent energy and capacity of waste energy from TOSS in the context of Indonesia compare to that of coal.
The purpose of the study
This study will conduct simulation to calculate the national waste capacity and energy potential in term of coal energy equivalent, by calculating the coal equivalent energy and capacity of waste to become TOSS pellet in the context of Indonesia. The potential waste energy will be estimated using national municipal waste data from the offficial documents. The result can be used to show that TOSS pellet can reduce Indonesia coal consumption by mixing it with normal coal that commonly be used for coal fired power plant. As shown on Figure 2, the annual coal consumption of Coal Power Plant (CPP) in Indonesia is around 100 Million ton. If all CPP mixes its coal with at least one percent biomass , the country may have potential saving of one million ton of coal per year. What should be done by the government is to establish policy that any used of coal for industry must be mixed by certain percent of biomass as have already applied in several countries.
Coal as a largest fossil fuel potential
Coal is the largest available reserve of fossil fuel which was formed from the plants decomposition that has a high energy potential. Therefore, until recently, coal is still highly consumed for both electricity generation and heating purpose [1]. Coal was getting energy from the sun, which was then stored in the dead plant for several hundred million years [2]. The older the coal maturity, the higher the energy content. The highest calorific value of coal is Anthracite (31-36 MJ/Kg), followed by Bituminous (25-34 MJ/Kg), Sub-bitumonous (19-30 MJ/Kg), and the lowest is Lignite (12-19 MJ/Kg). According to US based, Lazard's levelized cost of energy (LCOE) analysis version 12, the lowest LCOE of power plant is Gas Combined Cycle ($41-$74) and coal is the second lowest with LCOE on the range of $60 to $143[3]. However, many countries are still relying on coal as the fuel for power plants as its reserve is abundant. Coal exhibits a 109 year reserve to production ratio, means that from today, coal will be lasted in the next hundred years [4]. Nevertheless, if the coal consumption is remain increasing from year to year, its deposit will be lasted in less than a hundred years. Therefore, there should be a collaborative effort among countries that highly exploite coal to reduce its consumption by mixing it with renewable biocoal, as an option that will be presented in this study.
Waste to energy
The energy potential of waste has become attention of many researchers. One of those is McKendry who examined the potential of a restored landfill site to act as a biomass source. He asserted that like other purpose-grown biomass, waste biomass is also potentially economical for power generation [5]. Waste, particularly municipal waste will create problem to both the people and the environment. Originally, people treat their waste by either burning it into incenerator or take a cheaper way by throwing the waste away to landfill site. But, currently, many big cities in Indonesia including Jakarta, Bandung, Denpasar, Medan and other high populated towns in Indonesia are facing a problem of limited landfill space. As an option, there is new approaches to treat municipal waste by converting trash into energy, namely waste-to-energy (WTE) that may leave mass-burn incineration method. Waste to energy (WTE) is a terminology commonly used to describe the conversion of waste by-products into energy such as steam-generated electricity [6].
The most common model of WTE includes gasification, plasma gasification, and pyrolysis, which are potentially cleaner in emissions and provides more flexible end product in terms of energy output. WTE could eliminate landfilling used because waste can be consumed directly in term of thermal or electricity. Among those approaches, if the main purpose is to produce electricity, so far any combustion-based systems is better. But if the main goal is to strongly reduce waste material that should be sent to landfill, the best way is gasification [7]. However, municipal wastes are by nature very heterogen that can make it hard to be used for power plants without assessing its materials. Assesment is needed to sort which material is sufficient for instance, recycling, composting, reducing, or redesigning. However, WTE technologies are still facing economic challence because this process is challenged by some problems including operational inexperience, high costs, lack of financing, and concerns about toxic emissions.
TOSS : Local waste processing unit
Recently, a new invention of waste treatment that can be carried out by ordinary people in their own communities was declared by the School of Technology in Jakarta, STT PLN. This local waste processing unit has a copy right name , TOSS, which logo was patented as shown in Figure 3. TOSS is originated from Indonesian language, like for instace the Japanese original name for Kaizen in management field or Osaki in the waste treatment field. Although there have already some waste to energy methods including incenerator, digester, gasifier, and pyrolisis, such technologies are still challenged by several problems including high capital expense and the large required processing area, which is usually conducted in the landfil [9], [10]. For example, there are at least two projects, one in Bali and another in Bekasi, that were failed because the process was depending on available methane gas from landfil, while in reality was not sufficient. Even worst, such digester concept could not solve the piled of remaining waste because it took only the gas produced by organic waste, not the solid waste. As the intention to solve municipal waste problems in many big cities, around two years ago, the government of Indonesia declared large scale waste to energy projects to convert waste in the landfill to become electricity in 12 big cities. But until recently, none of those expensive projects have been started, because PLN as offtaker company has not agreed to the price offered by the investors. Similarly, the local governments are also reluctance to the proposed tipping fee.
TOSS, as a simple and people-friendly local waste treatment unit, can actually be used as an alternative to solve that problem [11], [12], [13]. As shown in Figure 1, the TOSS process begins with collecting valuable waste such as bottles, boxes, and cans that have resale value and can be sold for additional income. The remaining waste is then placed into a 2 m x 1 m x 1 m bamboo container and sprayed layer by layer with a specially made bioactivator to reduce the moisture and odor of the waste and to increase its calorific value. Once the bamboo box is full, it is covered with a plastic sheet and kept for about a week while the waste temperature is controlled to keep the bacteria active (it should be above 40 °C but must not exceed 70 °C).
After about 3 days the biological activity has largely ceased, the odor is gone, and the volume is reduced by about half, but the material is kept in the bamboo container for 7 to 10 days before being crushed in a shredding machine. The shredded waste is then screened before being fed into a briquetting/pelletizing machine. The resulting pellets or briquettes are ready to be used as biocoal for heating or electricity generation. The TOSS pellets have been tested several times in the laboratory, and their gross calorific value varies from about 2800 to 3200 kcal/kg.
Figure 4. TOSS process at Pondok Kopi and Duri Kosambi
STT PLN has undertaken pilot projects of TOSS at Pondok Kopi and Duri Kosambi, Jakarta, and since January 2018 TOSS has been implemented in Klungkung District. The project has successfully generated electricity using a 15 kW Yanmar diesel genset and a 30 HP Trilion gasifier fed with TOSS pellets produced by the people of Klungkung village. Unlike the conventional approach, which requires sending waste to the landfill, TOSS treats municipal waste where it is generated and can therefore reduce the number of waste-truck trips to the landfill. In addition, unlike other small-scale waste treatments that require separating organic from non-organic waste, TOSS needs no preliminary sorting and can process the whole mixed waste stream, including plastics and other non-organic materials.
Benefit and cost of TOSS
In their study, Legino and Arianto showed that the unit size of TOSS is at most 10 tons of waste per day because TOSS is dedicated to local communities and must remain affordable for small and medium enterprises [13]. The investment cost of the smallest TOSS unit (3 tons of waste per day) is USD 74,360 and its annual operational cost is USD 14,435; this unit can produce 144 kWh of energy per day. For the largest unit of 10 tons of waste per day, the investment cost is USD 223,156 and the associated annual operational expenditure is USD 47,546, with a potential output of 1,440 kWh of electrical energy [13].
TOSS also provides intangible benefits in the form of social and environmental benefits. The social benefit comes from the opportunity for local people to earn income and improve their well-being in a zero-waste environment. The environmental benefit can be valued in terms of avoided fossil fuel costs and reduced carbon emissions.
Potential Waste Energy calculation
This study is a case study that uses simulation to calculate the amount of energy the TOSS model could produce and thereby reduce the exploitation of coal. The costs and benefits of TOSS are then compared with those of coal. The primary data are taken from the TOSS pilot projects conducted at three locations: Pondok Kopi, Duri Kosambi, and Klungkung. The secondary data are taken from formal documents, including the Electricity Business Plan and statistics books published by government institutions such as the Ministry of Energy and Mineral Resources, PLN (the state-owned electricity enterprise), and the Central Statistics Bureau (BPS).
Data
Selected data for the relevant fuel prices are taken from the National Electricity Business Plan 2018-2027 as follows:
- Sub-bituminous coal (5200 kcal/kg): USD 70 per ton
- Lignite coal (4400 kcal/kg): USD 50 per ton
- Low-rank coal at mine mouth (<3800 kcal/kg): USD 25 per ton
Municipal waste energy potential in Indonesia
The main data used in this study are taken from the statistics book on New and Renewable Energy of the Ministry of Energy and Mineral Resources, including the landfill capacity, its energy potential, and the population of each city. In reality, however, a lot of waste never reaches the landfill, so there is an additional waste potential, and a corresponding energy potential, that can be estimated from the city population, as summarized below [6].
- The national population is 253 million people.
- The total landfill capacity around the country is 8.42 million tons per year.
- The breakdown for each city is given in Table 1.
The capital and operational expenditures of a full TOSS cycle, from raw waste to pellets and on to energy generation, are taken from a previous study based on direct observation of the pilot projects described above [12], [13].
The simulation is based on the following assumptions:
- One ton of mixed municipal waste can be converted into 200-300 kg of pellets.
- The pellet calorific value is 3000 ± 200 kcal/kg.
- For the purpose of this simulation, the pellet production factor is p = 0.25, meaning that one ton of waste produces 250 kg of pellets.
- The pellet calorific value used for the calculation is c = 3000 kcal/kg.
- To calculate the amount of energy, the simulation uses the average value observed during the implementation of TOSS at Klungkung, namely that 1 kg of pellets produces around 1 kWh of electrical energy.
- The electricity generation efficiency (he) is assumed to be 0.6.
- The waste-to-energy potential is then calculated as Ea = p x W x he, where Ea is the annual energy potential in MWh, W is the raw waste volume in tons, and he is the electricity efficiency. A short computational sketch of this estimate is given below.
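As a minimal computational sketch of this estimate (using the simulation parameters listed above; the function and variable names are ours and are only illustrative), the calculation can be written as follows:

# Minimal sketch of the waste-to-energy estimate described above.
# Assumptions taken from the simulation parameters: p = 0.25 ton of pellet per
# ton of raw waste, roughly 1 kWh of electricity per kg of pellet (so ~1 MWh
# per ton of pellet) as observed at Klungkung, and an efficiency he = 0.6.

def annual_energy_potential_mwh(waste_tons_per_year, p=0.25, he=0.6):
    """Return the annual electricity potential Ea in MWh: Ea = p * W * he."""
    return p * waste_tons_per_year * he

if __name__ == "__main__":
    # Example: 1,866,000 tons of landfill waste per year (Jabar, Table 1).
    print(annual_energy_potential_mwh(1_866_000))  # -> 279,900 MWh per year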
Landfill energy potential
As an example, consider the energy potential of the Jabar landfill. If we assume that all the waste in the landfill is converted into electricity, the estimated energy potential is Ea = p x W x he = 0.25 x 1,866,000 x 0.6 = 279,900 MWh per year. The power plant capacity can then be calculated as P = Ea/(H x 360), where P is the electrical capacity in MW, Ea is the annual energy potential in MWh, and H is the hours of operation per day (6 hours for a peaker, 24 hours for base load). If the plant is used for base load (24 hours), the capacity is P = 279,900/(24 x 360) = 32 MW. If the plant is used as a peaker, the potential capacity is P = 279,900/(6 x 360) = 130 MW. The potential capacities of the remaining areas are presented in Table 1.
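The capacity conversion can be sketched in the same way (a sketch only, using the paper's convention of a 360-day operating year):

# Plant-capacity estimate: P = Ea / (H * 360), with H the hours of operation
# per day (6 for a peaker, 24 for base load) and Ea the annual energy in MWh.

def plant_capacity_mw(ea_mwh_per_year, hours_per_day):
    return ea_mwh_per_year / (hours_per_day * 360)

if __name__ == "__main__":
    ea = 279_900  # MWh per year for the Jabar landfill example
    print(round(plant_capacity_mw(ea, 24), 1))  # base load -> ~32.4 MW
    print(round(plant_capacity_mw(ea, 6), 1))   # peaker    -> ~129.6 MW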
Non landfill waste capacity estimation
In reality, not all municipal waste reaches the landfill; some of it is dumped into waterways, streets, and other places. Each person generates roughly 0.5 kg of waste per day, which allows the total waste volume in each province to be estimated; for the purpose of this simulation we assume 0.4 kg of waste per person per day. The non-landfill waste is estimated by subtracting the landfill waste from the estimated total waste in each province. For instance, applying this rate to the Jabar province population of 47.38 million people gives an estimated total waste of 11.845 million tons per year. The non-landfill waste volume is then 11.845 million tons minus 1.866 million tons, or 9.979 million tons of waste per year, corresponding to 2.495 million tons of pellets per year. With he = 0.6, the energy potential is about 1,496 GWh, which could supply a base-load power plant of 173.2 MW or a 6-hour peaker plant of 693 MW. The calculations for all areas are presented in Table 2.
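The non-landfill estimate for one province follows the same steps, as sketched below (the total-waste and landfill figures are the Jabar values quoted above; the other parameters are the simulation assumptions already stated):

# Non-landfill potential for one province, following the steps in the text.
def non_landfill_potential(total_waste_mt, landfill_mt, p=0.25, he=0.6):
    non_landfill_mt = total_waste_mt - landfill_mt        # million tons of waste
    pellet_mt = non_landfill_mt * p                       # million tons of pellet
    energy_gwh = pellet_mt * 1_000_000 * he / 1000        # ~1 MWh per ton of pellet
    base_mw = energy_gwh * 1000 / (24 * 360)
    peaker_mw = energy_gwh * 1000 / (6 * 360)
    return pellet_mt, energy_gwh, base_mw, peaker_mw

if __name__ == "__main__":
    print(non_landfill_potential(11.845, 1.866))
    # -> (~2.49 Mt pellet, ~1,497 GWh, ~173 MW base load, ~693 MW peaker)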
TOSS pellet as coal substitution
The potential saving in coal consumption from substituting waste pellets is calculated using an energy-equivalent amount of lignite as the lowest rank of coal. For this simulation, the lignite calorific value is taken as 3800 kcal/kg. With an average TOSS pellet calorific value of 3000 kcal/kg, each kilogram of coal is equivalent to 3800/3000, or about 1.27 kg of pellets, giving a pellet-to-coal ratio of roughly 1.3. From the power plant company's point of view, TOSS pellets remain economical as long as the price of 1.3 tons of pellets is below the price of 1 ton of coal. For example, if the price of low-rank coal were USD 300 per ton, the break-even pellet price would be about 300 x (3000/3800), i.e. roughly USD 237 per ton. Estimated revenue comes from selling pellets, from the tipping fee, and from selling valuable waste such as bottles and corrugated paper. Revenue from selling pellets is calculated as R = W x p x t, where R is the revenue in USD, W is the waste weight in tons, p is the waste-to-pellet factor, and t is the pellet price in USD per ton. For the simulation we use the 3-ton-per-day package with p = 0.25 and t = USD 30 per ton. The TOSS operator also receives revenue from a tipping fee that is relatively lower than the existing waste management cost in each region. With a tipping fee of USD 7 per ton, the local business still enjoys an IRR of around 17%, higher than the discount rate used (10%). This means that TOSS pellets could be used as a coal substitute and could potentially reduce coal exploitation.
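The substitution economics can be sketched as follows (only the calorific values, the pellet factor p = 0.25, and the pellet price t = USD 30/ton come from the text; the coal price in the example is the illustrative USD 300/ton figure used above):

# Sketch of the coal-substitution economics described above.
PELLET_KCAL = 3000.0
COAL_KCAL = 3800.0
PELLET_PER_COAL = COAL_KCAL / PELLET_KCAL   # ~1.27 kg pellet replaces 1 kg coal

def breakeven_pellet_price(coal_price_per_ton):
    """Highest pellet price at which pellets are still cheaper than coal
    on an energy-equivalent basis."""
    return coal_price_per_ton / PELLET_PER_COAL

def pellet_revenue(waste_tons, p=0.25, pellet_price=30.0):
    """R = W * p * t: revenue in USD from selling the pellets."""
    return waste_tons * p * pellet_price

if __name__ == "__main__":
    print(round(breakeven_pellet_price(300)))   # -> ~237 USD/ton
    print(pellet_revenue(3))                    # 3 tons of waste -> 22.5 USD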
Coal potential saving by using TOSS pellet
If all the landfill waste can be converted into pellets using the TOSS process, the annual coal saving potential can be calculated as follows.
- The annual pellet potential shown in Table 1 is 2,106,245 tons, rounded to 2 million tons.
- The pellet-to-coal ratio is 1.3.
- The annual coal saving potential from landfill waste is therefore 1.54 million tons of coal equivalent.
Similarly, if the non-landfill waste is included (Table 2), the annual pellet production is 13.7 million tons and the potential saving is 10.5 million tons of coal equivalent.
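The coal-saving arithmetic itself is a single division, sketched below with the rounded totals quoted above:

# Annual pellet tonnage divided by the pellet-to-coal ratio gives tons of
# coal equivalent saved.
PELLET_TO_COAL_RATIO = 1.3  # kg of pellet needed to replace 1 kg of coal

def coal_equivalent(pellet_tons):
    return pellet_tons / PELLET_TO_COAL_RATIO

if __name__ == "__main__":
    print(round(coal_equivalent(2.0), 2))    # landfill pellets, Mt -> ~1.54
    print(round(coal_equivalent(13.7), 1))   # incl. non-landfill   -> ~10.5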
Other potential benefit of TOSS
If the government of Indonesia applied the TOSS concept across the country, coal consumption could be reduced by around 10 million tons per year, or roughly 100 million tons over the 2018-2027 plan period. The coal consumption projection based on the national business plan (RUPTL) 2018-2027 shown in Figure 1 could then be reduced as presented in Figure 5. In other words, the national coal reserve would last longer if the government applied TOSS as its waste-to-energy model in every city.
By applying TOSS nationwide as co-firing for any use of coal, the government could also reach the 23% renewable energy (RE) share of the national energy mix faster than the 2023 target. For example, a 5% co-firing policy would be equivalent to building RE plants producing about 5 TWh of renewable electricity annually, or roughly a 700 MW hydropower plant.
Conclusion
Since biocoal pellets can be produced through the TOSS process simultaneously by ordinary people across the country, this study shows that TOSS could potentially save around 10 million tons of coal equivalent per year. In other words, TOSS could save around 100 million tons of coal for power generation over the 10-year period of the national business plan. TOSS is also beneficial for the environment, since it preserves fossil fuel reserves for the next generation and reduces emissions of methane, a potent greenhouse gas, from landfills. In addition, TOSS provides a social benefit by creating opportunities for local people to run small businesses producing pellets from domestic solid waste. Last but not least, TOSS can create clean and pleasant neighbourhoods, since fewer waste trucks will need to travel between communities and the landfill. | v2
2020-08-13T10:05:30.602Z | 2020-08-01T00:00:00.000Z | 221294669 | s2orc/train | Physical Performance Improves With Time and a Functional Knee Brace in Athletes After ACL Reconstruction
Background: Athletes who return to sport (RTS) after anterior cruciate ligament reconstruction (ACLR) often have reduced physical performance and a high reinjury rate. Additionally, it is currently unclear how physical performance measures can change during the RTS transition and with the use of a functional knee brace. Purpose/Hypothesis: The purpose of this study was to examine the effects of time since surgery (at RTS and 3 months after RTS) and of wearing a brace on physical performance in patients who have undergone ACLR. We hypothesized that physical performance measures would improve with time and would not be affected by brace condition. Study Design: Controlled laboratory study. Methods: A total of 28 patients who underwent ACLR (9 males, 19 females) completed physical performance testing both after being released for RTS and 3 months later. Physical performance tests included the modified agility t test (MAT) and vertical jump height, which were completed with and without a knee brace. A repeated-measures analysis of variance determined the effect of time and bracing on performance measures. Results: The impact of the knee brace was different at the 2 time points for the MAT side shuffle (P = .047). Wearing a functional knee brace did not affect any other physical performance measure. MAT times improved for total time (P < .001) and backpedal (P < .001), and vertical jump height increased (P = .002) in the 3 months after RTS. Conclusion: The present study showed that physical performance measures of agility and vertical jump height improved in the first 3 months after RTS. This study also showed that wearing a knee brace did not hinder physical performance. Clinical Relevance: Wearing a functional knee brace does not affect physical performance, and therefore a brace could be worn during the RTS transition without concern. Additionally, physical performance measures may still improve 3 months past traditional RTS, therefore justifying delayed RTS.
As many as 250,000 anterior cruciate ligament (ACL) injuries occur in the United States each year. 14 Most athletes will undergo ACL reconstruction (ACLR) surgery in hopes of restoring knee stability and allowing for return to sports (RTS). 19 However, even after surgical reconstruction and 6 to 12 months of rigorous physical therapy, many athletes with ACLR are unsuccessful when attempting RTS. 2,32 In a recent meta-analysis, Ardern et al 2 reported that although 82% of ACLR patients returned to some level of sport postoperatively, only 63% successfully returned to their preinjury level of sport and 44% returned to competitive sport. These low rates of successful RTS are also found among collegiate and professional athletes, who are expected to have excellent access to physical therapy and both the time and the motivation needed for successful recovery. 5,45 In athletes who do successfully return to their sport after ACLR, marked decreases in performance have been noted. 5,17,45 A more complete understanding of the factors that govern physical performance in athletes recovering from ACLR is necessary to optimize the RTS transition.
The early RTS period is a stressful time for athletes recovering from ACLR, as they transition back to sport after being in a physical therapy setting. Recovering athletes may want to perform well when returning to sport, but they must understand that they have not finished their recovery and that the RTS transition needs to be gradual to prevent further injury. 11 The later stages of physical therapy often focus on recovery of surgical limb strength and power, with the goal of returning to sports and preventing secondary injuries. 4,27,47 The ability to complete sport-specific tasks without deficits should also be a focus to ensure that patients will be able to achieve RTS at an appropriate level of performance. 27 However, little research has been conducted regarding how physical performance can change in the early RTS transition in athletes recovering from ACLR.
Many orthopaedic surgeons prescribe a knee brace for their ACLR patients to wear during activity, 10,24 but low brace compliance remains an issue. Despite moderate evidence suggesting that braces improve movement mechanics and reduce the risk of reinjury in athletes such as skiers, 42 many athletes choose not to wear their brace owing to concerns about its impact on their physical performance. 23,28,35,37 Although the impact of brace wear on sport performance has been a potential concern, previous studies have reported conflicting results when examining whether a functional knee brace improves, 8,33 hinders, 9,48 or has no effect 3,26,44 on physical performance. Most studies did not provide time for participants to get accustomed to the brace, 3,9,26,33,44,48 but the impact of brace wear on performance has been shown to decrease as participants become acclimated to the brace in healthy control populations. 36 Furthermore, healthy individuals 3 and patients with ACL deficiency 8,26,44 have been the subject of previous studies on the effects of bracing on physical performance, which cannot necessarily be generalized to patients who undergo ACLR. A more complete understanding of the effects of brace condition, in addition to time since surgery, on physical performance measures would help surgeons determine the best methods to help athletes achieve RTS.
Physical performance tests are widely used to both assess recovery and retrain athletic ability in patients recovering from ACLR. 4,13,15,27 Unilateral hop tests are the most widely used physical performance tests in athletes recovering from ACLR, but although hop testing has proven to be important for determining readiness for RTS from an injury prevention perspective, 20,34 other measures may better address the question of whether athletes are prepared to return from a performance standpoint. For example, the modified agility t test (MAT) is widely used by athletic trainers and coaches for quantifying agility. The MAT incorporates acceleration, deceleration, change of direction, side shuffling, and backpedaling, which are fundamental movements in many sports. 30,39 Because jumping is another fundamental movement in sports, the countermovement jump (CMJ) test is widely used to quantify an athlete's explosive power. 30 Scores on the MAT are not correlated with CMJ height or 10-m straight sprint times, suggesting that these physical performance measures quantify multiple independent aspects of sports-related movement ability. 39 The MAT and CMJ tests have also been incorporated into clinical RTS test batteries to assess physical performance, suggesting that results on these tests may be important with regard to injury prevention. 11 Despite the importance of these tests, limited research has been conducted on these performance metrics, with the exception of 1 study which found that MAT time improved between 4 and 6 months after ACLR surgery while vertical jump height had a minimal improvement. 40 However, there is a clear need for further testing on the factors that affect performance measures and how they could be incorporated into future RTS evaluations.
The purpose of the present study was to determine whether physical performance would change in athletes recovering from ACLR during the first 3 months after returning to sport participation and while wearing a custom-fit, extension constraint, functional knee brace. We hypothesized that wearing the brace would have no impact on physical performance, as previous literature has found that brace condition does not affect hop distance, 3,26,44 but that physical performance measures would improve over the 3-month period.
METHODS
Patients
A total of 30 participants (9 male, 21 female; age, 19.4 ± 4.2 years; height, 1.73 ± 0.07 m; mass, 72.4 ± 13.5 kg) recovering from primary unilateral ACLR completed institutional review board informed consent documents and were enrolled in the study between May 2016 and May 2017. All participants had been involved in competitive sports before injury, had no previous knee injury or surgery, and did not have any additional ligament injuries. All participants had undergone physical therapy, had been released to RTS by their surgeon, and had completed similar formal rehabilitation protocols designed to prepare them for RTS. Decisions on RTS release and bracing protocols were made by the surgeon and were not collected as part of this study design. In total, 17 participants had injured their dominant limb and 13 had injured their nondominant limb, which was defined as the limb used to kick a soccer ball. Most patients received an autograft (8 hamstring, 21 patellar tendon), except for 1 patient who received an allograft. All participants were given a custom-fit functional knee brace (DonJoy Orthopaedics) with extension resistance in the last 30° and were instructed by their surgeon to wear the brace while doing anything more strenuous than walking. The participants were tested according to the study protocol upon RTS as well as 3 months later (RTS+3).
Procedure
Before the surgical procedure, participants completed the ACL Return to Sport after Injury (ACL-RSI) scale, which quantifies the psychological aspects of recovering from an ACLR and returning to sport, 46 and the Marx activity score, which quantifies physical activity. 21 Participants also completed these scores at RTS and RTS+3. Before the performance testing session, all participants were asked to wear a neutral cushioned running shoe (Air Pegasus; Nike Inc) provided by the laboratory and were given time to become accustomed to the shoe before testing. Tests of the single hop, triple hop, and crossover hop on the surgical and nonsurgical limb were performed to document functional ability in the nonbraced condition at the time of RTS. 34 At each visit, the participant completed the MAT and a maximum vertical jump. These were done with and without a knee brace on the surgical leg, and the order (braced and unbraced) was randomized. The participant was given a 5-minute break between each task to prevent fatigue.
Agility Testing
The MAT was used to quantify agility; this test incorporates straight sprinting, directional changes, lateral movement to both the left and right, as well as backpedaling. Figure 1 shows the agility course setup for the MAT. Participants began with their feet behind a line at cone A. When they were ready, they first sprinted to cone B, shuffled left to cone C, shuffled right to cone D, shuffled left back to cone B, and then backpedaled through the same line they started from at cone A. 39 All participants were instructed to touch the base of each cone with their hand, to not cross their feet while shuffling, and to face forward throughout the entire test. If these conditions were not met, the trial was not scored and was repeated. A timing gate (Brower Timing Systems) was placed immediately in front of the start/finish line, which measured completion time to the nearest hundredth of a second. The t test and the MAT have been previously found to have high between-session reliability. 30,39 The MAT was completed 3 times in both the braced and the nonbraced conditions at each visit. The 3 trials in each condition (braced vs nonbraced) were averaged by condition at each visit.
Maximum Vertical Jump
The vertical jump test was used to quantify power. We began by measuring the participant's standing maximal reach height with his or her dominant limb followed by measuring a maximal vertical jump with the same arm reaching upward, both measured to the nearest tenth of a centimeter (Brower Vertical Jump). 38 Maximum jump height was then taken as the difference between maximal height of the hand while the participant was standing and jumping. This test was completed 3 times in both the braced and the nonbraced conditions and was averaged for both conditions at each visit.
Statistical Analysis
All statistics were completed by use of SPSS (Version 24; SPSS Inc) with a significance level of .05. Repeated-measures analyses of variance were performed to determine the main effects of time (RTS and RTS+3) and brace (braced and unbraced) and the average values for each task (MAT times and vertical jump height). In addition to determining the total MAT time, we also evaluated the times to complete the sprinting, side shuffling, and backpedaling portions of the test. Paired t tests were completed to compare the ACL-RSI and Marx scores between the testing visits. Effect sizes were calculated with eta-squared, which is the proportion of the dependent variable (speed or jump height) that can be attributed to the independent variable (time or brace). Therefore, a larger effect size indicates a stronger relationship between the 2 variables, or the fact that the independent variable (eg, brace) has a large effect on changes in the dependent variable (eg, speed). These effect sizes were considered small, medium, and large if they were above 0.04, 0.25, and 0.64, respectively. 12
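As a minimal sketch of the effect-size step (not the study's SPSS procedure), eta-squared can be computed from the ANOVA sums of squares and classified against the 0.04/0.25/0.64 cut-offs cited above; the SS values in the example are placeholders, not study data:

def partial_eta_squared(ss_effect, ss_error):
    # Proportion of variance in the outcome attributable to the factor.
    return ss_effect / (ss_effect + ss_error)

def classify_effect(eta_sq):
    if eta_sq >= 0.64:
        return "large"
    if eta_sq >= 0.25:
        return "medium"
    if eta_sq >= 0.04:
        return "small"
    return "negligible"

if __name__ == "__main__":
    eta = partial_eta_squared(ss_effect=1.8, ss_error=4.2)  # hypothetical values
    print(round(eta, 2), classify_effect(eta))              # -> 0.3 medium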
RESULTS
Of the 30 initial patients, 2 female participants did not complete the study and were therefore excluded from this analysis. Initial testing was completed at a mean ± SD of 6.95 ± 1.27 months after surgery, when the patients were returned to sport (RTS), and follow-up assessments were completed approximately 3 months (3.46 ± 0.49 months) after the initial visit (RTS+3). The limb symmetry index for participants at the time of RTS, calculated as the distance ratio between the surgical and nonsurgical limb (100% indicates perfect symmetry), was 80.7% ± 12.4% for the single hop, 77.0% ± 13.8% for the triple hop, and 79.5% ± 17.2% for the crossover hop. ACL-RSI scores significantly increased between the 2 visits (RTS, 87.2 ± 20.1; RTS+3, 93.9 ± 16.9; P = .028); however, no significant difference in Marx scores was found between the testing sessions (RTS, 14.3 ± 2.6; RTS+3, 13.8 ± 3.6; P = .867).
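For clarity, the limb symmetry index reported above reduces to a single ratio, sketched below with hypothetical hop distances (not study data):

def limb_symmetry_index(surgical_cm, nonsurgical_cm):
    # 100% indicates perfect symmetry between limbs.
    return 100.0 * surgical_cm / nonsurgical_cm

if __name__ == "__main__":
    print(round(limb_symmetry_index(121.0, 150.0), 1))  # -> 80.7 (%)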
DISCUSSION
The purpose of the present study was to compare the effect of time since surgery (RTS and RTS+3) and of wearing a functional knee brace on physical performance measures in patients recovering from ACLR. We hypothesized that physical performance would improve in the 3 months after RTS independent of knee brace condition and that physical performance would be similar between the braced and nonbraced conditions. Agility and jump height significantly improved between the RTS time point and the RTS+3 assessment, which indicates that physical performance improved after patients were released from physical therapy and returned to sport participation. Although improvements in physical performance measures were seen across this time period, the results of this study did not indicate whether these athletes had any improvement in other aspects of sport performance. Additionally, the results indicate that functional knee braces do not hinder sports performance, based on the measures involved in the present study.
Many surgeons consider range of motion and knee stability when determining whether to allow a patient to RTS, 32 but other surgeons have suggested that RTS decisions should also consider physical performance measures. 13,15,27,29 The MAT and CMJ have been added to RTS test batteries to further quantify physical performance and provide additional measures for determining when athletes are ready for RTS. 27,40 The present study used the MAT and CMJ to quantify agility and power, respectively. This examination of physical performance found that patient agility improved in 24 of 28 participants and jump height improved in 20 of 28 participants with time since surgery. This suggests that ACLR patients were more physically prepared for the athletic demands of their sport 3 months after RTS. A previous study found that total MAT time moderately improved with time and rehabilitation, but unlike the results in the current study, those authors found that there was minimal improvement in jump height. 40 This difference could be because the previous study tested participants at 4 and 6 months after surgery, 40 whereas the current study tested participants at approximately 6 and 9 months after surgery, suggesting that the measure of jump height may improve in later stages of recovery. Although further research would allow for more thorough conclusions, the results of the present study suggest that delaying RTS, focusing on sports performance tasks during rehabilitation, and transitioning to full activities gradually may make the RTS transition more successful.
The RTS transition should be gradual, and athletes should work to improve performance-based tasks to optimize their chances of successful RTS. Physical therapy and rehabilitation protocols focus on muscle strength, 4,47 improving speed in athletic movements, movement symmetry, power production, and endurance, 27,40 but little is known about what happens to these measures after patients are released from physical therapy and return to sport participation. Unfortunately, due to a variety of factors, athletes are often cleared for RTS even though their performance levels are still being recovered and they may still have some functional deficits. 22 High expectations despite functional deficits can cause psychological stress for these athletes, which has been shown to be present throughout the rehabilitation process and the RTS transition. 1,18,25,46 The current study evaluated psychological changes through ACL-RSI scores and found that they improved between the first and second testing sessions. This agrees with the findings from another study that ACL-RSI scores linearly increase after ACLR. 18 These results indicate that athletes may feel more confident about their knee and returning to sport at their second visit, which may also be related to their improved physical performance.
In addition, the present study focused on the effect of brace condition on physical performance measures throughout the RTS transition and found that physical performance was not affected by wearing a brace. Many orthopaedic surgeons prescribe a functional knee brace for ACLR patients to wear during activity. 10,24 The findings from the present study agree with other studies which have found that brace condition does not affect physical performance tests, quadriceps or hamstring strength, knee function, or static knee stability. 16,23,35 There is evidence that brace wear may improve movement mechanics during walking and running by increasing knee flexion 8,41 and during cutting by increasing knee flexion velocity at initial contact. 8,9 Additional research has shown that braces improve jumping mechanics such as bilateral landing symmetry 6 and vertical jump height. 33 One study found that hop distance symmetry improved with time and a braced condition, which suggests that both time and bracing may not only improve physical performance but also help prevent additional injury. 31 Finally, previous studies have shown that athletes may be more confident wearing a brace on their surgical limb during physical activity. 7,43 These findings combined with the results from the present study suggest that physical performance measures are either unaffected or improved when athletes wear a brace. Future work should also focus on the effects of braces outside of the laboratory setting during sport as well as determine whether there are responders and nonresponders to brace wear in order to target bracing interventions.
Some limitations were associated with the present study. One potential limitation is the fact that no information about RTS criteria, physical therapy activities, or specific rehabilitation protocols was collected from the participants. Additionally, the number of male and female participants was not evenly split, and we did not account for the potential influence of graft type. Controlling RTS, rehabilitation, sex, and graft type could allow for a more homogeneous participant population. Another potential limitation is that participants were asked to wear the brace during any activity more strenuous than walking, but this was not monitored or controlled. Furthermore, the study did not include a control group that was not provided a brace to wear between testing sessions. Such a control condition could have allowed for conclusions about whether improvements over time were strictly due to time since surgery. One final limitation is the fact that based on the enrollment date, there was an overlap in time since surgery for the testing sessions (some participants underwent their second session before other participants had their first session). However, this is unlikely to have affected the results of this study.
CONCLUSION
Agility and vertical jump height in patients who had undergone ACLR improved in the 3 months after RTS, independent of brace condition. The mechanism for this improvement and the risk of second ACL injury are not fully understood and should be investigated further. Additionally, even though the long-term effects of brace wear on movement mechanics are unclear, the results from this study indicate that braces can be worn without a major impact on physical performance. | v2 |
2017-10-01T08:24:51.033Z | 2006-06-16T00:00:00.000Z | 9617199 | s2orc/train | Intracellular Actions of Group IIA Secreted Phospholipase A2 and Group IVA Cytosolic Phospholipase A2 Contribute to Arachidonic Acid Release and Prostaglandin Production in Rat Gastric Mucosal Cells and Transfected Human Embryonic Kidney Cells*
Gastric epithelial cells liberate prostaglandin E2 in response to cytokines as part of the process of healing of gastric lesions. Treatment of the rat gastric epithelial cell line RGM1 with transforming growth factor-α and interleukin-1β leads to synergistic release of arachidonate and production of prostaglandin E2. Results with highly specific and potent phospholipase A2 inhibitors and with small interfering RNA show that cytosolic phospholipase A2-α and group IIA secreted phospholipase A2 contribute to arachidonate release from cytokine-stimulated RGM1 cells. In the late phase of arachidonate release, group IIA secreted phospholipase A2 is induced (detected at the mRNA and protein levels), and the action of cytosolic phospholipase A2-α is required for this induction. Results with RGM1 cells and group IIA secreted phospholipase A2-transfected HEK293 cells show that the group IIA phospholipase acts prior to externalization from the cells. RGM1 cells also express group XIIA secreted phospholipase A2, but this enzyme is not regulated by cytokines nor does it contribute to arachidonate release. The other eight secreted phospholipases A2 were not detected in RGM1 cells at the mRNA level. These results clearly show that cytosolic and group IIA secreted phospholipases A2 work together to liberate arachidonate from RGM1 cell phospholipids in response to cytokines.
There is current interest in phospholipases A2 (PLA2) because of their involvement in the liberation of arachidonic acid from membrane phospholipids for the biosynthesis of the eicosanoids (prostaglandins, leukotrienes, and others). There is general consensus that cytosolic PLA2 (cPLA2-α, also known as group IVA PLA2) plays a critical role in arachidonic acid release in mammalian cells (1,2). For example, studies in mice have shown that disruption of the gene coding for cPLA2-α eliminates or greatly reduces arachidonic acid release in a number of cells, including agonist-stimulated macrophages and neutrophils (3)(4)(5)(6).
The mouse and human genomes also contain genes encoding 10 distinct secreted PLA2s (sPLA2s) (7,8), suggesting multiple physiological functions for sPLA2s. The role of these enzymes in promoting arachidonic acid release in mammalian cells is much less clear than for cPLA2-α and is under active investigation. In mouse blood platelets, disruption of the cPLA2-α gene leads to a significant reduction in the amount of thromboxane release when collagen is the agonist but not when cells are triggered with ATP, suggesting that another PLA2 could be involved in arachidonate release (9). In mouse peritoneal macrophages, it has been shown recently that disruption of the group V sPLA2 gene leads to an ~50% reduction in zymosan-stimulated leukotriene C4 and prostaglandin E2 (PGE2) production (10), yet in the same cell/agonist system disruption of the cPLA2-α gene nearly completely abrogates eicosanoid generation (5). This demonstrates that there is coordinate action between sPLA2 and cPLA2-α in mammalian cells. This has also been observed in mouse mesangial cells and human embryonic kidney cells (HEK293) transfected with various sPLA2s (11,12) and in human neutrophils treated with exogenously added group V sPLA2 (13). The molecular basis for this sPLA2-cPLA2-α coordinate action is unknown.
In our recent studies of arachidonate release in transfected HEK293 cells, we found that forcible overexpression of group IIA sPLA2 led to arachidonate release, that the sPLA2 was acting prior to externalization from the cell, and that highly specific cPLA2-α inhibitors significantly reduced arachidonate release, showing that this latter enzyme was also involved (12). Key observations were that exogenously added human group IIA sPLA2 was inefficient at liberating arachidonate and that the cell-impermeable and potent group IIA sPLA2 inhibitor Me-indoxam (Fig. 1) was not able to reduce arachidonate release. The results of this recent study (12) are inconsistent with the earlier model proposed for the action of group IIA sPLA2 in transfected HEK293 cells in which the enzyme is first secreted into the extracellular medium where it binds to cell surface proteoglycan and finally is internalized into caveolae-like compartments where it acts on membrane phospholipid to liberate arachidonate (14). We also showed that Me-indoxam fails to block arachidonate release in zymosan-stimulated mouse peritoneal macrophages and in agonist-stimulated P388D1 macrophage-like cells (12) despite convincing evidence that group V sPLA2, a Me-indoxam-sensitive enzyme, augments arachidonate release in mouse macrophages (10). This again suggests that the sPLA2 acts prior to release from these cells (12).
Based on our findings with Me-indoxam described above, we were intrigued by the report of Akiba et al. (15) that the sPLA2 inhibitor indoxam, which is structurally similar to Me-indoxam (Fig. 1), blocks PGE2 generation in a rat gastric mucosa cell line (RGM1) stimulated with TGF-α and IL-1β. As pointed out by Akiba et al. (15), RGM1 cells produce PGE2 via cyclooxygenase-2 when stimulated by TGF-α. The latter is known to induce the proliferation of gastric epithelial cells as part of the gastric lesion wound healing process (16). Also, PGE2 generated in gastric cells, including epithelial cells and fibroblasts, promotes healing of gastric mucosal lesions and the maintenance of gastric mucosal integrity as shown by several studies (for example see Ref. 17). More recent studies by Akiba et al. (15) showed that IL-1β synergizes with TGF-α to elicit PGE2 production from RGM1 cells and that this cytokine may also be involved in healing of gastric lesions through generation of PGE2.
We now report additional studies on the production of arachidonate and PGE2 from RGM1 cells, with particular focus on the use of cell-permeable and cell-impermeable sPLA2 inhibitors. In addition, we studied the coordinate action between sPLA2 and cPLA2-α in these cells. Finally, we have carried out additional studies with group IIA sPLA2-transfected HEK293 cells that provide further evidence that the sPLA2 acts prior to release from the cells. Similarities were found for the coordinate action of group IIA sPLA2 and cPLA2-α in transfected HEK293 cells and in nontransfected RGM1 cells.
Materials-[3H]Arachidonic acid is from PerkinElmer Life Sciences.
Fatty acid-free bovine serum albumin is from Sigma (catalog number A6003). Recombinant human IL-1β and TGF-α are from R & D Systems (catalog numbers 201-LB-005 and 239-A-100). IL-1β and TGF-α stock solutions were made according to the manufacturer's recommendation. Sterile phosphate-buffered saline containing 0.1% bovine serum albumin was used to prepare a stock solution of 10 ng/µl of IL-1β, and 10 mM acetic acid containing 0.1% bovine serum albumin was used to prepare a stock solution of 50 ng/µl of TGF-α. Recombinant rat group IIA sPLA2 was obtained as a gift from M. Janssen (University of Utrecht, The Netherlands) (18). RGM1 cells were obtained from the Riken Cell Bank (cell number RCB0876) with permission from Dr. Hirofumi Matsui (University of Tsukuba). The preparation of anti-mouse group IIA sPLA2 antiserum will be described elsewhere. Wyeth-1 was prepared as described (19), and its structure and purity were confirmed by reverse-phase high pressure liquid chromatography, 1H NMR, and electrospray ionization mass spectrometry (not shown). Pyrrolidine-2 (also known as pyrrophenone), Me-indoxam, and indoxam were prepared as described (12,20,21) and characterized as for Wyeth-1.
Cell Culture-RGM1 cells were maintained in DMEM/F-12 medium (catalog number 11320-033, Invitrogen) containing 20% heat-inactivated fetal bovine serum, 100 units/ml penicillin, and 100 µg/ml streptomycin (22) in plastic dishes in a humidified atmosphere of 5% CO2 at 37 °C. Cells were routinely split with trypsin/EDTA.
Arachidonate Release and PGE2 Production-Cells were plated at 2 × 10^5 cells/well in 24-well plates in DMEM/F-12 containing 20% FBS and were allowed to adhere for 5-7 h. The medium was replaced with serum-free DMEM/F-12 containing 0.01% fatty acid-free bovine serum albumin and 0.1 µCi/ml [3H]arachidonic acid and incubated for 20-24 h. The labeled cells were covered with 1 ml of DMEM/F-12 containing 0.01% fatty acid-free bovine serum albumin. After about 10 min, the medium was removed. For time course experiments, the labeled cells were covered with 0.5 or 1 ml of DMEM/F-12 medium containing 5 ng/ml IL-1β, 100 ng/ml TGF-α, or their combination for the desired time period (0, 12, and 24 h). For the experiments with cPLA2-α and sPLA2 inhibitors, recombinant rat group IIA sPLA2, or heparin, the labeled cells were covered with 0.5 or 1 ml of DMEM/F-12 medium containing the combination of 5 ng/ml IL-1β and 100 ng/ml TGF-α in the absence or presence of various concentrations of cPLA2-α and sPLA2 inhibitors or heparin for 24 h. In studies with exogenously added recombinant rat group IIA sPLA2, cytokines were omitted in some studies. As noted under "Results," cPLA2-α and sPLA2 inhibitors were added either at the time of cytokine addition or 12 h after the addition of cytokines. In both cases, arachidonate release was measured 24 h after the addition of cytokines.
The medium was removed and centrifuged for 7 min at 3,000 rpm to pellet any dislodged cells. A 0.25- or 0.5-ml aliquot of the supernatant was submitted to scintillation counting. Trypsin/EDTA (0.5 ml) was added to the cells in the well, and the plate was placed in the incubator for about 30 min. The cells were resuspended by pumping the solution up and down several times with a pipette, and all of the liquid was submitted to scintillation counting to give the cell-associated counts/min. The percentage of [3H]arachidonate release to the medium was calculated as 100 × (dpm in medium)/(dpm in medium + cell-associated dpm) (12).
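As a minimal sketch of this calculation (the counts shown are placeholders, not experimental data):

def percent_release(dpm_medium, dpm_cells):
    # 100 x (dpm in medium) / (dpm in medium + cell-associated dpm)
    return 100.0 * dpm_medium / (dpm_medium + dpm_cells)

if __name__ == "__main__":
    print(round(percent_release(dpm_medium=2500, dpm_cells=47500), 1))  # -> 5.0 %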
For PGE2 production measurements, cells were plated at 1 × 10^6 cells/well in 6-well plates in DMEM/F-12 containing 20% FBS and allowed to adhere for 5-7 h. The medium was replaced with serum-free DMEM/F-12 containing 0.01% fatty acid-free bovine serum albumin and 1 µCi/ml [3H]arachidonic acid and incubated for 20-24 h. The labeled cells were washed two times with DMEM/F-12 containing 0.01% fatty acid-free bovine serum albumin and a further time with DMEM/F-12. The labeled cells were covered with 1 ml of DMEM/F-12 medium containing 5 ng/ml IL-1β and 100 ng/ml TGF-α in the absence or presence of various cPLA2-α or sPLA2 inhibitors for 24 h. For PGE2 analysis, 1 ml of culture medium was centrifuged to pellet any dislodged cells, and 0.8 ml of supernatant was mixed with 0.2 ml of 2 mM aqueous EDTA (adjusted to pH 3.0 with HCl). Lipids were extracted twice with 2 ml of ethyl acetate, and the ethyl acetate was evaporated under a nitrogen stream. Samples were spotted onto an aluminum-backed silica gel plate (20 × 20 cm) with small amounts of ethyl acetate and separated by thin layer chromatography using the upper phase of ethyl acetate/isooctane/acetic acid/H2O (9:5:2:10, v/v) (15,28). Authentic PGE2 was added to each sample before spotting, and the plate was placed in a glass tank containing I2 to visualize the PGE2-containing plate region. The appropriate regions of the plate were removed with scissors, and the slices were added to scintillation fluid in a vial for liquid scintillation counting.
Quantification of sPLA2 Activity-Cells were plated at 1 × 10^6 cells/well in 6-well plates in DMEM/F-12 containing 20% FBS and allowed to adhere for 5-7 h. The medium was replaced with serum-free DMEM/F-12 containing 0.01% fatty acid-free bovine serum albumin, and cells were incubated for 20-24 h. For time course experiments, the cells were covered with 1 ml of DMEM/F-12 medium containing 5 ng/ml IL-1β and 100 ng/ml TGF-α for 0, 12, or 24 h. For 24-h inhibition experiments, the cells were covered with 1 ml of serum-free DMEM/F-12 containing 5 ng/ml of IL-1β and 100 ng/ml of TGF-α in the presence or absence of various concentrations of sPLA2 or cPLA2-α inhibitors for 24 h. The sPLA2 activity was measured using the fluorometric assay with 50 µl of culture medium as described previously (12). The assay was calibrated with a standard amount of recombinant rat group IIA sPLA2 (23). In some studies we also measured the sPLA2 activity in cells after lysis with the same lysis buffer used for Western blotting (see below).
Immunoprecipitation and Western Blotting-Protein A-Sepharose slurry (50 µl; catalog number 17-0780-01, Amersham Biosciences) was incubated with 10 µl of antiserum against mouse group IIA sPLA2 or preimmune serum in phosphate-buffered saline for 2 h with gentle shaking on ice. The gel was pelleted and washed six times with phosphate-buffered saline. Culture medium (100 µl) was incubated with 5 µl of washed protein A-Sepharose (containing either preimmune serum, immune serum, or nothing), and the solution was gently shaken for 4 h on ice. The sample was centrifuged, and 50 µl of supernatant was subjected to the fluorometric sPLA2 assay.
For detection of cPLA2-α by Western blotting, RGM1 cells were treated with cytokines as above and then washed three times with ice-cold phosphate-buffered saline and scraped into ice-cold buffer (50 mM Hepes, 150 mM NaCl, 1 mM EGTA, 1 mM EDTA, 10% glycerol, 1% Triton X-100, 1 mM phenylmethylsulfonyl fluoride, 10 µg/ml aprotinin, 10 µg/ml leupeptin). The lysate was centrifuged at 16,000 rpm at 4 °C for 15 min, and the supernatant was used for Western blot analysis. Each lane of the 10% SDS-polyacrylamide gel was loaded with 40 µg of cell protein (Bio-Rad Bradford assay). Proteins in the gel were electrotransferred to a nitrocellulose membrane, and the membrane was blocked overnight at 4 °C with 5% (w/v) milk protein in buffer (per liter, 3 g of Tris, 8 g of NaCl, 0.2 g of KCl, 1 ml of Tween 20, pH 7.4). The membrane was probed with anti-cPLA2-α polyclonal IgG (1:5,000 dilution; catalog number Sc-438, Santa Cruz Biotechnology), and detection was carried out with ECL (Amersham Biosciences).
RNA Purification and Quantitative PCR-Cells were plated at 1 × 10^6 cells/well in 6-well plates and were stimulated with TGF-α and IL-1β in the absence or presence of various concentrations of cPLA2-α or sPLA2 inhibitors for 24 h (see above for stimulation and inhibition procedures). Total RNA was extracted using an SV total RNA isolation kit (catalog number Z3100, Promega) according to the manufacturer's instructions. Total RNA (1 µg, based on the absorbance at 260 nm) was reverse-transcribed with the first-strand cDNA synthesis system for quantitative RT-PCR kit (catalog number 11801-025, Marligen Biosciences) following the manufacturer's protocol. Real time PCR was carried out on a DNA Engine Opticon 2 real time PCR detection system (MJ Research, Inc.) using a DyNAmo SYBR Green qPCR kit (catalog number F-400L, Finnzymes) following the manufacturer's instructions.
PCR primers are as follows (numbers in parentheses are the expected length in base pairs for the PCR product): group IB (139) For each target gene, different primer concentrations and annealing temperatures were tested to optimize PCR amplification and to reduce the possible formation of primer-dimers. In all cases the optimal primer concentration was 200 nM. Thermal cycling parameters were as follows: 10 min at 95°C for initial denaturation followed by 36 cycles (94°C for 10 s, 64°C for 20 s, 72°C for 10 s, temperature ramp to 79°C over 1 s followed by fluorescence data collection at 79°C). After the last cycle, the sample was heated at 72°C for 7 min for final extension and then submitted to a melting curve analysis to check for the formation of primer-dimers and nonspecific products. The temperature of 79°C for data collection is below the T m of products and above the T m of possible primer-dimers; this helps to eliminate detection of primer-dimers. Each measurement was run in duplicate. PCR efficiencies for target and reference genes (GAPDH) were determined by generating a standard curve using serial dilutions of cDNA. Real time PCR data were analyzed using a relative quantification method as described previously (24). Briefly, the relative expression ratio of a target gene is calculated based on PCR efficiency and threshold cycle deviation of an unknown sample versus a calibrator (12 h of cells without stimulation), which was normalized to the reference gene (GAPDH). After quantitative PCR analysis, a portion of the amplification products was separated by electrophoresis on a 3% ultra-agarose gel (catalog number 15510-019, Invitrogen).
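The efficiency-corrected relative quantification described above can be sketched as a Pfaffl-type ratio (a sketch only; the efficiencies and Ct values below are hypothetical and are not data from this study):

def relative_expression(e_target, e_ref,
                        ct_target_calibrator, ct_target_sample,
                        ct_ref_calibrator, ct_ref_sample):
    """ratio = E_target^(dCt_target) / E_ref^(dCt_ref),
    with dCt = Ct(calibrator) - Ct(sample); E is the amplification factor
    per cycle obtained from the standard curve (between 1 and 2)."""
    num = e_target ** (ct_target_calibrator - ct_target_sample)
    den = e_ref ** (ct_ref_calibrator - ct_ref_sample)
    return num / den

if __name__ == "__main__":
    # Hypothetical example: target gene induced by ~5 cycles relative to the
    # calibrator, GAPDH essentially unchanged, assuming E = 1.95 for both.
    print(round(relative_expression(1.95, 1.95, 30.0, 25.0, 18.0, 18.1), 1))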
Studies with HEK293 Cells-HEK293 cells stably transfected with human group IIA sPLA2 were cultured and labeled with [3H]arachidonic acid as described previously (12). After labeling the cells with 0.1 µCi of [3H]arachidonic acid per well for 18-24 h, the medium was removed, and the cells were covered with complete medium containing 15 µg/ml brefeldin A or Me2SO vehicle. The cells were incubated at 37 °C with 5% CO2 for 1 h, and then the medium was removed, and the cells were covered with complete medium containing 1 mg/ml heparin (catalog number H3149, Sigma) with brefeldin A or with vehicle. After incubation as above for 30 min, the medium was removed, and cells were washed twice with complete medium without heparin and with brefeldin A or vehicle and then covered with complete medium with brefeldin A or vehicle. Finally, the cells were covered with complete medium containing 2 ng/ml IL-1β (added from a 0.1 ng/µl stock in phosphate-buffered saline, 0.2% bovine serum albumin (catalog number A6003, Sigma)) and with brefeldin A or vehicle. After incubation as above for 4 h, the medium was removed and centrifuged at 3,000 rpm in a tabletop centrifuge for 7 min. A 500-µl aliquot of the supernatant was submitted to scintillation counting. Trypsin/EDTA (0.5 ml) was added to the remaining cell layer, the cells were incubated 30 min as above, and the cell suspension was submitted to scintillation counting. The percent [3H]arachidonate release is obtained as 100 times the total tritium in the culture medium divided by the total tritium (culture medium + cell-associated). In some studies, cells were washed with high salt-containing medium to remove extracellular sPLA2 prior to brefeldin A treatment. In this case, the above procedure was used except that heparin was replaced with 1 M NaCl.
To measure secretion of human group IIA sPLA2, cells were cultured, pretreated with brefeldin A or vehicle, and pretreated with heparin (or 1 M NaCl) in complete medium as for the arachidonate release studies. After removing the heparin-containing medium, cells were covered with complete medium containing 2 ng/ml IL-1β and with 15 µg/ml brefeldin A or vehicle. After incubation for 4 h, a 200-µl aliquot of the culture medium was withdrawn and centrifuged as above, and the supernatant was submitted to the fluorometric sPLA2 assay (12). The remaining medium was removed, and the cells were covered with complete medium containing 1 mg/ml heparin (or 1 M NaCl) with brefeldin A or with vehicle. After incubation for 30 min, a 200-µl aliquot of medium was removed, centrifuged, and submitted to the fluorometric sPLA2 assay.
RESULTS
Studies with RGM1 Cells-Sato and co-workers (15) have shown previously that stimulation of RGM1 cells with IL-1β and TGF-α leads to a synergistic generation of PGE2. Fig. 2 shows [3H]arachidonate release from labeled RGM1 cells; fatty acid release with IL-1β alone was similar to that seen without cytokine stimulation (Fig. 2). The data show that TGF-α and IL-1β act synergistically to promote arachidonate release.
As shown in Fig. 3A, cytokine stimulation of RGM1 cells leads to a marked increase in the amount of sPLA2 enzymatic activity measured in the culture medium. The increase occurs mainly during the 12-24-h period. sPLA2 enzymatic activity was detected with a real-time fluorometric assay using a pyrene-labeled phosphatidylglycerol analog. We also measured the amount of sPLA2 activity in the washed cells 24 h after cytokine stimulation and found that 25% of the total sPLA2 activity is in the cell lysate, whereas 75% is in the culture medium. No activity was measured in the presence of 1 mM EGTA (not shown), consistent with a Ca2+-dependent sPLA2 being responsible for the hydrolysis of the fluorometric phospholipid. Phosphatidylglycerol is the most preferred substrate for all mammalian sPLA2s (25). If it is assumed that all of the released sPLA2 is the group IIA enzyme (see below), it can be deduced that 10^6 RGM1 cells release 3.8 ng of sPLA2 into the culture medium after stimulation with cytokines for 24 h (based on the measured specific activity of recombinant rat group IIA sPLA2 of 42.3 pmol/(min × ng)).
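The back-calculation of secreted enzyme mass from measured activity can be sketched as follows (only the 42.3 pmol/(min × ng) specific activity comes from the text; the activity value in the example is illustrative):

SPECIFIC_ACTIVITY = 42.3  # pmol of substrate hydrolyzed per min per ng of enzyme

def spla2_ng(total_activity_pmol_per_min):
    # Mass of group IIA sPLA2 implied by the measured activity in the medium.
    return total_activity_pmol_per_min / SPECIFIC_ACTIVITY

if __name__ == "__main__":
    # ~160 pmol/min in the medium of 10^6 cells would correspond to ~3.8 ng.
    print(round(spla2_ng(160.0), 1))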
As shown in Fig. 3B, after 24 h of stimulation, TGF-α alone led to less than a 2-fold increase in sPLA2 activity in the culture medium; IL-1β alone led to a 4-fold increase in sPLA2 activity, and both cytokines together led to a 29-fold increase in sPLA2 activity. Thus, as for arachidonate release, there is a synergistic action between TGF-α and IL-1β in promotion of sPLA2 enzymatic activity secreted into the culture medium. Fig. 4 shows that the lipolytic enzymatic activity released into the culture medium from cytokine-stimulated RGM1 cells is highly sensitive to Me-indoxam and indoxam. Both compounds display virtually identical dose-response curves with a concentration required for 50% inhibition (IC50) of 3 nM. Previous studies have shown that Me-indoxam is a potent inhibitor of human and mouse groups IIA, IIC, IIE, and V sPLA2s with IC50 values in the 5-60 nM range (25). Thus, the results in Fig. 4 are consistent with the enzymatic activity detected with the fluorometric phospholipid substrate arising from an sPLA2.
The results in Fig. 5 show that essentially all of the sPLA2 enzymatic activity secreted from cytokine-stimulated RGM1 cells is due to the group IIA enzyme. In this experiment, the culture medium from cytokine-stimulated RGM1 cells was treated with anti-mouse group IIA sPLA2 antiserum bound to protein A-Sepharose. Virtually all of the sPLA2 enzymatic activity was removed, whereas half of the activity remained in the medium when protein A-Sepharose alone was used or when protein A-Sepharose loaded with preimmune serum was used. Previous studies have shown that the anti-mouse group IIA sPLA2 antiserum detects this protein at the 0.1-1 ng level in Western blots but does not give a signal for 50 ng of each of the other mouse sPLA2s. All together, the results clearly show that virtually all of the sPLA2 activity released during the 12-24-h period post-stimulation of RGM1 cells with cytokines is due to the rat group IIA sPLA2. This is also supported by quantitative PCR studies described below.
In order to explore the contribution of group IIA sPLA2 and cPLA2-α to arachidonate release and PGE2 production in cytokine-stimulated RGM1 cells, studies with highly selective and potent inhibitors were carried out. As shown in Fig. 6A, 1, 10, and 20 µM indoxam dose-dependently inhibited arachidonate release; 20 µM caused 75% inhibition of the cytokine-dependent arachidonate released. Remarkably, 1, 10, and 20 µM Me-indoxam had almost no effect (18% inhibition of cytokine-dependent arachidonate release with 20 µM Me-indoxam). Both cPLA2-α inhibitors caused significant dose-dependent inhibition of arachidonate release; pyrrolidine-2 inhibited cytokine-dependent arachidonate release by 68% at 2 µM and Wyeth-1 by 100% at 6 µM. When 10 µM indoxam was added 12 h after cytokine addition, cytokine-dependent arachidonate release was reduced by 16 ± 1% (three independent experiments; data not shown) compared with 43% inhibition seen when the inhibitor was added along with cytokines 24 h before arachidonate release was measured (Fig. 6A).
To address whether Me-indoxam was being destroyed by cultured RGM1 cells, we added 1 µM Me-indoxam to the culture medium of RGM1 cells along with cytokines. After 24 h, a 50-µl aliquot of culture medium was added to the fluorometric sPLA 2 assay. The amount of enzymatic activity was 7% of that measured in an aliquot of medium from cells that were not treated with Me-indoxam (not shown). In this experiment, the concentration of Me-indoxam in the fluorometric sPLA 2 assay would be 50 nM if it was not destroyed upon incubation with cytokine-stimulated RGM1 cells. This experiment shows that most, if not all, of the Me-indoxam remains after a 24-h incubation with RGM1 cells. The same was found with indoxam (not shown).
We also measured the permeability of Me-indoxam and indoxam across tight monolayers of Caco-2 cells as a way to estimate the ability of these compounds to cross the plasma membrane of mammalian cells. As found in our earlier study (12), Me-indoxam displayed a very low permeability across these cells of P app = 0.1 × 10 −6 cm/s. In contrast, indoxam displays increased permeability compared with Me-indoxam, with a P app value of 0.3 × 10 −6 cm/s. Arachidonate release induced by TGF-α alone was also inhibited by pyrrolidine-2 (65% inhibition at 2 µM), by Wyeth-1 (100% inhibition at 6 µM), and by indoxam (78% inhibition at 20 µM) (not shown).
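The P app values quoted above follow the standard relation P app = (dQ/dt)/(A × C 0 ), where dQ/dt is the rate of compound appearance in the receiver chamber, A the monolayer area, and C 0 the initial donor concentration. A minimal sketch of this calculation is given below; all numerical inputs are assumed, illustrative values.

```python
# Sketch: apparent permeability (Papp) across a Caco-2 monolayer,
# Papp = (dQ/dt) / (A * C0). All numerical inputs below are assumed;
# only the Papp magnitudes (~0.1-0.3 x 10^-6 cm/s) come from the text.
def papp_cm_per_s(dq_dt_nmol_per_s, area_cm2, c0_nmol_per_ml):
    """dQ/dt: receiver appearance rate; A: filter area; C0: initial donor conc."""
    return dq_dt_nmol_per_s / (area_cm2 * c0_nmol_per_ml)  # cm/s (1 ml == 1 cm^3)

area = 1.12                   # cm^2, a common Transwell insert size (assumed)
c0 = 10.0                     # nmol/ml donor concentration (assumed)
dq_dt = 0.3e-6 * area * c0    # receiver flux that would correspond to 0.3e-6 cm/s
print(f"Papp = {papp_cm_per_s(dq_dt, area, c0):.2e} cm/s")
```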
Consistent with the reduction in arachidonate release caused by addition of cPLA 2 -α inhibitors, Western blot analysis showed the presence of cPLA 2 -α in RGM1 cells (Fig. 7). In the absence of cytokine stimulation, most of the cPLA 2 -α migrated as the faster, nonphosphorylated form. After 12 or 24 h of culture in the absence of cytokines or in the presence of IL-1 alone, cPLA 2 -α remained mostly in the nonphosphorylated form. In contrast, after 12 or 24 h of stimulation with TGF-α with or without IL-1, cPLA 2 -α mostly shifted to the phosphorylated form (Fig. 7).
FIGURE 5. ...from the medium of RGM1 cells stimulated with TGF-α and IL-1 for 24 h. Culture medium was treated with nothing, with protein A-Sepharose (Prot A-Seph) alone, with protein A-Sepharose loaded with pre-immune serum, or with protein A-Sepharose loaded with anti-mouse group IIA sPLA 2 immune serum as described under "Experimental Procedures." The sPLA 2 enzymatic activity in the supernatant above the Sepharose pellet was assayed using the fluorometric assay.
FIGURE 6. A, effect of PLA 2 inhibitors on arachidonate release from cytokine-stimulated RGM1 cells. PLA 2 inhibitors were added at the indicated concentrations at the time of addition of TGF-α and IL-1, and arachidonate release to the medium, expressed as a percent of total cellular arachidonate, was measured after 24 h (additional details are given under "Experimental Procedures"). The average and standard deviations from four independent experiments are shown. The vehicle used to add inhibitors was Me 2 SO in all cases, and vehicle alone had no effect on arachidonate release (not shown). B, same as for Fig. 6A except PGE 2 release from 1 × 10 6 RGM1 cells was measured. Pyrr-2, pyrrolidine-2.
Group IIA sPLA 2 s are highly basic proteins and are known to bind tightly to anionic polymers, including heparin (26). Addition of extracellular heparin to the culture medium of cells has been used to reduce the amount of extracellular group IIA sPLA 2 that is bound to cell surface proteoglycans (12,27). As shown in Fig. 8, addition of heparin to the culture medium of cytokine-stimulated RGM1 cells had little, if any, effect on arachidonate release. The highest dose of heparin used (1 mg/ml) has been more than sufficient to remove most of the group IIA sPLA 2 bound to the cell surface of various mammalian cells (12,27).
We also tested if exogenously added recombinant rat group IIA sPLA 2 could elicit arachidonate release from RGM1 cells. As shown in Fig. 9, addition of up to 1000 ng of this enzyme to the culture medium in the absence or presence of cytokine stimulation failed to elicit arachidonate release. High amounts of recombinant group IIA sPLA 2 caused some reduction in the cytokine-dependent arachidonate release for reasons that are not known. Based on the amount of endogenous group IIA sPLA 2 produced by these cells (~4 ng, see above), it can be concluded that exogenously added enzyme is at least 3 orders of magnitude less efficient than endogenously produced enzyme at eliciting arachidonate release.
The effect of the cPLA 2 -α inhibitors on the appearance of group IIA sPLA 2 in the extracellular medium following cytokine stimulation was studied. Pyrrolidine-2 or Wyeth-1 added at the time of cytokine addition caused significant reduction in the amount of sPLA 2 enzymatic activity measured in the culture medium taken at 24 h (Fig. 10). Near-complete inhibition was seen with 6 µM pyrrolidine-2, whereas 6 µM Wyeth-1 was less effective at reducing the amount of sPLA 2 secreted to the medium (Fig. 10). These cPLA 2 -α inhibitors do not directly inhibit sPLA 2 enzymatic activity; when they were added to the fluorometric assay of recombinant rat group IIA sPLA 2 at twice the concentration present in the cell culture studies, no inhibition was observed. The dose-response data for the ability of pyrrolidine-2 and Wyeth-1 to reduce the amount of group IIA sPLA 2 released from cytokine-stimulated RGM1 cells (Fig. 10) correspond to the dose response for the effect of these compounds on arachidonate release (Fig. 6A). We also found that treatment with 6 µM pyrrolidine-2 or Wyeth-1 leads to an 85 and 70% decrease, respectively, in the amount of sPLA 2 enzymatic activity in washed cells (in the cell lysate) (not shown). Thus, inhibition of the amount of sPLA 2 in the culture medium and in the cells by the cPLA 2 -α inhibitors is approximately the same.
We examined the expression of group IIA sPLA 2 in RGM1 cells at the mRNA level by using quantitative PCR. As shown in Fig. 11A, stimulation of RGM1 cells with TGF-α and IL-1 led to a dramatic increase in the amount of group IIA sPLA 2 mRNA (6-fold in the first 12 h and then 460-fold after 24 h; all data are given as mRNA level relative to the minus cytokine/12-h mRNA level). Thus, group IIA sPLA 2 mRNA levels track with the amount of sPLA 2 enzymatic activity seen in the RGM1 culture medium (Fig. 3). Both pyrrolidine-2 and Wyeth-1 dose-dependently reduced group IIA sPLA 2 mRNA levels (Fig. 11B). Pyrrolidine-2 was more potent than Wyeth-1; 6 µM leads to an ~100-fold reduction, whereas 6 µM Wyeth-1 reduces the mRNA level by 3-fold. This is consistent with pyrrolidine-2 being more potent than Wyeth-1 at reducing arachidonate release and group IIA sPLA 2 enzymatic activity in the culture medium (Figs. 6 and 10). Indoxam at 20 µM had no effect on the group IIA sPLA 2 mRNA level (Fig. 11B).
We also carried out quantitative RT-PCR for all known rat sPLA 2 s (groups IB, IIC, IID, IIE, IIF, III, V, X, and XIIA). As shown in Fig. 11A (top panel), group XIIA sPLA 2 mRNA was detected along with IIA sPLA 2 mRNA, but its level was much lower than the level of IIA at 24 h, and its level was not influenced by cytokines or the presence of cPLA 2 inhibitors. For both group IIA and XIIA sPLA 2 , gel analysis of the PCR mixtures showed the expected size DNA band, and no other reaction products were observed. The PCR efficiencies for GAPDH and groups IIA and XIIA sPLA 2 s were measured to be 1.71, 1.84, and 1.81, respectively. mRNA for all of the other rat sPLA 2 s was not observed in RGM1 cells either before or after 24 h of cytokine stimulation (not shown). However, as shown in Fig. 11 (bottom panel), the correct size PCR bands were observed for all of these sPLA 2 s when cDNA from a mixture of rat tissues was used as a target.
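The amplification efficiencies quoted above (1.71-1.84 per cycle) enter relative quantification as the base of the exponential; a hedged sketch of an efficiency-corrected (Pfaffl-type) fold-change calculation is shown below. The Ct values used are placeholders, not the measured data.

```python
# Sketch: efficiency-corrected relative quantification (Pfaffl-type ratio),
# ratio = E_target**dCt_target / E_ref**dCt_ref,
# using the reported amplification efficiencies; the Ct values are placeholders.
E_GAPDH, E_IIA = 1.71, 1.84       # amplification factor per cycle (from the text)

def fold_change(e_target, ct_target_ctrl, ct_target_treat,
                e_ref, ct_ref_ctrl, ct_ref_treat):
    d_ct_target = ct_target_ctrl - ct_target_treat
    d_ct_ref = ct_ref_ctrl - ct_ref_treat
    return (e_target ** d_ct_target) / (e_ref ** d_ct_ref)

# Placeholder Ct values: a ~10-cycle shift in group IIA sPLA2 with stable GAPDH
print(fold_change(E_IIA, 32.0, 22.0, E_GAPDH, 18.0, 18.0))   # ~445-fold
```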
We also used RNA interference to knock down the level of rat group IIA sPLA 2 and cPLA 2 -α in RGM1 cells followed by arachidonate release studies. As shown in Fig. 12A, group IIA sPLA 2 RNAi led to an ~3-fold decrease in the amount of sPLA 2 enzymatic activity secreted into the culture medium after cytokine treatment. Western blot analysis shows a significant reduction in cPLA 2 -α protein after transfection of cells with the cPLA 2 -α RNAi (Fig. 12B). As shown in Fig. 12C, knockdown of cPLA 2 -α led to a small but statistically significant reduction in the amount of group IIA sPLA 2 secreted into the culture medium. Fig. 12C shows that RNAi knockdown of cPLA 2 -α or group IIA sPLA 2 led to a reduction in the amount of cytokine-stimulated arachidonate release. These results are consistent with the studies with PLA 2 inhibitors described above but are less dramatic, presumably because a higher level of PLA 2 inhibition was achieved with inhibitors compared with the level of knockdown achieved with RNAi.
Studies with Transfected HEK293 Cells-Arachidonate release from both cytokine-stimulated RGM1 and sPLA 2 -transfected HEK293 cells shares the common phenomenon of insensitivity to the potent sPLA 2 inhibitor Me-indoxam (Fig. 6A) (12). Based on this and a large body of additional data, we suggested that group IIA sPLA 2 contributes to arachidonate release prior to externalization from HEK293 cells. In this study, we have used brefeldin A to block cellular secretion and to study the consequence of this on cytokine-stimulated arachidonate release. Prior to blocking secretion of group IIA sPLA 2 with brefeldin A, we wanted to remove enzyme that was already externalized, which is mainly bound to the cell surface by electrostatic interaction with anionic cellular components including, but not limited to, proteoglycan (12,27). Two independently obtained HEK293 cell clones that overexpress human group IIA sPLA 2 were used in parallel. Cells were labeled with [ 3 H]arachidonic acid and then washed with complete medium containing soluble heparin to remove cell surface-bound sPLA 2 (12,27). Cells were then treated with cytokines with and without brefeldin A. Heparin was omitted from this brefeldin A treatment period because we have shown that heparin causes inhibition of arachidonate release by an unknown and nonspecific mechanism (12).
FIGURE 11. A, detection of sPLA 2 mRNAs by RT-PCR. After quantitative RT-PCR analysis, a portion of the reaction mixture was examined by gel electrophoresis (top panel). RGM1 cells were stimulated with TGF-α and IL-1 (+Cyt) or without cytokines (−Cyt) for 24 h. NTC is the no-template control (bottom panel). cDNA from a mixture of rat tissues was used as a positive control for PCR. B, the level of mRNA for either group IIA or XIIA sPLA 2 is plotted relative to the signal obtained after culturing the cells for 12 h in the absence of cytokines. In some experiments, pyrrolidine-2 (Pyr-2), Wyeth-1 (Wy-1), or indoxam (Indox) was present during the 24-h cytokine stimulation period. The reduction seen with 6 µM Wyeth-1 is 3-fold, and the error bar is so small that it does not display well. Data shown are the averages and standard deviations of two independent experiments.
When cells were first cleared of their extracellular human group IIA sPLA 2 by heparin treatment, the amount of sPLA 2 enzymatic activity secreted into the culture medium taken after the 4-h cytokine stimulation period and in the presence of brefeldin A was 20 ± 5% of that measured in the absence of brefeldin A (triplicate analysis). For this measurement, cells were treated with heparin-containing medium for a 30-min period after the 4-h cytokine treatment period so that total extracellular enzyme (free and cell surface-associated) was measured. As expected, these results show that brefeldin A treatment inhibits the secretion of most of the human group IIA sPLA 2 from HEK293 cells. Despite this inhibition of enzyme release, [ 3 H]arachidonate release was not affected by brefeldin A treatment, being 4.35 ± 0.34% release without brefeldin A and 4.92 ± 0.13% with 15 µg/ml brefeldin A (triplicate analysis).
Medium containing 1 M NaCl has also been used to remove cell surface-bound sPLA 2 (27). When cells were cultured in complete medium with IL-1 with or without brefeldin A for 4 h and then washed with complete medium containing 1 M NaCl, the amount of sPLA 2 enzymatic activity measured with or without brefeldin A was not statistically different, and the activity was 5-6-fold higher than that measured when heparin was used to wash the cells. These results show that 1 M NaCl is causing disruption of cells leading to the release of extracellular and intracellular pools of sPLA 2 . Thus, studies with 1 M NaCl cannot be interpreted.
DISCUSSION
A key feature of this study is that we study arachidonate release in a cell line that has not been treated exogenously with sPLA 2 s or transfected with sPLA 2 s. Fig. 13 shows a summary of the pathway of arachidonate release and PGE 2 production in cytokine-stimulated RGM1 cells. TGF-α and IL-1 act synergistically to promote PGE 2 production, as first reported by Akiba et al. (15,28), and we also report that these two cytokines lead to a synergistic release of arachidonate. A delayed phase of arachidonate release (12-24 h) occurs concomitantly with a large increase in the amount of mRNA for group IIA sPLA 2 and the appearance of group IIA sPLA 2 protein in the culture medium. Group XIIA sPLA 2 is also present in these cells, but its level does not change with cytokine treatment. The other eight known mammalian sPLA 2 s are not expressed in RGM1 cells, at least based on mRNA analysis. Group XIIA sPLA 2 probably does not contribute to arachidonate release because recombinant mouse and human group XIIA sPLA 2 s are not inhibited by Me-indoxam (25), and the structurally related compound indoxam substantially inhibits arachidonate release from RGM1 cells (Fig. 6A).
It is generally thought that cPLA 2 -α is responsible for arachidonate release in agonist-stimulated mammalian cells. However, the results with highly specific and potent inhibitors show that both cPLA 2 -α and group IIA sPLA 2 contribute to arachidonate release and PGE 2 production in cytokine-stimulated RGM1 cells. Because both of these enzymes can liberate arachidonate from the sn-2 position of cellular phospholipids, it is not possible to know from the present results what fraction of the total liberated arachidonate is contributed by each enzyme (i.e. factor α in Fig. 13).
Treatment with TGF-α, but not IL-1, leads to cPLA 2 -α phosphorylation and presumably its activation. However, cPLA 2 -α cannot act alone in RGM1 cells because arachidonate release and PGE 2 production are substantially blocked by the sPLA 2 inhibitor indoxam. The fact that both the early and late phases of arachidonate release are blocked by indoxam shows that group IIA sPLA 2 is involved in both phases of lipid mediator production. The failure to observe significant amounts of group IIA sPLA 2 in the culture medium after 12 h of cytokine stimulation could be due to binding of this highly basic protein to anionic proteoglycans on the external face of the plasma membrane (12). Perhaps only after sPLA 2 protein accumulates to a higher level during the late phase does it accumulate in the culture medium. The data also show that group IIA sPLA 2 does not act alone to liberate arachidonate because two structurally distinct and potent cPLA 2 -α inhibitors substantially block arachidonate release. TGF-α or IL-1 did not lead to an alteration in the amount of cPLA 2 -α protein. Finally, IL-1 synergizes with TGF-α to cause maximal induction of group IIA sPLA 2 mRNA and protein by an unknown mechanism even though IL-1 does not lead to cPLA 2 -α phosphorylation or induction of cPLA 2 -α expression.
Studies with the cPLA 2 -α inhibitors also show that blocking cPLA 2 -α action substantially prevents the induction of group IIA sPLA 2 mRNA and protein. Previous studies using less specific and less potent cPLA 2 -α inhibitors have shown a similar requirement for cPLA 2 -α in the induction of sPLA 2 s (29,30). The mechanism behind this cPLA 2 -α-dependent alteration of sPLA 2 mRNA level remains to be determined.
Referring to Fig. 13, if α is 0 (arachidonate release is solely due to the direct action of cPLA 2 -α on phospholipids), the role of group IIA sPLA 2 is to somehow augment the action of cPLA 2 -α. On the other hand, if α is 100 (arachidonate release is solely due to the direct action of group IIA sPLA 2 on phospholipids), the role of cPLA 2 -α is to augment the action of sPLA 2 . It is also possible that both lipases are directly responsible for arachidonate release.
A remarkable finding in this study is the fact that group IIA sPLA 2 acts in cytokine-stimulated RGM1 cells prior to its release into the culture medium. The results with the structurally very similar compounds Me-indoxam and indoxam clearly show that not all sPLA 2 inhibitors block the action of these enzymes in cells despite the fact that both compounds bind to rat group IIA sPLA 2 with affinities that are indistinguishable experimentally. Furthermore, addition of recombinant rat group IIA sPLA 2 to RGM1 cells in the presence and absence of TGF-α and IL-1 did not lead to a detectable increase in arachidonate release above that measured in the absence of added enzyme, even though the amount added was 2-3 orders of magnitude more than the amount of group IIA sPLA 2 produced by these cells. Finally, trapping of secreted group IIA sPLA 2 with extracellular heparin did not reduce the amount of arachidonate released. The only reasonable hypothesis is that group IIA sPLA 2 is acting prior to externalization from cells and that indoxam permeates RGM1 cells more than Me-indoxam does.
Although we did not directly measure the ability of indoxam and Me-indoxam to cross membranes of RGM1 cells, we did compare the ability of these two compounds to cross the tight-junction monolayer of Caco-2 cells. This permeability assay is commonly used to predict the passage of drug candidates across the intestinal epithelial cell layer. The Caco-2 cell data indicated that both Me-indoxam and indoxam display low permeability, with indoxam being about 3-fold more permeable than Me-indoxam. In the case of Me-indoxam, we prepared this compound in the 14 C-labeled form and showed by direct cell uptake studies that it did not penetrate into HEK293 cells (12). Such studies were not carried out with indoxam because of the difficulty in preparing radiolabeled compound. It appears that indoxam does not rapidly cross the membranes of RGM1 cells based on the fact that less inhibition of arachidonate release was seen when the compound was added at 12 h rather than at 0 h. This suggests that the compound requires several hours for cell uptake, which is consistent with the low permeability seen with the Caco-2 cell assay. We also found that indoxam did not block arachidonate release from HEK293 cells transfected with human group IIA sPLA 2 regardless of whether the compound was preincubated with cells or not (data not shown). This shows that indoxam is not permeable to all mammalian cell types. It further underscores the need to discover a highly cell permeable sPLA 2 inhibitor that can be used as a more reliable tool to probe the role of sPLA 2 s in cellular functions such as arachidonate release. We are currently trying to develop such compounds.
Although we did not examine whether externalized rat group IIA sPLA 2 is taken up into RGM1 cells, we must conclude that arachidonate and PGE 2 production are not the result of an sPLA 2 re-uptake mechanism. If the sPLA 2 passes through the culture medium prior to action, it will become exposed to cell-permeable and cell-impermeable inhibitors (i.e. Me-indoxam) and become inhibited. Thus, the lack of inhibition seen with Meindoxam rules out sPLA 2 action after possible cell re-uptake.
Although we provided a vast amount of data in our earlier paper showing that arachidonate release when cells are transfected with human group IIA sPLA 2 requires the action of the enzyme prior to externalization from the cells (12), it occurred to us that the use of brefeldin A provides additional insight into the location of sPLA 2 action. In the present study, we found that treatment of transfected HEK293 cells with brefeldin A blocked most of the release of the human group IIA sPLA 2 in the culture medium and yet did not result in a reduction in the amount of arachidonate release. This provides additional strong evidence that the sPLA 2 acts in these cells prior to externalization. Studies with RGM1 cells and brefeldin A were not carried out simply for historical reasons.
FIGURE 13. Arachidonate release and PGE 2 production in TGF-α- and IL-1-stimulated RGM1 cells. See text for discussion.
It seems that workers in the sPLA 2 field assumed for many years that arachidonate release by these enzymes would be the result of extracellular action of sPLA 2 . In fact, all of the studies prior to ours on the assessment of indole-type sPLA 2 inhibitors such as indoxam by workers at Lilly and others reported the blockade of arachidonate release when inhibitor was added to cells that were treated with exogenously added group IIA sPLA 2 (for example see Ref. 31). In these studies, relatively large amounts of group IIA sPLA 2 were used, typically >1-2 µg of enzyme per 1 ml of culture medium. Such concentrations of group IIA sPLA 2 are found in synovial fluid during inflammation and in serum during pancreatitis, for example (32). Indeed, in our hands, Me-indoxam fully blocked arachidonate release from HEK293 cells treated with the large amount of recombinant human group IIA sPLA 2 needed to elicit release from the cells by exogenous enzyme (12). However, these sPLA 2 concentrations are much higher than those that accumulate in cultured RGM1 and HEK293 cells stimulated with cytokines, for example. It must be noted that the concentration of sPLA 2 that accumulates in the culture medium of cultured cells depends on the volume of culture medium used; thus, the concentrations may not be physiologically relevant. However, the present studies show that relatively small amounts of intracellularly acting rat group IIA sPLA 2 can cause increased arachidonate and PGE 2 production in cultured RGM1 cells. The findings are very significant as they suggest that inhibitors of sPLA 2 that are cell-permeable should be used to best test the role of these enzymes in inflammation. The recent clinical trial failure of compounds structurally similar to indoxam to produce beneficial results in rheumatoid arthritic and sepsis patients (33,34) may in part be due to issues of poor cell permeability.
The cross-talk between cPLA 2 -α and sPLA 2 seen in RGM1 cells has also been observed in other mammalian cell types. Studies with sPLA 2 -transfected HEK293 cells and PLA 2 inhibitors as well as with mesangial cells from wild type and cPLA 2 -α-deficient mice clearly show that the sPLA 2 can act together with cPLA 2 -α to maximize arachidonate release (11,12). Also, treatment of human neutrophils with exogenous group V sPLA 2 leads to cPLA 2 -α-dependent leukotriene production as shown by studies with cPLA 2 -α-deficient mouse neutrophils (35). A final study showing clear coordinate action of these PLA 2 s is the recent work of Arm and co-workers (10) showing that arachidonate release in zymosan-stimulated mouse peritoneal macrophages is reduced about 50% in cells from group V sPLA 2 -deficient mice, and yet arachidonate release is fully blocked in the same cell/agonist system when cells are isolated from cPLA 2 -α-deficient mice (5). We have shown that Me-indoxam does not modify arachidonate release levels in zymosan-stimulated mouse peritoneal macrophages. 4 So it seems that the intracellular action of group V sPLA 2 works together with cPLA 2 -α leading to maximal arachidonate release in these cells.
Dynamics of the Rhomboid-like Protein RHBDD2 Expression in Mouse Retina and Involvement of Its Human Ortholog in Retinitis Pigmentosa*
Background: RHBDD2 is distantly related to rhomboids, membrane-bound proteases. Results: In retina, RHBDD2 exists as a monomer in all cells throughout life and a homotrimer only in cone outer segments; a mutation in RHBDD2 possibly leads to retinitis pigmentosa. Conclusion: RHBDD2 plays important roles in development and normal retinal function. Significance: This is the first characterization of RHBDD2 and its association with retinal disease. The novel rhomboid-like protein RHBDD2 is distantly related to rhomboid proteins, a group of highly specialized membrane-bound proteases that catalyze regulated intramembrane proteolysis. In retina, RHBDD2 is expressed from embryonic stages to adulthood, and its levels show age-dependent changes. RHBDD2 is distinctly abundant in the perinuclear region of cells, and it localizes to their Golgi. A glycine zipper motif present in one of the transmembrane domains of RHBDD2 is important for its packing into the Golgi membranes. Its deletion causes dislodgment of RHBDD2 from the Golgi. A specific antibody against RHBDD2 recognizes two forms of the protein, one with low (39 kDa; RHBDD2L) and the other with high (117 kDa; RHBDD2H) molecular masses in mouse retinal extracts. RHBDD2L seems to be ubiquitously expressed in all retinal cells. In contrast, RHBDD2H seems to be present only in the outer segments of cone photoreceptors and may correspond to a homotrimer of RHBDD2L. This protein consistently co-localizes with S- and M-types of cone opsins. We identified a homozygous mutation in the human RHBDD2 gene, R85H, that co-segregates with disease in affected members of a family with autosomal recessive retinitis pigmentosa. Our findings suggest that the RHBDD2 protein plays important roles in the development and normal function of the retina.
large loop between transmembrane domains 1 and 2 and an extended cytoplasmic N terminus (13,14). Little is known about their function. Other rhomboid-like homologues that lack catalytic residues but do not cluster with the iRhoms are scattered across evolution.
Rhomboids have been implicated in a variety of human diseases as a result of their distinct functions. As a component of the mechanism of parasitic invasion in toxoplasmosis and malaria, these cell surface molecules obligate human pathogens to invade the host cells by forming irreversible junctures between the plasma membrane of the invading parasite and the host cell (15-17). As another example, the mitochondrial Rhomboid-7 is required to cleave the precursor forms of both Pink1 and Omi, proteins that are mutated in Parkinson disease (18). In addition, an identified mutation in PARL may be associated with insulin resistance and type 2 diabetes (19). Recently, two independent groups reported that iRhom2 is required for tumor necrosis factor release in mice. iRhom2 interacts with TNFα-converting enzyme and regulates shedding of soluble, active TNFα. Thus, iRhom2 may represent an attractive therapeutic target for treating TNFα-mediated diseases (20,21).
The focus of this study is the novel intramembrane, growth/development-associated, rhomboid-like protein RHBDD2.
Here we describe the distribution of Rhbdd2 transcripts in mouse tissues and their developmental expression as well as that of the RHBDD2 protein in retina. We show that two forms of RHBDD2 are present in mouse retina and demonstrate that one of these forms is specifically expressed in cone photoreceptor outer segments. Most importantly, we report a novel recessive missense mutation in the RHBDD2 gene, which maps to the human 7q11 locus, in members of a family affected with autosomal recessive retinitis pigmentosa (arRP). This mutation co-segregates with the disease and links the RHBDD2 gene to the arRP phenotype.
EXPERIMENTAL PROCEDURES
Animals-C57BL/6J mice were obtained from our colonies bred from stock originated at The Jackson Laboratory (Bar Harbor, ME). Mouse eyes were quickly enucleated after death, and the retinas were dissected and frozen. In addition, other tissues were obtained from these animals and immediately frozen. All experiments were conducted in accordance with the approved UCLA Animal Care and Use Committee protocol and the Association for Research in Vision and Ophthalmology statement for the use of animals in ophthalmic and vision research.
Embryo Collection-Pregnant C57BL/6 mice were sacrificed at 12.5, 15.5, and 18.5 days postcoitum, and embryos were removed from the embryonic sac. Theiler staging was used to confirm the phenotype of the 12.5-, 15.5-, and 18.5-day-postcoitum embryos. Embryonic heads were fixed and embedded in OCT for cryosectioning and immunohistochemistry, whereas embryonic eyes were collected for RNA extraction and quantitative PCR analysis. For the former experiments, embryo heads were fixed in 4% paraformaldehyde in 10 mM phosphate-buffered saline (PBS) overnight. The next day, the tissue was rinsed three times with PBS and infiltrated with 30% sucrose in PBS overnight at 4°C with gentle rotation. The following day, embryonic heads were placed overnight in a 1:1 ratio of 30% sucrose in PBS to OCT. Heads were then embedded in OCT and stored at −80°C. The embedded tissue was sectioned (10 µm) and used for immunohistochemistry experiments.
RNA Isolation and Northern Blot Analysis-Total RNA was extracted from mouse retinas (TRIzol, Invitrogen). Poly(A+) RNA was obtained with an mRNA purification kit (Oligotex, Qiagen, Valencia, CA). RNA was quantified with a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE) and stored at −80°C. Two micrograms of poly(A+) RNA were electrophoresed on 1.2% denaturing formaldehyde-agarose gels and transferred to a Hybond N+ membrane (Amersham Biosciences). An 821-bp mouse retinal cDNA fragment containing part of the coding region and 3′-UTR was amplified with forward (5′-CTCATCTGACTCCAAGTTATC) and reverse (5′-AGAAGCCCAGGAGCCTCAAGAC) primers from the sequence of Rhbdd2, labeled with [ 32 P]dCTP, and used as a probe to hybridize the retinal and multiple mouse tissue (Ambion, Austin, TX) Northern blots.
Quantitative Real Time RT-PCR and Statistical Analyses-To analyze the levels of Rhbdd2 transcripts, quantitative PCR was performed on first strand cDNAs. Primers were designed using Primer 3 Internet software (Whitehead Institute, Massachusetts Institute of Technology, Cambridge, MA) and synthesized by Integrated DNA Technology (San Diego, CA). The primer pair for total Rhbdd2 mRNA was: forward, 5′-TCCCTCAGACCTCCTTCCTC and reverse, 5′-GTCAGATGAGGGTGGCAACTC. The primer pair for the long 3′-UTR Rhbdd2 mRNA was: forward, 5′-GATGTGGGCTCTTAGGCAAG and reverse, 5′-ATCTAGGGGCAGTCCATCAG. Total RNA was isolated from embryonic eyes and retinal samples using the RNAqueous 4PCR protocol (Ambion) and treating twice with DNase I. RNA concentrations were measured using the NanoDrop ND-1000 spectrophotometer, and 1 µg of RNA from each age was reverse transcribed. The quantitative PCRs were performed in SYBR Green Master Mix with the corresponding primer sets using a quantitative PCR system (MX3000P, Stratagene, La Jolla, CA). The melting curves of the PCR products were monitored to ensure that a single melting curve was obtained for each of the samples. To analyze the real time PCR data, signals from each sample were normalized to values obtained for β-actin cDNA, which was assayed simultaneously with the experimental samples. Analysis of variance using Monte Carlo bootstrapping was performed to analyze the possibility of a significant time effect in the expression of the total and long 3′-UTR Rhbdd2 RNAs. Post hoc analyses were applied to determine significant changes in the total mRNA levels among the different age points. Bootstrap analysis was used because of the lack of a normal distribution in our data.
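As a rough illustration of the Monte Carlo bootstrap ANOVA described above, the sketch below resamples residuals to build a null distribution for the F statistic; the expression values are synthetic placeholders and the resampling scheme is one of several reasonable choices, not necessarily the one used in the study.

```python
# Sketch: bootstrap test for an overall time effect on normalized Rhbdd2 levels.
# The expression values below are synthetic placeholders grouped by age.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
groups = {                                   # normalized expression, placeholders
    "P1":  [1.0, 1.1, 0.9],
    "P5":  [1.0, 0.95, 1.05],
    "P14": [2.4, 2.6, 2.2],
    "P30": [1.4, 1.5, 1.3],
}
data = [np.array(v) for v in groups.values()]
f_obs = f_oneway(*data).statistic

# Null distribution: resample within-group residuals with replacement
residuals = np.concatenate([v - v.mean() for v in data])
n_boot, count = 5000, 0
for _ in range(n_boot):
    fake = [rng.choice(residuals, size=len(v), replace=True) for v in data]
    if f_oneway(*fake).statistic >= f_obs:
        count += 1
print(f"F = {f_obs:.2f}, bootstrap p ~ {count / n_boot:.4f}")
```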
Expression Construct-To produce the full-length protein, the full-size Rhbdd2 cDNA was amplified via RT-PCR from mouse retinal RNA using a primer set (5′-GGATCCATGGCGGCCCCGGGCCCCGCGAGT) and (5′-GAATTCCTTAGGGCATGGCTACCTTGGAAGA) containing the desired restriction enzyme recognition sites (BamHI and EcoRI). The digested and purified PCR product (PCR gel extraction kit, Qiagen) was cloned into the pcDNA4/HisMax C vector between its BamHI/EcoRI cloning sites and sequenced for confirmation. The CMV promoter of this vector drove expression of the RHBDD2 fusion protein with the Xpress epitope at its N terminus.
Site-directed Mutagenesis to Modify the Glycine Zipper Motif-All glycine zipper motif mutant cDNAs were created using site-directed mutagenesis (QuikChange II site-directed mutagenesis kit, Stratagene) to change the glycines in the zipper to leucines. S185L, G189L, G193L, and G197L are the single leucine substitution mutants (SSMs). In the triple substitution mutant, the Gly-189, Gly-193, and Gly-197 were collectively replaced with leucines. To delete 27 nucleotides from the glycine zipper motif sequence, two overlapping primers were made, each having 36 nucleotides. The forward primer consisted of 9 nucleotides upstream of the motif and 27 nucleotides downstream of the motif. The reverse primer had 27 nucleotides upstream of the motif and 9 nucleotides downstream. The two primers had an overlapping region of 18 nucleotides lacking the 9 amino acid residues of the motif. The PCR mixture was set up according to the QuikChange II XL site-directed mutagenesis kit. Once the PCR was completed, 10 units of DpnI were added, and the reaction mixture was incubated at 37°C for 1 h to degrade the original plasmid used for PCR. EcoRI and BamHI sites were introduced into the 5′- and 3′-ends of the wild-type and mutant cDNAs to directionally subclone them into the pEGFP-N3 vector (BD Biosciences Clontech). The RHBDD2-EGFP fusion constructs were verified by sequencing. The list of specific primers used in site-directed mutagenesis is presented in Table 1.
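For illustration, the sketch below constructs a QuikChange-style complementary primer pair for a single codon substitution of the kind listed in Table 1; the template fragment, codon position, and flank length are placeholders rather than the actual RHBDD2 sequence or the 36-nucleotide design used for the motif deletion.

```python
# Sketch: building a complementary mutagenic primer pair for a single codon
# substitution (e.g., a Gly -> Leu change in the glycine zipper).
# The template sequence, codon index, and flank length are placeholders.
TEMPLATE = "ATGGCTTCCGGAGTTCTGGGCGCTATCGGCTTACTGGGCCTGATC"  # placeholder cDNA fragment
CODON_START = 18        # 0-based index of the codon to mutate (placeholder, a GGC)
NEW_CODON = "CTG"       # leucine
FLANK = 15              # nucleotides kept on each side of the mutated codon

def complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))

mutant = TEMPLATE[:CODON_START] + NEW_CODON + TEMPLATE[CODON_START + 3:]
fwd = mutant[CODON_START - FLANK:CODON_START + 3 + FLANK]
rev = complement(fwd)[::-1]     # reverse complement of the same region
print("forward:", fwd)
print("reverse:", rev)
```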
Transient and Stable Transfections-HEK293 cells obtained from the American Type Culture Collection (Manassas, VA) were grown and transfected using the pcDNA4/HisMax-RHBDD2, each of the pEGFP-N3-RHBDD2 expression constructs, and the PolyFect transfection reagent (Qiagen). All experiments included the pcDNA4/HisMax plasmid without the insert as an internal normalization control. Transiently transfected cells were harvested for analysis of newly synthesized protein. Linearized plasmid was used to achieve stable transfection of HEK293 cells, and positive clones were selected (Zeocin, Invitrogen).
Protein Extraction and Immunoblot Analysis-Nuclear and cytoplasmic protein extracts from transfected cells, mouse retinas, and mouse brains were prepared using nuclear and cytoplasmic extraction reagents (NE-PER, Pierce). For total protein extraction, the harvested cells and tissue samples were lysed with lysis buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 10% (w/v) glycerol, 100 mM NaF, 10 mM EGTA, 1 mM Na 3 VO 4 , 1% (w/v) Triton X-100, 5 µM ZnCl 2 ) and the Complete EDTA-free protease inhibitor mixture (Roche Applied Science) and then centrifuged at 10,000 × g for 20 min at 4°C. Fifty micrograms of the extracted proteins were separated by SDS-PAGE on 7.5% gels (Pierce). Blots were incubated with primary antibody (1:7000 dilution) and secondary anti-rabbit IgG antibodies labeled with alkaline phosphatase (1:5000 dilution; Vector Laboratories). Western blots were visualized with either of two kits (the Amplified-Alkaline Phosphatase kit from Bio-Rad or the enhanced chemiluminescence (ECL) kit from Amersham Biosciences).
Immunostaining of Cultured Cells-Transfected HEK293 cells on coverslips were permeabilized with 100% methanol for 6 min at −20°C, rinsed three times in PBS, blocked with 3% BSA in PBS containing 0.1% Triton X-100 (PBST) for 45 min, and incubated for 2 h with 7Rc rabbit polyclonal antibody (1:200 dilution) and subsequently for 1 h with fluorescein- or rhodamine-conjugated goat anti-rabbit antibody (1:200; Santa Cruz Biotechnology, Santa Cruz, CA). Next, the transfected cells were incubated for 1 h with a mouse monoclonal anti-Xpress-FITC antibody (1:100 dilution; Invitrogen). The cells were then washed three times in PBST, stained with propidium iodide or 4′,6-diamidino-2-phenylindole (DAPI) for detection of nuclei, and viewed with fluorescence microscopy.
Immunostaining for Golgi markers GM130 and TGN38 was carried out following the same protocol described above using antibodies from Abcam (Cambridge, MA) at 1:200 and 1:500 dilutions, respectively. The co-localization of these Golgi markers and wild-type and mutant RHBDD2 was quantified using the program Olympus Fluoview Version 3.0. Transfected cells were outlined to quantify the degree of overlap. Co-localization was reported as a Pearson's coefficient, which measures how well the pixels from two different color channels fit to a linear relation. The Pearson's coefficient can have values from −1 to 1 with 1 indicating absolute overlap, 0 showing that there is no co-localization, and −1 representing an inverse correlation (relatively high scores on one variable paired with relatively low scores on the other variable).
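The Pearson's coefficient described above is simply the pixelwise correlation between the two channels within the outlined cell; a minimal sketch of that calculation is given below, using synthetic stand-in images rather than the acquired micrographs.

```python
# Sketch: Pearson's co-localization coefficient between two fluorescence channels,
# computed over pixels inside a cell mask. The arrays are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
green = rng.random((256, 256))                    # e.g., RHBDD2-EGFP channel
red = 0.8 * green + 0.2 * rng.random((256, 256))  # partially co-localized marker
mask = np.ones_like(green, dtype=bool)            # outline of the transfected cell

def pearson_coloc(ch1, ch2, mask):
    a, b = ch1[mask].ravel(), ch2[mask].ravel()
    return np.corrcoef(a, b)[0, 1]

print(f"Pearson's coefficient: {pearson_coloc(green, red, mask):.2f}")
```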
Immunostaining of Mouse Retinal Sections-All tissue processing, image acquisition, and analysis procedures were as described (23-25). Central sections (10 µm thick; three nonadjacent sections per slide) were located 200-400 µm from the optic nerve. Before immunofluorescence staining, the sections were incubated with the same primary and secondary antibodies described in the previous section. To show the co-localization of RHBDD2 with both cone and Müller cells, sections were also incubated with cone opsin antibodies (short wavelength-sensitive opsin (S opsin) (OS2) and midwavelength-sensitive opsin (M opsin) (COS1) kindly provided by Dr. Agoston Szél, Semmelweis University Medical School, Budapest, Hungary) and glutamine synthetase antibody (1:5000; Sigma-Aldrich G2781, rabbit polyclonal), respectively, with the corresponding secondary antibodies. Confocal images were acquired using a Leica TCS SP2 laser-scanning confocal microscope (Leica Microsystems, Exton, PA). Images were processed using Adobe Photoshop software. The results shown are representative of five separate immunolabeling experiments from three different adult C57BL/6 mouse retinas.
For unknown reasons, retinal sections incubated with GM130 antibody in the conditions used for cultured HEK293 cells showed no results. Therefore, we obtained retinal sections following the fixation times and temperatures described by Kerov et al. (26). Briefly, the enucleated mouse eyes were poked through the cornea with a 21-gauge needle and fixed with 4% paraformaldehyde in PBS for 1 h at 25°C. After fixation and removal of the cornea and lens, the eyes were hemisected, submersed in a 30% sucrose solution in PBS for 5 h at 48°C, then embedded in OCT, frozen, and sectioned. The 10-µm sections were incubated for 2 h with 7Rc (1:200 dilution) and GM130 antibody (1:50 dilution) and subsequently for 1 h with Alexa Fluor 488 goat anti-rabbit or Alexa Fluor 568 goat anti-mouse secondary antibodies (1:500; Invitrogen).
Screening the DNA of Patients with Retinal Degenerations for Variants in the RHBDD2 Gene-The DNA of 110 unrelated patients of mixed ethnicities who were diagnosed with various retinal diseases such as cone dystrophy, cone-rod dystrophy, Stargardt disease, and autosomal dominant and recessive retinitis pigmentosa was screened for variants in the RHBDD2 gene. The DNA of 95 control individuals with similar ethnic distribution (56% white, 15% black, 22% Asian, and 7% Hispanic) was also screened for RHBDD2 variants. PCR products resulting from the targeted amplification of RHBDD2 coding sequences were subjected to dideoxy sequencing and analyzed using Sequencher software (Gene Codes Corp., Ann Arbor, MI).
RESULTS
Cloning and Characterization of the Rhbdd2 cDNA-To identify novel cone photoreceptor genes, we carried out representational difference analysis using mRNAs from adult cone degeneration (cd) and normal dog retinas as described previously (27). One of the clones isolated in the screen was predicted to encode a part of a protein containing the rhomboid domain. With this clone as a probe, we then screened a dog retinal cDNA library and isolated an ~2.0-kb cDNA clone that appeared to have the entire open reading frame of the predicted rhomboid domain-containing protein. Using public database information, the sequence of this clone was found to be highly homologous to sequences of mRNAs from three different species: mouse (GenBank accession number BC018360, identified as rhomboid domain-containing 2 (Rhbdd2)), rat (GenBank accession number XM_341058, called rhomboid veinlet-like 7 (Rhbdl7)), and human (GenBank accession numbers AF226732 and BC069017, identified as NPD007 and RHBDD2, respectively). We then isolated the corresponding mouse cDNA from a mouse retinal cDNA library. Because the database mouse mRNA sequence and that of our isolated mouse clone did not have an in-frame termination codon upstream of the first methionine and the N termini of the human AF226732 and rat clones encode predicted proteins that are 45 and 27 amino acids longer than BC018360, respectively, we needed to confirm whether our mouse clone was full length or a fragment. We searched for cDNAs with a longer 5′-end using 5′ RACE (rapid amplification of cDNA ends) (Invitrogen) but were unable to isolate a longer transcript after multiple attempts. Therefore, we concluded that this mRNA contains the complete ORF that encodes a protein of 361 amino acids. We also identified another mouse Rhbdd2 transcript with the same ORF but with a very long 3′-UTR and isolated two human transcripts that are splicing variants produced by insertion of 122- or 126-bp fragments into the first intron of RHBDD2. Both isoforms contain a shorter ORF than that of BC069017 and code for a human protein of 223 instead of 364 amino acids. Using an InterProScan algorithm, we found that RHBDD2, an uncharacterized 39-kDa protein that is highly conserved between species, contains five transmembrane helices; however, TMpred parameters revealed seven possible transmembrane helices.
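As a rough, much simplified stand-in for TMpred/InterProScan, the sketch below scans a sequence with a Kyte-Doolittle hydropathy window to flag candidate transmembrane stretches; the demo sequence is a placeholder, not RHBDD2.

```python
# Sketch: a minimal Kyte-Doolittle hydropathy scan to flag candidate transmembrane
# stretches, far cruder than TMpred/InterProScan but illustrating the principle.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5, 'E': -3.5,
      'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8,
      'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def tm_candidates(seq, window=19, cutoff=1.6):
    """Return (start, end, score) for windows whose mean hydropathy exceeds cutoff."""
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score > cutoff:
            hits.append((i + 1, i + window, round(score, 2)))
    return hits

demo = "MSTLLKR" + "LIVGALLGFIVALIGAVLL" + "DKEQNPR" + "AVILGLLGAGLLSVVGGIF" + "KRSTE"
for start, end, score in tm_candidates(demo):
    print(f"candidate TM helix: residues {start}-{end}, mean hydropathy {score}")
```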
Distribution of Rhbdd2 mRNA in Mouse Tissues and within the Retina-Northern blots from different mouse tissues hybridized with the mouse Rhbdd2 cDNA show a major transcript of ~2.0 kb in all tissues studied with very intense signals in brain, kidney, testis, and ovary and weaker bands in heart, liver, spleen, embryo, and lung (Fig. 1A). Two other minor transcripts, one of ~2.5 kb with tissue distribution identical to that of the 2.0-kb band, and a second of ~4.0 kb found mainly in brain, liver, kidney, and ovary but hardly present in testis, are also observed. The retinal Rhbdd2 mRNA (lane 11) displays the same hybridization pattern (Fig. 1A). The 2.0- and 4.0-kb transcripts are in agreement with the length of the isolated Rhbdd2 cDNAs, but the 2.5-kb mRNA had not been described previously.
In situ hybridization using a Rhbdd2 antisense riboprobe to assess the expression of Rhbdd2 mRNA in mouse retina shows its presence in cell bodies located in all layers of the retina (Fig. 1B) as well as in the inner segments of photoreceptor cells.
To define the developmental expression pattern of Rhbdd2 mRNA in the mouse retina, we performed qRT-PCR on mRNA from eyes of E12.5, E15.5, and E18.5 embryos and in retinas from mice at birth (P1), P5, P8, P10, P14, P21, and P30 and compared the levels at all ages with that observed at P1 (Fig. 1C, black bars). The obtained data were subjected to repeated one-way analysis of variance to check for a significant effect of time on mRNA expression. Success in the Omnibus test indeed suggested significance in the effect of time on mRNA transcription (F statistic = 7.44). Our results indicate that the amount of transcript at E12.5 is higher than at the more advanced embryonic stages, and it is very similar to that of total mRNA measured in the adult retina at P30 (Fig. 1C). Rhbdd2 mRNA levels after birth (P1) and until P5 are further reduced from those at E15.5 and E18.5 and are not significantly different (p = 0.93). At P8 and P10, Rhbdd2 mRNA levels are double that at P5 (p = 0.016). The strongest expression is observed at P14 (p = 0.023), and it decreases thereafter with a significant change between P21 and P30 (p = 0.015). We also measured by qRT-PCR the levels of the long 3′-UTR transcript at all the different postnatal developmental stages (striped bars) and determined the levels of the short 3′-UTR mRNAs (white bars) by subtraction from the total mRNA. Except for the P1 and P30 samples, both short and long 3′-UTR transcripts seem to contribute similarly to the total amount of Rhbdd2 mRNA.
Expression of the RHBDD2 Protein in Mouse Retina during Development-A rabbit antiserum against RHBDD2 (referred to as 7Rc) was generated using a synthetic C-terminal peptide of RHBDD2 (ProSci Inc.). To determine the specificity of this antiserum, we prepared an Xpress-tagged RHBDD2 expression vector in the pcDNA4/HisMax plasmid (Invitrogen), which contained the predicted open reading frame of Rhbdd2 cDNA inserted downstream of the polyhistidine (His 6 ) tag and the Xpress epitope region (Fig. 2A). This vector was transfected into HEK293 cells. Immunoblotting analysis of protein extracts from these cells demonstrated that 7Rc strongly bound an ~44-kDa protein, also recognized by the Xpress antibody (Fig. 2B). The molecular mass of this protein, ~5 kDa heavier than that estimated from the RHBDD2 primary structure (39 kDa), may result from the additional 32-amino acid vector sequence that was fused to the N-terminal region of RHBDD2 as well as from possible modification(s) of the protein. In addition, the immunocytochemical localization of RHBDD2 in transfected HEK cells using 7Rc or the Xpress antibody was identical (thus, only 7Rc staining is shown in Fig. 2C). The staining was strong in a distinct perinuclear region, and it resembled a bead necklace around the nucleus. Further confirmation of the specificity of 7Rc for RHBDD2 was obtained by the disappearance of all labeling in RHBDD2-transfected HEK cells incubated with 7Rc that had been preabsorbed with the polypeptide used to generate it.
Because the same Rhbdd2 transcripts were observed in all mouse tissues studied, we initially examined the expression of the RHBDD2 protein only in retina and brain. As shown in Fig. 2D, 7Rc recognized the 39-kDa protein band (RHBDD2 L ; monomer) in both tissues at P21 and P90 and an additional protein band of 117 kDa (RHBDD2 H ; trimer) only in retinal extracts. We then tested other mouse tissues for RHBDD2 H but did not detect it (data not shown).
To determine the expression of RHBDD2 from mouse retina during development, we obtained retinal protein extracts from E12.5, E15.5, and E18.5 embryonic eyes and from retinas of P5, P10, P14, P21, P30, and P90 animals and subjected them to Western blot analyses using 7Rc. Interestingly, the RHBDD2 L and RHBDD2 H proteins recognized by the 7Rc antibody respond in an opposing manner as mice age. The data indicate that although RHBDD2 H is not seen at embryonic stages (Fig. 2E), only increases from P5 to P30, and remains at similar levels at P90 (Fig. 2F), the levels of RHBDD2 L increase from E12.5 to E15.5 (Fig. 2E) and are thereafter similar until P14, decrease between this time and P30, and remain the same until P90 (Fig. 2F). Furthermore, the total level of the two retinal RHBDD2 proteins (RHBDD2 L plus RHBDD2 H ) is higher at P14 compared with other ages. This is in agreement with the expression profile of mRNA in developing retina (Fig. 1C).
Golgi Localization of RHBDD2-As seen in Fig. 2C, 7Rc strongly labeled the region where the Golgi apparatus resides in Rhbdd2-transfected HEK293 cells. Therefore, we used Golgi markers to analyze the localization of RHBDD2 in this structure of the cells. Fig. 3A shows that the RHBDD2 signal exhibits a high degree of overlap with the signal of the cis-Golgi matrix protein GM130 (Pearson's coefficient, 0.95 ± 0.01; top panel). On the other hand, 7Rc staining appears to overlap to a much lesser extent with that of anti-TGN38, a trans-Golgi marker protein (Pearson's coefficient, 0.32 ± 0.03; bottom panel). These results suggest that the overexpressed RHBDD2 is localized predominantly to the cis-side of the Golgi apparatus and may function in the early stages of the endosomal sorting pathway rather than during the later stages.
It has been observed before that upon treatment of cells with brefeldin A (BFA), a Golgi-destabilizing agent, the components of the cis-Golgi matrix intersperse in the cytoplasm and appear as puncta (28). Furthermore, other studies have demonstrated that Golgi-specific spectrins associate with Golgi membranes in a BFA-sensitive manner (29). Therefore, as an alternative approach to evaluate the Golgi localization of RHBDD2, we tested its sensitivity to BFA to examine whether BFA treatment would disrupt the association of RHBDD2 with the Golgi complex as in the case of spectrins. Fig. 3B shows that treatment of Rhbdd2-transfected HEK293 cells with BFA for 30 min results in disruption of the stacked Golgi structure, inducing a rapid spreading of RHBDD2, as demonstrated by staining with anti-GM130. This result indicates that RHBDD2 associates with the Golgi membranes and that this association is BFA-sensitive.
The structural integrity of the Golgi apparatus is compromised during mitosis as part of the normal inheritance process. We analyzed the localization of RHBDD2 in the cells during mitosis to determine in which compartment it resides. Exponentially growing and stably transfected HEK293 cells were fixed and double stained with 7Rc and anti-GM130 antibodies. Fig. 3C shows co-localization of RHBDD2 and GM130 even when the Golgi apparatus is dispersed during anaphase (Pearson's coefficient, 0.741 ± 0.008). In telophase, the Golgi apparatus reformed in each of the daughter cells that display a high degree of RHBDD2 and GM130 co-localization (Pearson's coefficient, 0.852 ± 0.008). These results suggest that RHBDD2 associates with Golgi membranes during the cell cycle.
As mentioned above, prediction algorithms revealed that RHBDD2 possesses potential transmembrane domains, one of which, either the fifth or the sixth (depending on the algorithm used), contains a conserved glycine zipper motif. A rigorous analysis of sequence patterns has indicated that the GXXXG (GG4 motif) is the most highly biased sequence motif in naturally occurring transmembrane domains. Its glycine residues are usually separated by three large (Val, Leu, and Ile) or small (Gly, Ala, and Ser) residues in the transmembrane domains (30). Among glycine zipper motifs, the (G/A/S)XXXGXXXG and GXXXGXXX(G/S/T) are the most significant (31). Analysis of RHBDD2 sequences from different species revealed that they have glycine zipper motifs that are highly conserved (Fig. 4). Furthermore, it is interesting to note that the identified sequences contain the most common glycine zippers, which have two or more glycines and in which glycine occupies the central position (31), but that a separate set that includes those RHBDD2 sequences from human, chimpanzee, orangutan, mouse, rat, and other mammals exhibits extended glycine zipper motifs (Fig. 4, top panel); chicken and Xenopus tropicalis also contain these extended motifs. Of interest, some membrane proteins with homooligomeric bundle structures have extended glycine zipper motifs containing four glycine residues (GXXXGXXXGXXXG). It has been reported that mutation of one or more of these conserved glycine residues is in many cases deleterious to function (31).
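The motif definitions above can be searched for directly with simple patterns; the sketch below scans a placeholder sequence for (G/A/S)XXXGXXXG, GXXXGXXX(G/S/T), and extended GXXXGXXXGXXXG zippers, allowing overlapping matches.

```python
# Sketch: scanning a protein sequence for glycine zipper motifs of the types
# discussed above. The demo sequence is a placeholder, not RHBDD2 itself.
import re

PATTERNS = {
    "(G/A/S)xxxGxxxG": r"(?=([GAS]...G...G))",
    "GxxxGxxx(G/S/T)": r"(?=(G...G...[GST]))",
    "extended GxxxGxxxGxxxG": r"(?=(G...G...G...G))",
}

def find_zippers(seq):
    for name, pat in PATTERNS.items():
        for m in re.finditer(pat, seq):        # lookahead allows overlapping hits
            yield name, m.start() + 1, m.group(1)   # 1-based start position

demo = "MKTAVSLIGLVAGSWLGAIAGLLLGKEGSPQR"
for name, pos, match in find_zippers(demo):
    print(f"{name} at residue {pos}: {match}")
```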
To further analyze whether the glycine zipper motif in RHBDD2 is important for the packing of the protein into membranes, we used the pEGFP-N3 vector (BD Biosciences Clontech) to generate several mutant RHBDD2 proteins fused to the N terminus of EGFP and allow their localization in vivo. The mutant proteins had Ser-185, Gly-189, Gly-193, and Gly-197 individually or collectively (Gly-189, Gly-193, and Gly-197) substituted with leucine, disrupting the glycine zipper packing interface. We also generated a fusion protein that had amino acid residues 189-197 deleted from the RHBDD2 protein. HEK293 cells were then transfected with pEGFP-N3 containing the Rhbdd2 or mutant cDNAs, and the expressed proteins were analyzed either by the green signal from EGFP or by immunocytochemistry using 7Rc and GM130 antibodies. Our results revealed that EGFP fused to RHBDD2 did not affect localization of the RHBDD2 protein (Fig. 5A) and that each of the Ser/Gly-to-Leu SSMs behaved differently (Fig. 5B). Changes in protein distribution as well as the lack of co-localization with GM130 were most notable for the G189L (Pearson's coefficient, 0.22 ± 0.01) and the G193L mutants (Pearson's coefficient, 0.31 ± 0.02). In contrast, the S185L and G197L mutants did not induce significant changes (Pearson's coefficients, 0.70 ± 0.04 and 0.69 ± 0.02, respectively). Surprisingly, RHBDD2 carrying the three Gly-to-Leu substitutions had a stronger co-localization with GM130 (Pearson's coefficient, 0.42 ± 0.01; Fig. 5C, TSM) than the G189L and G193L SSMs. The mutant resulting from the deletion of the amino acid residues 189-197 did not co-localize with GM130 (Pearson's coefficient, 0.14 ± 0.07; Fig. 5C, GZMD), but because we noticed that the green fluorescence excitation of this motif-deleted protein faded very fast, we used 7Rc to detect its localization (Fig. 5C, GZMD). In both cases, a dispersion of RHBDD2 in the cytoplasm was observed, indicating a dislodgment of RHBDD2 from the Golgi apparatus and suggesting that RHBDD2 might be anchored on this organelle. To support this idea, we next fractionated by ultracentrifugation the subcellular organelles/membrane vesicles of retinal tissue using a discontinuous sucrose gradient as described by Gangalum et al. (32). Western blots of the fractions were then incubated with 7Rc and GM130 antibodies. The results indicate that only the S 2 fraction (the Golgi membrane fraction) contains an appreciable amount of RHBDD2 protein and that upon recentrifugation of S 2 a significant amount of RHBDD2 can be found in the stacked Golgi fraction 3 (Fig. 6A, SGF 3 ), further supporting that RHBDD2 is localized to the Golgi apparatus of retinal cells.
Immunohistochemistry experiments confirmed that RHBDD2 and GM130 co-localize in the retina. Double stained images obtained using antibodies against RHBDD2 and GM130 (Fig. 6B) show the co-localization of these proteins in the inner segment of photoreceptor cells (Pearson's coefficient, 0.48 ± 0.03), mainly in cone inner segments, as well as in the perinuclear region.
Specific Pattern of RHBDD2 Immunoreactivity in Developing and Adult Mouse Retina-We used immunohistochemistry to follow the expression of RHBDD2 in mouse retina during development from E12.5 to adulthood (Figs. 7 and 8). Our results show that the distribution of RHBDD2 is more complex than our previous in situ hybridization data suggested (Fig. 1B). We detected a weak but clearly positive signal in E10.5 (data not shown), the youngest tissue in which we examined RHBDD2 expression, although it is possible that the RHBDD2-positive cells appear at an even younger age. A stronger signal is observed at E12.5, and the intensity of the staining increases in E15.5 and E18.5 retinas. At these embryonic stages, all cells are positive for RHBDD2 (Fig. 7A). The gradual, age-dependent appearance of the RHBDD2 protein in the layers of the retina begins at birth and continues throughout development (Fig. 7B). At postnatal day 1, a small number of positive cells are observed at the level of the ganglion cell layer (GCL) and in the inner edge of the ventricular zone of the retina. Between P1 and P5, the labeling of RHBDD2 shows a similar spatial pattern, but by P8 to P11, immunostaining has progressively increased and entirely occupies the GCL and inner nuclear layer. However, no significant RHBDD2 immunoreactivity is observed in the photoreceptor cell bodies of the outer nuclear layer (ONL). At P21, RHBDD2 is localized to all nuclear layers of the retina (Fig. 7B).
In the ONL, the RHBDD2 signal is primarily present in the cytoplasm of the cell soma adjacent to the plasma membrane of the photoreceptor cells, and it is different from that observed in embryonic and early retinal developmental stage cells where it is localized both in the nucleus and cytoplasm. The consistent perinuclear signal present in the later ages may suggest that compartmentalization is an essential process for the normal functioning of RHBDD2 in more mature retina. Interestingly, a small number of strongly stained RHBDD2 cells with very short outer segments (OS) and an irregular distribution first appear in the outer retina at P12 (Fig. 8A). To determine the identity of these RHBDD2-positive cells, we double labeled the photoreceptors of the mouse retina at P13-14 with both 7Rc and antibodies against S opsin, which is abundant in the cones of the mouse ventral retina, or M opsin, which is concentrated in the dorsal retina cones (33). Our results showed that RHBDD2 immunoreactivity consistently co-localized with S and M opsins in the OS of both types of cones (Fig. 8B, only M opsin is shown).
The characteristics of RHBDD2 staining in the ONL were seen more thoroughly with confocal microscopy images of adult retinal sections. Fig. 8C shows that all photoreceptor cells stained positive for RHBDD2; however, a small number of short OS were stained with 7Rc. These OS corresponded to cells that after differentiation and migration aligned near the outermost region of the ONL; their nuclei contained several clumps of irregularly shaped heterochromatin. The positional and morphological characteristics of these cells are hallmarks of mature cone photoreceptors (34,35). As also seen in Fig. 8C, RHBDD2 immunostaining was restricted to the cytoplasm surrounding the nuclei, but we were unable to determine whether it was or was not on the nuclear membranes.
In addition to the photoreceptors, the staining pattern of RHBDD2 in adult retinas showed strong RHBDD2 expression in the GCL and inner nuclear layer. In the latter, a small number of cells were more intensively labeled than others. Double labeling with 7Rc and the Müller cell marker glutamine synthetase antibody revealed co-localization of RHBDD2 and glutamine synthetase in these same cells, identifying them as Müller cells (Fig. 8D, arrows). Very weak co-localization also was present on some radial processes of Müller cells in the ONL (arrowheads).
A Missense Mutation in RHBDD2 Co-segregates with Disease in a Family Affected with arRP-A systematic mutational analysis was initiated to determine whether there are disease-causing mutations in the human RHBDD2 gene. We screened the DNA of a small group of patients with different types of retinal degeneration by direct DNA sequence analysis of all exons and exon-intron boundaries and found variants of RHBDD2 in arRP patients. One of them, proband RP176 (Fig. 9A), had a G to A transition in exon 2 that changed arginine to histidine at codon 85 (Fig. 9B). This R85H allele co-segregated with the clinical phenotype in the patient's family. In addition, 95 control individuals (190 chromosomes) did not carry the R85H mutation. Arg-85 is located in the intracellular loop between the second and third predicted transmembrane domains of the RHBDD2 protein and is highly conserved among human, chimpanzee, cow, dog, mouse, rat, and many more species (Fig. 9C). Interestingly, in another unrelated individual, two different RHBDD2 transcripts were identified: a wild-type mRNA and a second transcript containing a 7-nucleotide deletion in exon 2 of RHBDD2 that leads to a frameshift and the creation of a premature stop codon. This individual also had a common single nucleotide polymorphism (SNP) at the −13 position upstream of exon 2. Unfortunately, no other family members are available for testing to determine whether the deletion is linked to any disease phenotype. We excluded the known disease-causing genes PDE6A, PDE6B, RPE65, ABCA4, and TULP1 from all the families studied using linkage analysis. All patients' DNAs were also screened for the CRX, RDS, NR2E3, and RHO genes, and no mutations were found.
DISCUSSION
Herein, we identified a cDNA encoding RHBDD2, a novel rhomboid-like protein, in a screen for mRNAs present in cone photoreceptors. However, we found Rhbdd2 transcripts in all major mouse tissues. The expressed RHBDD2 is an intramembrane protein with a localization restricted to the Golgi apparatus in transfected HEK293 cells. Retina is the only tissue where we have detected two forms of the RHBDD2 protein, RHBDD2 L and RHBDD2 H , which most likely is an RHBDD2 L trimer. Inspection of databases indicates that RHBDD2 is conserved among species.
This study is the first to examine the developmental profile and cellular localization of RHBDD2 in mouse retina. We found that RHBDD2 is expressed in early life (with immunoreactivity observed at E10.5) and becomes widespread and differentially distributed after all the retinal layers have formed. The lowest levels of RHBDD2 were seen at the beginning of postnatal life, and the maximum level was observed at P14.
At P1, RHBDD2 immunoreactivity is strongest in the GCL, and it spreads during development from the inner to the outer retina. In the ONL, the first cells stained by 7Rc are detected at P12 and are somewhat scattered. By P21, RHBDD2 is all over the retina, and it appears in many neuronal and glial Müller cells. Quantification of the expression of the two different forms of RHBDD2 indicated that RHBDD2 L minimally changed during development until P14 and then decreased as the age of the mouse progressed, whereas RHBDD2 H increased from P5 to P30 and remained close to this level thereafter.
One of the interesting issues concerning RHBDD2 expression is the appearance of homomultimers of this protein. Our results provide evidence that RHBDD2 L exists as a monomer in the inner retina neurons and the soma of photoreceptors, whereas RHBDD2 H seems to be found exclusively as a polymer in cone OS. Indeed, the levels of RHBDD2 H increase precisely at the time in development when the outer segments are elongating. We consider that RHBDD2 H is a trimer of RHBDD2 L because of its molecular mass and because it is recognized by 7Rc, a specific antibody for both forms of the RHBDD2 protein. RHBDD2 H detection is not appreciably altered by increasing mercaptoethanol/DTT, treating with different concentrations of SDS before electrophoresis, or boiling the samples before loading them on the gel. RHBDD2 H is present in S and M cone outer segments. It is most likely that the RHBDD2 L monomer is conserved between species.
Because the biochemical functions of RHBDD2 have not been determined and establishing them will require a more complete understanding of the protein, currently, we can only speculate that RHBDD2 H may be involved in the growth and maturation of the cone OS, whereas RHBDD2 L may be an active participant in the early development of retinal cells. The generation of RHBDD2 knock-out mice will provide evidence for the role of RHBDD2 in these processes.
As shown in Figs. 3 and 5, the recombinant RHBDD2 localized to the Golgi apparatus of HEK293 cells. Furthermore, in retinal sections, co-localization of RHBDD2 and the Golgi marker GM130 is clearly seen at the inner segment of photoreceptors and perinuclear region of ganglion cells (Fig. 6B), which is an obvious indication of the existence of RHBDD2 in the Golgi apparatus. This organelle is considered to be a distribution and shipment center for proteins and lipids inside the cell as well as for their export out of the cell. In recent years, discoveries have indicated that the Golgi complex can also be considered as a center of operations where cargo sorting/processing, basic metabolism, signaling, and cell fate decisional processes converge (36). Interestingly, cells overexpressing RHBDD2 had a compacted Golgi similar to that observed in non-transfected cells with normal reticular morphology. While performing transfection experiments with constructs containing different RHBDD2 glycine zipper mutations, we noticed that for some expressed mutant proteins the staining pattern of RHBDD2 was altered, but the Golgi remained intact. Because it is known that glycine zipper motif structures are directly involved in helix interactions, the existence of the GXXXG sequence in RHBDD2 suggests that this motif may not only be involved in the high affinity association of transmembrane helices but also in the association of RHBDD2 with the Golgi apparatus. In addition, our results suggest that the glycine zipper motif may play an important role in the oligomerization of RHBDD2 in OS of photoreceptors. The observation that RHBDD2 localization to the Golgi complex is glycine zipper motif-dependent even though there are no links between the function of RHBDD2 and the Golgi apparatus generates the appealing hypothesis that the formation of the trimer in outer segments of adult retina also relies on the glycine zipper motif.
Our studies also have provided evidence supporting the role of the RHBDD2 gene, which is localized on chromosome 7q11, in the pathogenesis of retinitis pigmentosa, an inherited retinal disease that results in the loss of photoreceptors and is characterized by pigment deposits predominantly in the peripheral retina. To date, 43 genes and 50 loci have been identified for autosomal dominant, autosomal recessive, and X-linked forms of non-syndromic retinitis pigmentosa (RetNet, The Retinal Information Network). We found a homozygous mutation in the RHBDD2 gene, R85H, that co-segregates with disease in a family affected with arRP but that does not exist in 95 controls. No other gene of the several that we investigated showed any mutation in the affected members of this family. Thus, it is possible that the RHBDD2 gene is a new disease-causing gene and that 7q11 is a new locus for arRP. Several studies suggest that arginine residues may form salt bridges with acidic residues to stabilize protein structure, bind cofactors, or interact with DNA (37)(38)(39). The observed substitution of His for Arg at position 85 may thus disrupt the structure of the RHBDD2 protein.
Recently, Abba et al. (40) determined that the RHBDD2 mRNA and its expressed protein are significantly elevated in breast carcinomas as compared with normal breast tissue samples or benign breast lesions, with overexpression predominantly observed in advanced stages, and that silencing of RHBDD2 expression results in a decrease of cell proliferation. The authors showed a strong association between high RHBDD2 expression and decreased overall survival, relapse-free survival, and metastasis-free intervals in patients with primary estrogen receptor-negative breast carcinomas. They suggested that RHBDD2 overexpression behaves as an indicator of poor prognosis and may play a role in facilitating breast cancer progression. Also, they recognized two RHBDD2 alternatively spliced mRNA isoforms expressed in breast cancer cell lines (40). We have found that these two isoforms are also expressed in normal retinal tissue samples. In addition, we identified a third isoform of RHBDD2 from human retinal samples, the rhomboid domain-containing 2, transcript variant 3.
Although Abba et al. (40) present solid mRNA data indicating strong correlation between expression of RHBDD2 transcripts and breast carcinomas, their Western blot and immunohistochemistry results are contrary to our results on RHBDD2. First, the polyclonal antibody that these authors used was raised against peptides that were synthesized based on the RHBDD2 protein sequence (NCBI Reference Sequence NP_065735) corresponding to residues 30-43, 253-266, and 393-406. However, the record for NP_065735 has been permanently removed from the database because this sequence is a nonsense-mediated mRNA decay candidate. Second, these three peptide sequences are encoded by the ORF of human NPD007 mRNA (GenBank accession number AF226732) but are not found in the RHBDD2 protein sequence. NPD007 mRNA codes for a protein with an N terminus 45 amino acids longer than RHBDD2. Peptide 30-43 is located in those 45 amino acids. Also, the NPD007 sequence is missing two nucleotides after position 724 (G and C), which leads to a reading frame change and the creation of a new protein. The 253-266 and 393-406 peptides are located in the diverging sequence downstream of the two missing nucleotides. This is why these peptides are not present in the RHBDD2 sequence. Third, from the Western blot analysis of normal and breast cancer cell lines using their polyclonal antibody, the authors identified a 47-kDa product that they assume is encoded by isoform1 as well as a smaller protein of ~40 kDa, a product of isoform2. However, the sequences presented by the authors in Fig. 3D of their study (40) contain an ORF encoding 223 amino acids for isoform1 and 364 amino acids for isoform2. Consequently, isoform1 would code for a much smaller protein than that encoded by isoform2. Therefore, it is not clear what proteins the authors are detecting with the polyclonal antibody that they generated. These proteins are definitely different from RHBDD2.
Very recently, a study has been published describing the possible role of RHBDD2 in colorectal cancer progression. The authors indicate that expression of RHBDD2 significantly increases in advanced stages of colorectal cancer. Also, they found a significant increase of RHBDD2 mRNA and protein after treatment with the chemotherapy agent 5-fluorouracil (41).
In conclusion, we have characterized RHBDD2, a ubiquitously expressed transmembrane rhomboid-like protein associated with the Golgi apparatus that exists as a monomer in all retinal cells and as a trimer only in retinal photoreceptor outer segments, mainly in cones. Our genetic findings indicate that a mutation in the RHBDD2 gene may lead to arRP. The association of RHBDD2 abnormalities with diseases like neurodegenerative retinitis pigmentosa and cancer is intriguing. The unknown function of the RHBDD2 protein in mammals together with the undiscovered molecular pathways in which RHBDD2 may participate in human disorders further substantiates future studies to determine the potential role of RHBDD2 in normal cells and tissues. | v2 |
2022-04-29T15:19:47.186Z | 2022-04-25T00:00:00.000Z | 248437319 | s2orc/train | Construction of 3D Design Model of Urban Public Space Based on ArcGIS Water System Terrain Visualization Data
On the premise of being familiar with ArcGIS Server technology, we build the architecture of the entire platform, including the basic support layer, data layer, service platform layer, and application layer, and set up the complete environment of the platform. We make electronic maps through ArcMap, and collect, organize, and improve spatial data and attribute data, so as to achieve satisfactory accuracy and visual comfort. This study implements various map services under the Dojo framework, including basic map operations, information display and query, marker points, eagle eye diagrams, measurement, printing, and other functions, and uses JavaScript technology to improve the user experience. We publish the various services through ArcGIS Server and realize fast and error-free invocation of each service. Based on rainfall-runoff theory, ArcGIS software was used to study the hydrological information of the watershed and to determine the catchment area threshold and hydrological response units. Combined with the GIS spatial analysis method, a numerical simulation of rainfall and runoff in the study case area was carried out, and the variation of the annual rainfall-runoff coefficient was obtained. This study selects an area where stock planning was first proposed as the object of this research. Briefly, we introduce the construction of three-dimensional public space in a certain area, select thirteen typical three-dimensional public spaces as representatives for publicity evaluation, and explore their existing problems, mainly including the lack of adaptability of space functions, the lack of diversity in space design, privatization of operation management, low level of public perception, etc. Finally, in response to the publicity problems of the three-dimensional public space in a certain area, a targeted three-dimensional public space optimization strategy is proposed at the four levels of planning policy, urban design, management subject, and user subject.
Introduction
At present, the research and application of GIS mainly focus on describing the plane with reference to two-dimensional space [1]. However, with the continuous improvement of computer hardware performance, the rapid development of computer software technology, and the gradual improvement of 3D GIS theory and technology, geographic information systems represented by 3D GIS continue to emerge [2]. Compared with the traditional two-dimensional plane GIS, a three-dimensional GIS system gives people a more real, natural, and intuitive feeling. Many mature commercial GIS systems have added 3D modules to meet the increasingly complex functional analysis needs of various industries, such as ArcGIS 3D Analyst, MapInfo Engage3D, and Skyline Globe. These 3D GIS modules can provide functions such as terrain analysis, spatial query, 3D roaming, and flight animation in a 3D spatial reference environment by creating 3D terrain data and overlaying and processing satellite remote sensing image data [3]. At the same time, in order to improve the three-dimensional sense and visual effect of 3D scenes, many 3D modules of GIS software have added support for 3D models, so that GIS can not only display a wide range of 3D terrain data but also add 3D models of houses, roads, dams, bridges, etc., thus making the 3D scene more realistic.
Cities continue to carry out single-dimensional, high-density development or urban renewal while ignoring the low efficiency of the existing urban public space, further deepening the contradiction in land use [4]. The lack of public space limits residents' public activities, and residents' resistance to this contradiction is manifested in informal behaviors that are increasingly common in urban public spaces [5]. From the "informal" perspective of the city, the forms of informal urban public space and the rich and diverse social behaviors of different groups of users are observed and recorded, and the intangible logic of informality is analyzed.
Through theoretical research on informality and observation of the informality of public space, this study puts forward design goals and principles for urban public space from the perspective of urban "informality" and proposes specific design strategies as a supplement to the construction of formal urban public space. From this new perspective, it studies the informal phenomena of urban public space, pays attention to different groups and different lifestyles in the city, respects the diversity of the city, and reflects a spirit of humanistic care. Through observation and analysis of the informal phenomena of public space, their internal organizational logic and elastic adjustment mechanisms are summarized, and an urban public space design strategy from the perspective of urban "informality" is proposed as a beneficial supplement to formal urban public space design. This has practical significance for the renewal of urban public space and the optimization of existing (stock) space.
This study systematically describes the realization of a mobile public GIS service platform based on ArcGIS Server, focusing on the production of electronic maps, the release of map services, the specific realization of map functions, and the modules of the whole system. With the help of the hydrological analysis tools of GIS spatial analysis technology, the water system information of the basin is extracted. Based on the extracted watershed water system, the sub-watersheds are delineated and the selected case area is exported using ArcGIS. Based on the three-dimensional public space publicity evaluation model, this study evaluates the publicity of the three-dimensional public space in a certain area and summarizes its publicity problems, including the lack of adaptability of spatial functions and the lack of diversity in space design.
There are six specific aspects as follows: privatization of operation and management, low level of public perception, an obvious sense of distance at spatial boundaries, poor disclosure of spatial information, and unoptimistic use of space.
Related Work
The development of GIS abroad has entered a mature stage, and a large number of commercial GIS software packages have been successfully applied in water conservancy, electric power, petroleum, transportation, land management, and other industries [6]. Especially in the past ten years, with the gradual improvement of 3D GIS theory and 3D visualization technology, the fields of geology, oceanography, minerals, and urban planning have successfully entered the 3D era [7]. ESRI's ArcGIS software family not only provides mature industry solutions in 2D GIS but also offers a full-featured 3D Analyst extension module and 3D GIS visualization environments based on Scene and Globe. Skyline proposed a 3D GIS solution from the Globe perspective, displaying massive spatial data in a three-dimensional interactive way and providing true three-dimensional GIS analysis functions not only in large-scale scenes but also in small-scale, high-precision sand-table environments [8].
Erdas' 3D GIS visualization analysis tool supports interactive operation of virtual scenes, querying spatial information such as 3D terrain surface coordinates and height, and viewing 3D object attributes and geometric information [9]. Stereo Analyst software can acquire 2D and 3D geographic information data from a variety of different data sources without going through a DEM. CC-GIS is a three-dimensional modeling software package based on photogrammetric data developed by ETH Zurich in Switzerland. The software uses consistent symbols to construct surface models of complex objects and manages three-dimensional data in relational databases through V3D data structures based on 3DFDS models. MapInfo is a desktop geographic information system software solution for data visualization and information mapping [10]. The 3D GIS tool Engage3D offers grid surface creation, grid display control, 3D analysis and query, terrain image browsing, real-time interactive navigation, 3D walkthrough animation, and other functions. Cult3D is web-based 3D software built on streaming 3D technology, which provides basic operations such as rotation, zoom in, and zoom out to view 3D models from different angles [11]. The cities of Faro and New York have established WebGIS urban tree management systems to support tree loss assessment and post-disaster tree cleanup after snowstorms [12]. Major League Baseball pioneered the use of spatial analysis for sports-related problems by using GIS to analyze the distribution of radio station networks for the Kansas City Royals and St. Louis Cardinals baseball games [13]. Kansas City, Missouri, used GIS to centrally manage the data and business of departments at all levels and realized an enterprise-level, city-wide 3D GIS platform [14].
Related scholars have used the 3D extension module of ArcGIS to realize the dynamic simulation of the flood inundation evolution process and surrounding scenes in ArcScene [15]. Researchers have developed a flood inundation analysis system based on ArcScene, which realized the classification and statistics of losses in submerged areas, animated simulation of flood inundation, and water depth queries at any point [16]. Based on DEMs, relevant scholars have performed flood inundation analysis and calculation under given water level conditions through computer algorithms such as recursive and iterative algorithms [17]. Researchers have used the characteristic line method of the Courant scheme to obtain the temporal and spatial relationships between the water level and discharge of the river channel and, based on this, studied a GIS-based flood evolution visualization method [18]. Relevant scholars have realized the dynamic simulation of large-scale water surfaces through dynamic texture technology, based on a section-based channel boundary search algorithm, and developed a 3D visualization simulation system that can effectively simulate floods in a river basin [19].
According to the experience of western developed countries and the development of China since the reform and opening-up policies were introduced, rapid urbanization is usually accompanied by the transformation of urban spatial form and the intensification of various conflicts within society. It has been found that deterioration of the quality of urban public life due to improperly designed physical spaces is quite common [20].
Research on urban public space is also a hotly discussed topic in the field of architecture today, most commonly concerning the design, construction, and development of generally recognized, typed public spaces such as urban squares, pedestrian streets, and parks. Further research also discusses the necessity and possibility of the existence of urban public space and involves social issues beyond technical ones, such as human behavior. The purpose is to reveal latent public space needs in the interpersonal environment. Since the users of urban public space cover almost all social classes, and its construction and use also involve many aspects of urban operation, the related issues necessarily go beyond the scope of architecture. In fact, many groups, including sociologists, urban managers, architectural planners, designers, and theorists, are exploring and practicing urban public space and presenting diverse research perspectives.
Application of ArcGIS Hydrological Analysis Tools.
A DEM is a simulation of the terrain, but the originally smooth DEM surface contains concave areas (sinks) that do not exist in the real terrain, introduced by grid accuracy and interpolation. The existence of such concave areas leads to unreasonable, or even impossible, simulated surface water flow; therefore, before simulating surface water flow, the original DEM data should be processed to remove spurious peaks and fill depressions, after which the water flow direction is calculated. The water flow direction refers to the direction in which water leaves each grid cell. Hydrologists have studied the influence of terrain factors on hydrological simulation and the divergence of water flow itself extensively and have proposed many different algorithms to determine the flow direction, mainly single-flow-direction methods and multiple-flow-direction distribution methods. The D8 method assumes that there are only 8 possible flow directions for a single grid cell, that is, flow into one of its 8 adjacent cells. The flow direction is determined by the steepest-slope rule within a 3 × 3 DEM window: the slope between the central grid cell and each adjacent cell is calculated, and the direction of the neighbor with the largest slope is taken as the outflow direction of the central cell, that is, its water flow direction. If the slopes within the search range are equal, the search range needs to be expanded outward. Therefore, the processed DEM flow-direction map ultimately contains 8 directions: due east, southeast, due south, southwest, due west, northwest, due north, and northeast.
In the D8 algorithm, the center point of the grid cell is considered to be the runoff center, the river channel is described by a one-dimensional line, the infinite possibilities of the water flow direction are ignored, and the water flow direction in the natural state is summarized into 8 possibilities.
The water flow direction of a grid unit is thus set according to the steepest-gradient principle, that is, the direction of the maximum gradient between grid units is taken as the water flow direction.
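To make the D8 rule concrete, the following is a minimal NumPy sketch of the steepest-descent assignment described above. The ESRI-style direction codes, the cell size, and the handling of flat cells are illustrative assumptions, and the depression-filling step is assumed to have been applied beforehand.

```python
import numpy as np

# ESRI-style direction codes: E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128
OFFSETS = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
           (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def d8_flow_direction(dem, cellsize=30.0):
    """Assign each interior cell the D8 direction of steepest descent."""
    rows, cols = dem.shape
    flowdir = np.zeros((rows, cols), dtype=np.int32)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_slope, best_code = -np.inf, 0
            for dr, dc, code in OFFSETS:
                # diagonal neighbours are farther away than orthogonal ones
                dist = cellsize * (np.sqrt(2) if dr and dc else 1.0)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best_slope, best_code = slope, code
            # 0 marks a pit/flat cell that would need further treatment
            flowdir[r, c] = best_code if best_slope > 0 else 0
    return flowdir

if __name__ == "__main__":
    dem = np.array([[5., 4., 3.],
                    [4., 3., 2.],
                    [3., 2., 1.]])
    print(d8_flow_direction(dem))   # centre cell drains to the SE (code 2)
```

Production tools such as the ArcGIS Flow Direction function add further refinements (tie breaking, expanding the search window, edge handling) that are omitted here for brevity.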
Catchment (flow accumulation) analysis is the process of generating a flow accumulation raster from a determined flow direction raster. The value of each grid cell on the accumulation grid represents the total number of grid cells flowing into that cell from the upstream catchment area. Assuming a unit water volume at each grid point, the number of cells flowing into a given cell, determined from the regional flow-direction data, corresponds to the number of unit water volumes flowing into that cell, which yields the flow through each point. When calculating the cumulative catchment value for each grid cell on the watershed surface, the routing order from the highest point of the watershed to the lowest point must be followed (a sketch of this step is given below). Hydrological analysis is an important application of DEM data: catchments and flow networks generated from a DEM are the main input data for most surface hydrological analysis models. The main content of DEM-based surface hydrological analysis is to extract, with hydrological analysis tools, the water flow direction, flow accumulation, water flow length, river network, and watershed segmentation of the surface runoff model. Through the extraction of these basic hydrological factors and the associated analyses, the hydrological analysis process is completed and the flow of water is reproduced. The schematic diagram of hydrological information extraction is shown in Figure 1. The water flow length refers to the horizontal projection of the maximum slope distance from a point on the ground, along the flow direction, to the start (or end) point of its flow path. The length of water flow directly affects the velocity of surface runoff; therefore, its extraction and analysis are very important. At present, there are two main ways to compute the flow length, namely downstream calculation and upstream calculation. Computing the horizontal projection of the maximum ground distance from each point along the flow direction to the watershed outlet is called downstream calculation; computing the corresponding distance from each point upstream, against the flow direction, to the drainage divide is called upstream calculation.
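As a companion to the D8 sketch above, the following illustrates the highest-to-lowest routing used for flow accumulation. It assumes the flow directions come from a depression-filled DEM, so every cell drains to a strictly lower neighbour; the direction codes and helper names are the same illustrative assumptions as before.

```python
import numpy as np

CODE_TO_OFFSET = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
                  16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def flow_accumulation(dem, flowdir):
    """For every cell, count how many upstream cells drain through it."""
    rows, cols = dem.shape
    acc = np.zeros((rows, cols), dtype=np.int64)
    # Route from the highest cell to the lowest: when a cell is processed,
    # all of its (strictly higher) upstream contributors are already final.
    for idx in np.argsort(dem, axis=None)[::-1]:
        r, c = np.unravel_index(idx, dem.shape)
        code = int(flowdir[r, c])
        if code in CODE_TO_OFFSET:
            dr, dc = CODE_TO_OFFSET[code]
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                # pass this cell's own unit volume plus everything it received
                acc[rr, cc] += acc[r, c] + 1
    return acc
```

Cells whose accumulation exceeds a chosen catchment-area threshold are then flagged as channel cells, which is how the stream network is extracted in the following sections.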
A watershed is the catchment area of a river or water system, from which the river receives its water supply. Waterways of different sizes combine with each other to form a natural river network. Each waterway that constitutes the river network has its own characteristics, catchment range, and outlet. Smaller watersheds combine to form larger ones. The catchment area refers to the total area draining to a given outlet; the outlet is the point where the water leaves the basin, which is the lowest point of the whole basin; and the dividing line between basins is called the drainage divide.
Construction of Urban Public Space Map Service System.
A map service publishes maps to the Web through ArcGIS. Before creating a map service, we need to author a map in ArcMap and then publish the map service on ArcGIS Server, so that users can easily consume these map services in Web applications or in other applications. The account used to log in to the server computer must be given sufficient permissions to manage and control all ArcGIS services. This is configured in the computer management console of the server: we find the local agsadmin group and add the account to it, which gives the administrator account permission to manage ArcGIS services. The configuration file defines the file source, output path, cache path, and image parameters of the map service. Administrators can maximize program performance and server utilization by adjusting the corresponding parameters.
To publish map services with ArcGIS Server Manager, a service definition (.sd) file is required. First, we create the service definition file (.sd) through ArcMap and then log in to ArcGIS Server Manager to publish it. After successful publishing, the map service appears in the corresponding service directory of Manager. The basic map operations are mainly intended to meet users' basic map-browsing requirements, so that users can quickly find and obtain useful information from large maps according to their own needs, perform other related operations, and also obtain a list of specific map service resources. Figure 2 shows the process of publishing map services in ArcGIS Server.
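For readers who prefer to script the same publishing route instead of clicking through Manager, the sketch below uses the classic arcpy.mapping workflow shipped with ArcMap/ArcGIS Server 10.x. The MXD path, service name, and .ags connection file are hypothetical, and this is only one possible way to produce and upload the .sd file, not necessarily the procedure used in this study.

```python
import arcpy

mxd = arcpy.mapping.MapDocument(r"C:\data\city_basemap.mxd")   # hypothetical map document
sddraft = r"C:\temp\city_basemap.sddraft"
sd = r"C:\temp\city_basemap.sd"                                # the .sd consumed by Manager
connection = r"C:\connections\arcgis_on_server.ags"            # hypothetical server connection

# 1) Create the service definition draft from the map document and analyze it
analysis = arcpy.mapping.CreateMapSDDraft(mxd, sddraft, "CityBasemap",
                                          "ARCGIS_SERVER", connection)

if analysis["errors"] == {}:
    # 2) Stage the draft into a service definition (.sd) file
    arcpy.StageService_server(sddraft, sd)
    # 3) Upload/publish the .sd to the server (equivalent to publishing in Manager)
    arcpy.UploadServiceDefinition_server(sd, connection)
else:
    print("Fix these issues before publishing:", analysis["errors"])
```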
A vector map service can easily be combined with other map data because of its transparent background. When using the electronic map, the user can select the vector map service when the page loads and then cover the initially loaded map service by loading the image map service. Before performing map switching operations, the system typically obtains the current display range of the map on the interface from the extent property of the map document.
Generally, the lower right corner of the map has maximum values of X, Y, whereas the lower left corner has minimum values of X, Y. After the map is switched, the system needs to set the display range of the new map on the interface. At this time, the setExtent method is used, which is different from the previous one. Since the slicing scheme used in this electronic map system is ArcGIS Online, there is almost no change in the scale of the map, so there will be no map zooming during the switching process, and the switching process is relatively natural.
For information queries, the system and the server use a no-refresh callback technique to exchange data and perform a partial refresh of the search information box, which significantly improves the response speed of the system.
Estimation of River Network Density in Water System Topography and Evaluation of Water System Accuracy.
The river network density is the length of river per unit area, that is, the total length of natural and artificial channels per unit area, and is calculated as follows:

$$D = \frac{L}{A} = \frac{1}{A}\sum_{w=1}^{\Omega}\sum_{j=1}^{N_w} L_{wj}$$

where D is the river network density; L_wj is the length of the j-th river of the w-th grade, j = 1, 2, ..., N_w; N_w is the number of w-th grade rivers, w = 1, 2, ..., Ω; A is the area of the watershed; and L is the total length of the channels. The water system information is extracted from DEMs of different scales using the corresponding optimal catchment-area thresholds; the parallel water system information is likewise extracted in ArcGIS for each DEM scale, and the lengths of the extracted channels are summed to obtain the total length of the water system. The accuracy of water system extraction in the study area is then evaluated with the following formula:

$$P = \frac{\sum_{i}\sum_{j} L_{ij}}{L}$$

where P is the evaluation accuracy of the parallel water system, L_ij is the length of the parallel water system in segment (i, j), and L is the total length of the extracted water system in the study area. The closer the value of P is to 0, the higher the accuracy of water system extraction, and vice versa.
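A small, self-contained sketch of the two statistics just defined; the channel lengths and the basin area are invented numbers, and the expression for P follows the reconstruction given above.

```python
def river_network_density(channel_lengths_km, basin_area_km2):
    """D = sum of all channel lengths (all orders) divided by the basin area."""
    return sum(channel_lengths_km) / basin_area_km2

def parallel_system_accuracy(parallel_lengths_km, total_length_km):
    """P = total length of spurious parallel channels / total extracted length."""
    return sum(parallel_lengths_km) / total_length_km

lengths = [12.4, 8.1, 5.6, 3.2]                      # extracted channels of all orders (km)
print(river_network_density(lengths, 85.0))          # km per km^2
print(parallel_system_accuracy([0.9, 0.4], sum(lengths)))   # closer to 0 is better
```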
The Horton-Strahler method extracts the water system from the 1:50,000 topographic map or the 30 m resolution ASTER-GDEM digital elevation data using ArcGIS, classifies the river network according to Strahler's stream-ordering theory, and finally counts the river channels of each order. The fractal dimension is calculated as follows:

$$R_b = \frac{N_i}{N_{i+1}}, \qquad D_b = \frac{\ln R_b}{\ln R_L}$$

where R_b is the bifurcation ratio, R_L is the river length ratio, N_i is the number of channels of each order, i is the channel order, and D_b is the fractal dimension of the Horton-Strahler water system. The box-counting dimension method is also known as the covering method or the grid method. It intersects a square grid with side length r with the water system map, and the number of grid cells covered by the water system is N(r). As r changes, a series of corresponding N(r) values is obtained, and the relationship between the two is

$$N(r) \propto r^{-D}.$$

Taking the points (lg r, lg N(r)) as coordinates, a double logarithmic graph is drawn and the least-squares method is used to fit a straight line:

$$\lg N(r) = -D\,\lg r + b$$
In the formula, r is the side length of the square grid; N(r) is the number of grids covered by the intersection of the grid with the corresponding side length and the water system map; b is the undetermined coefficient; D is the slope value of the double logarithmic curve.
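The two fractal-dimension estimates can be prototyped outside ArcGIS as follows; the box sizes, the stream-order statistics, and the rasterization of the water-system map are assumptions for illustration only.

```python
import numpy as np

def horton_dimension(channel_counts, mean_lengths):
    """D_b = ln(R_b)/ln(R_L) from per-order channel counts and mean lengths."""
    n = np.asarray(channel_counts, dtype=float)   # order 1, 2, ... counts
    l = np.asarray(mean_lengths, dtype=float)     # order 1, 2, ... mean lengths
    r_b = np.mean(n[:-1] / n[1:])                 # bifurcation ratio
    r_l = np.mean(l[1:] / l[:-1])                 # river length ratio
    return np.log(r_b) / np.log(r_l)

def box_counting_dimension(binary_map, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension of a rasterized water-system map (1 = channel)."""
    counts = []
    for r in sizes:
        h = binary_map.shape[0] // r * r
        w = binary_map.shape[1] // r * r
        blocks = binary_map[:h, :w].reshape(h // r, r, w // r, r)
        counts.append(int((blocks.sum(axis=(1, 3)) > 0).sum()))
    slope, _ = np.polyfit(np.log10(sizes), np.log10(counts), 1)
    return -slope   # D is the magnitude of the slope of the double-log line
```

In practice the counts N(r) and the per-order statistics would come from the stream network extracted in the previous subsection rather than from hand-made inputs.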
In addition to the Horton's-law method and the box-counting dimension method, this study also uses the relief ratio method, which is easy to calculate and gives reliable results, to study landform development in karst dam-building areas.
The relief ratio method is Pike and Wilson's classical, mathematically derived approach to estimating the area-elevation (hypsometric) integral, which is calculated as follows:

$$HI = \frac{H' - H_{min}}{H_{max} - H_{min}}$$

where HI is the integral value of the area elevation; H′ is the average elevation of the watershed; H_min is the minimum elevation of the watershed; and H_max is the maximum elevation of the watershed. A large body of studies indicates that when HI > 0.6, the landform is in the juvenile stage of development; when 0.35 < HI < 0.6, it is in the mature (prime) stage; and when HI < 0.35, it is in the old-age stage.
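A short sketch of the elevation-relief ratio and the stage thresholds quoted above; the elevation values are placeholders.

```python
def hypsometric_integral(mean_elev, min_elev, max_elev):
    """Pike and Wilson elevation-relief ratio HI = (H' - Hmin) / (Hmax - Hmin)."""
    return (mean_elev - min_elev) / (max_elev - min_elev)

def development_stage(hi):
    if hi > 0.6:
        return "juvenile"
    if hi > 0.35:
        return "mature (prime)"
    return "old age"

hi = hypsometric_integral(mean_elev=1450.0, min_elev=900.0, max_elev=2100.0)
print(round(hi, 2), development_stage(hi))   # ~0.46 -> mature (prime)
```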
Overall Assessment of the Publicity of Three-Dimensional Public Space.
The survey of the control variables of the perception dimension is mainly conducted through questionnaires. In order to ensure the objectivity of the survey data, both the citizens who are using the space and other citizens are surveyed in two ways: on-site distribution and online questionnaire collection. To improve the efficiency of online distribution, the questionnaires are classified according to the administrative division of the research sites and distributed separately. Collected questionnaires from respondents who have not visited a given site are recorded as invalid for that site. The survey of three-dimensional public space in a certain area is listed in Table 1.
The final results of this research are listed in Table 2, which provides specific information on the various indicators of space publicity for the thirteen three-dimensional public spaces in a certain area. Overall, the average publicity score of the thirteen three-dimensional public spaces selected in this survey is 71.7 points. The south square of the railway station shows very low publicity, mainly because it has very low adaptability to other functions and can only be used as a traffic space and a short-stay space. Showing a certain exclusivity, users' perception of the square is not ideal, especially at the level of perceived spatial comfort. The highest publicity score belongs to the second-floor platform of the Nanshan Commercial and Cultural Center, which scores 83.9 points and shows relatively high publicity.
By calculating the results of each control variable, the average value of each control variable of thirteen three-dimensional public spaces in a certain area is obtained, so as to understand the specific public quality of the three-dimensional public space in a certain area.
From the results, it can be inferred that these spaces are basically located in densely populated areas of the city (the average estimate of the location is 1.92) and also on urban bases with high publicity (the average estimate of the vertical position of the base is 1.92). Most of these spaces are considered open and accessible (publicity estimated at 1.6), there are no obvious prohibition signs in the spaces (regulation notices estimated at 2), and there is no shortage of recreational facilities (seats estimated at 1.54). These spaces are also considered safer (1.25 for the safety estimate) and are visually open to the connected streets (1.62 for the viewport integration estimate), and users do not encounter motorized traffic in the spaces. However, the same results also reveal why three-dimensional public spaces often have boundaries that make people feel distant (the boundary is estimated to be 1). The average imageability of the surveyed spaces is only 1.25, indicating that most of the three-dimensional public spaces in a certain area lack the special significance and good urban image needed to create a "sense of community." The survey results also show that these three-dimensional public spaces can support only a limited range of activities and behaviors (the functional diversity estimate is only 1.08) and are not flexible to the changing needs of users. In addition, the spatial design of these three-dimensional public spaces is not ideal. The average evaluation score of spatial form is only 1.23. Some spaces are limited by their functions and have small scales. Although other spaces are large in scale and can accommodate various activities, their spatial form is monotonous and lacks the shaping of subspaces that can generate personalized experiences and diverse activity choices. The visualization of the average value of each control variable of the three-dimensional public space in a certain area is shown in Figure 3. The 10 control variables represented in the figure are as follows: preset function, functional diversity, spatial form, night lighting, vertical position of the base, physical environment, design elements that hinder the use of space, management organization, target beneficiaries, and publicity perception.
Publicity Assessment of All Dimensions of Three-Dimensional Public Space.
The estimates obtained for the seven component dimensions of publicity of the thirteen survey objects were marked on the corresponding axes of a multi-axis coordinate system to establish a seven-axis model. From the seven-axis models of the publicity of the thirteen three-dimensional public spaces and the seven-axis model of the average publicity evaluation, it can be seen that the degree of publicity presented by each component dimension of the three-dimensional public space in a certain area is not optimistic.
Given that the preset dominant function of a space cannot be changed, improving the adaptability of the space to various non-predetermined functions is the most effective way to improve its publicity. However, the survey results show that the three-dimensional public spaces in a certain area are not very inclusive of non-preset functions; most of them can adapt to only one other function. The comparison of each dimension of the thirteen three-dimensional public spaces is shown in Figure 4. The diversity of space design is an important factor that directly reflects the degree of publicity and can also have an important impact on the other component dimensions. Although, on average, the thirteen three-dimensional public spaces reflect a high degree of publicity in the space design dimension, individual control variables still reveal some problems. First, more than half of the thirteen survey objects still suffer from a lack of spatial scale and subspaces; the lack of subspace design fails to give citizens flexibility in using the space. Second, some spaces have confusing subspaces and design elements that hinder the use of space: although these spaces considered the shaping of subspaces in the design process, the inappropriate design of subspace boundaries and scales has instead cut off the continuity of activities in the space. The visualization of the control variable estimates of the spatial design dimension of the three-dimensional public spaces is shown in Figure 5.
Because the selection of the research sites was balanced by type, the proportion of spaces developed, constructed, and subsequently managed and operated privately is equivalent to the proportion developed and managed by public institutions. However, looking at all thirteen three-dimensional public spaces in a certain area, companies or private individuals still account for the majority of development, construction, and post-operation management, and this model itself has a certain profit purpose. The management measures taken in more than half of these three-dimensional public spaces mainly protect the interests of the managers themselves or of those who can bring benefits to the managers. In terms of management measures, some spaces even dispatch management personnel to conduct inspections when more than a certain number of people gather; other three-dimensional public spaces lack management, so the various service facilities in the space are not well maintained, the quality of the space environment degrades, and eventually the vitality of the space is lost. The proportion of management agency types is shown in Figure 6. The thirteen three-dimensional public spaces have a high level of publicity in terms of the control dimension, with an average evaluation score of 81.5 points, but some aspects of space control also show a certain exclusivity. From the estimation of the control variables, the main reason is the lack of intensive monitoring equipment and guidance signs.

Table 1: Survey of three-dimensional public space in a certain area.

Research site number | On-site distribution | Network collection | Valid questionnaires
1  | 120 | 220 | 315
2  | 132 | 109 | 236
3  | 100 | 156 | 247
4  | 110 | 167 | 270
5  | 145 | 180 | 336
6  | 116 | 189 | 300
7  | 165 | 195 | 347
8  | 126 | 200 | 311
9  | 114 | 170 | 280
10 | 106 | 195 | 294
11 | 108 | 206 | 302
12 | 115 | 156 | 267
13 | 129 | 184 | 298
Publicity Evaluation and Analysis of Three-Dimensional Public Spaces with Different Functions.
In this study, the three-dimensional public spaces are divided into those dominated by traffic, by business, by office use, and by recreational landscape. A multidimensional comparative analysis of the publicity of these four types of three-dimensional public space in a certain area is carried out, and the publicity problems specific to each type are analyzed. In terms of the comprehensive publicity score, the commercial-led three-dimensional public spaces in a certain area score highest, with an average of 77.9 points; the three-dimensional public spaces dominated by recreational landscapes rank third, with an average score of 71.0. The least public are the traffic-dominated three-dimensional public spaces, with an average score of only 61.5 points.
The spatial design dimension of the four types of three-dimensional public spaces in this survey also shows a particular pattern: the three-dimensional public spaces dominated by business and office give significantly better consideration to space design than those dominated by traffic and recreational landscapes. The evaluation scores of the space design dimension for the business- and office-dominated three-dimensional public spaces are 87 and 76 points, respectively, while those for the transportation- and recreational-landscape-dominated spaces are 83.6 and 75.9, respectively. Among them, the space design of the commercial-led three-dimensional public spaces reflects the highest publicity: although their spatial scale is limited, the provision of seating and night lighting is relatively well considered. Compared with the three-dimensional public spaces dominated by business and office, those dominated by traffic and recreational landscapes are slightly monotonous and have obvious deficiencies in seating and night lighting, which lowers the scores for these items and the overall publicity score. The comparison of the publicity of the different types of three-dimensional public spaces is shown in Figure 7.
Usually, the three-dimensional public spaces dominated by business and office are developed by enterprises, groups, and other organizations, and these development units also organize the subsequent operation and management of the space. In managing these two types of spaces, the people who can consume in the adjacent commercial area or who work in the office building, and who can thus directly or indirectly create benefits for the managing organization, are often set as the target beneficiaries. Apart from one such center, the target-beneficiary control variable of the remaining spaces of these two types receives a low estimate (a value of 1). The situation differs for the three-dimensional public spaces dominated by traffic and recreational landscapes: although most of their management agencies are state-owned or government units and the intended beneficiaries are the majority of citizens, in terms of management measures these two types of spaces show both excessive and insufficient management, resulting in insufficient space vitality and reduced popularity. The visualization of the management dimension estimates of the different types of three-dimensional public space is shown in Figure 8.
Conclusion
The platform of choice is ArcGIS Server. As a powerful enterprise-level GIS development tool, it can convert resources into services and publish them. The study introduces in detail the implementation of system authority management based on RBAC, which realizes user management, role management, resource management, etc., and introduces the electronic map system implemented by combining JavaScript and no-refresh callback technologies, including basic map operation functions, information query and display, marker point labeling, eagle eye map, area measurement, printing, and other functions, as well as basic GIS platform service functions and interface calls realized through REST technology. Through this platform, data sharing can be realized among multiple departments and regions of the company, and by establishing a web geographic information service system in B/S mode, users can browse and operate map services through web connections anywhere. Based on basin runoff theory, ArcGIS software was used to analyze the basin hydrological information, and the catchment area threshold and hydrological response units were determined. On the basis of the theoretical framework, the study proposes seven components of urban space publicity and defines the distinguishing characteristics of "public" and "private." At the same time, it concludes that the special impact of three-dimensional public space on public life is mainly concentrated in two factors, namely vertical position and vertical traffic, so that a complete evaluation model for the publicity of three-dimensional public space can be established, comprising seven public properties. High-speed urbanization has made Shenzhen a pioneer city in various studies of urban planning, and in terms of the construction of three-dimensional public space, a certain area, as part of a typical high-density city, offers a representative perspective. Therefore, this study evaluates the publicity of the three-dimensional public space in a certain area, summarizes its publicity problems, and, based on the analysis, proposes a three-dimensional public space publicity improvement strategy with a certain universal value.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
2022-06-03T13:39:25.362Z | 2022-06-02T00:00:00.000Z | 249284499 | s2orc/train | Patterns of activity correlate with symptom severity in major depressive disorder patients
Objective measures, such as activity monitoring, can potentially complement clinical assessment for psychiatric patients. Alterations in rest–activity patterns are commonly encountered in patients with major depressive disorder. The aim of this study was to investigate whether features of activity patterns correlate with severity of depression symptoms (evaluated by Montgomery–Åsberg Rating Scale (MADRS) for depression). We used actigraphy recordings collected during ongoing major depressive episodes from patients not undergoing any antidepressant treatment. The recordings were acquired from two independent studies using different actigraphy systems. Data was quality-controlled and pre-processed for feature extraction following uniform procedures. We trained multiple regression models to predict MADRS score from features of activity patterns using brute-force and semi-supervised machine learning algorithms. The models were filtered based on the precision and the accuracy of fitting on training dataset before undergoing external validation on an independent dataset. The features enriched in the models surviving external validation point to high depressive symptom severity being associated with less complex activity patterns and stronger coupling to external circadian entrainers. Our results bring proof-of-concept evidence that activity patterns correlate with severity of depressive symptoms and suggest that actigraphy recordings may be a useful tool for individual evaluation of patients with major depressive disorder.
INTRODUCTION
Wrist actigraphy is a technique that allows long-term recording of activity with minimal discomfort and safety challenges for the subject. The availability of off-the-shelf medical or research grade actigraphy devices has enabled larger scale collection of actigraphy data [1], and there is increasing consensus around feature engineering and biological interpretations of specific parameters derived from analysis of actigraphic recordings [2][3][4]. Analyses of rest and activity patterns have highlighted specific alterations in psychiatric disorders as compared to healthy controls, as well as among different disorders [5][6][7][8]. Patients suffering from an episode of major depressive disorder (MDD) display globally lower levels of activity [9], with shorter diurnal activity period and shorter bouts of activity [6,10], and flattened circadian fluctuations in activity levels [1,[11][12][13]. In addition, symptom severity has been shown to correlate with the amount of moderate intensity physical activity [14] and with the number of sedentary bouts [15], while increasing the level of activity by structured, supervised physical activity has been proven to be an effective antidepressant intervention [16][17][18].
The mechanisms regulating circadian patterns of activity involve complex interactions between environmental cues (i.e., light-dark cycle, social interactions, meals, and physical activity) and the internal clock (located in the suprachiasmatic nucleus of the anterior hypothalamus). It has been shown that gene level alterations intrinsic to the molecular clock mechanism (e.g., strength of coupling in circadian oscillations in clock gene expression) are associated with depression [19][20][21]. In addition, weaker coupling between the central clock and peripheral oscillators has been demonstrated in depressed patients and suicide victims [22], and has been verified in experimental models of depression [23,24]. However, correlations between activity patterns and symptom severity in depression have hitherto received very little attention.
The aim of this paper was to investigate whether features of activity patterns from actigraphic recordings correlate with the severity of depression symptoms (estimated using the interview-based Montgomery–Åsberg Depression Rating Scale (MADRS) [25]) in adult patients with major depressive disorder before treatment. To this end we have analyzed actigraphy data using a battery of non-parametric and non-linear approaches for feature extraction. We then trained and validated linear models to predict symptom severity using the extracted features. Our data provide proof-of-concept support for a correlation between symptom severity and activity patterns in patients with an ongoing major depressive episode. Lastly, we discuss the biological significance of the features with the highest leverage in the models.
The external validation was performed on an independent dataset. The test dataset consisted of actigraphy data recorded as part of a clinical study addressing the effects of ketamine on serotonin receptor binding in patients with treatment-resistant depression [28]. Briefly, the study included 39 patients with an ongoing major depressive episode, with MADRS ≥ 20, resistant to selective serotonin reuptake inhibitor (SSRI) treatment in an adequate dose for at least 4 weeks. Ongoing antidepressant treatment was discontinued and actigraphy data was collected after a washout period of at least 5 times the half-life of the SSRI. The patients were instructed to wear the actigraph continuously on the wrist of the non-dominant arm and not remove it unless for personal safety reasons. The recording started prior to the first ketamine infusion and continued for the duration of the ketamine treatment program. For the purpose of this study, we cropped the recordings to include the period immediately prior to the first ketamine infusion (i.e., after the drug washout period). Actigraphic recordings were acquired using Actiwatch 2 wrist-worn devices (Philips Respironics, Murrysville, PA, USA) set to record activity only, integrated over 1 min epochs. The raw data was downloaded according to the manufacturer's instructions (Actiware 6.0.9, Philips Respironics) then exported as text files. The text files were imported to Matlab™ using a custom function designed to yield an output similar to the one generated by the import function for GENEActiv devices. A total of 23 recordings spanning between 2 and 7 consecutive days were included in the test dataset.
Quality control and inclusion criteria
The recordings in both train and test datasets underwent the same screening procedure and were assessed by the same observer blind to the recording conditions. All recordings were first inspected visually using a standardized procedure designed to identify stretches of missing data, artifacts, and grossly abnormal circadian patterns of activity (e.g., shift-work). Intervals containing suspected shift-work (not reported at the time of recording), suspected artifacts, or missing data, were cropped out. Individual cropped recordings were included if they fulfilled the following requirements: mild to moderate symptom severity (recordings from patients with MADRS > 40 were not included, given the MADRS range was limited to 35 in the training dataset); minimum recording length 5 days; the recording did not include shift-work periods or other exceptional events with potentially high impact on the subject's circadian patterns of activity (as identified on the actigraphy recording); the recording was continuous and did not include stretches of missing data longer than 2 h (e.g., due to not wearing the recording device for personal safety reasons). In the train dataset, all recordings passed the quality control procedure. In the test dataset, recordings were rejected for the following reasons: MADRS > 40 (1); recording length <5 consecutive days (7); shift-work during recording time (2); missing data (2). This yielded a total of 24 recordings to use for further analyses: 12 for train and 12 for test datasets (see also Fig. 1 for description of workflow). All actigraphy recordings originated from different patients (i.e., no patient provided more than 1 recording).
Pre-processing and feature extraction
All data processing was performed in Matlab using custom implementations of publicly available algorithms. The selection of features took into account the fact that the training and test datasets were acquired using different recording devices and required different pre-processing steps. Therefore, we aimed to include primarily features independent from the magnitude of the reported activity (average hourly activity level during the most active 10 consecutive hours and least active 5 consecutive hours-M10 and L5, respectively-depend on the output magnitude) and focused on features describing the regularity, fragmentation, and complexity of circadian patterns of activity. Feature extraction was performed on recordings cropped between first and last midnight to yield an integer number of 24 h periods. The following features were extracted: circadian period; scaling exponent [4]; intradaily variability; interdaily stability; circadian peak and trough; relative amplitude [1,2,27]. The features extracted and included as predictors for model development are listed in Table 1, and the correlation matrix for all predictors as well as outcome variable (MADRS) is shown in Fig. 2A.
Fig. 1 Workflow for data analysis. All recordings were screened by the same investigator, blind to recording conditions. MR - multiple regression; * - see main text for details on reasons to not pass QC.

Circadian period was estimated using the Lomb-Scargle algorithm optimized for Matlab implementation [29]. The Lomb-Scargle periodogram was preferred over the most commonly used Sokolove-Bushell [30] algorithm because the latter has been shown to yield period estimates biased towards periods below 24 h [31]. The circadian period was calculated over the entire recording using an oversampling factor of 10 to yield a minute-range resolution of the estimate. The scaling exponent for detrended fluctuation analysis was calculated for the magnitude of measured activity in 1-min bins using boxes equally spaced on a logarithmic scale between 4 min (4 consecutive samples) and 24 h (1440 consecutive samples) as described by Hu et al. [4]. The scaling exponent is a feature of the intrinsic regulatory mechanisms controlling the rest/activity patterns. It has not been shown to be sensitive to extrinsic factors the subject is exposed to in normal daily activity, but is altered as a result of disease [4,6,7]. Intradaily variability estimates the fragmentation of activity patterns by calculating the ratio between the mean squared differences between consecutive intervals and the mean squared difference from global mean activity per interval; it increases as the frequency and the magnitude of transitions between rest and active intervals increase, and decreases as the active and inactive intervals consolidate [2]. Interdaily stability evaluates the coupling between activity patterns and circadian entrainers as the ratio between variability around the circadian profile and global variability. High values indicate consistent activity patterns across days, consistent with strong coupling between activity and circadian entrainers. The relative amplitude of circadian rhythms of activity (RA) estimates the robustness of average circadian rhythms [1,2]. The range of RA is bounded between 0 (no circadian rhythms) and 1 (robust circadian rhythms, with consistent timing of consolidated rest interval >5 h across days).
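For illustration, the fragmentation and regularity features above can be computed from a 1-min activity series using the standard non-parametric definitions of IV, IS and RA. The Python sketch below follows those standard definitions; the exact binning, handling of incomplete days, and normalization used by the authors' Matlab implementation may differ, and the M10/L5 search here does not wrap across midnight.

```python
import numpy as np

def rebin(act_1min, bin_min):
    """Aggregate a 1-min activity series into bins of `bin_min` minutes."""
    n = act_1min.size // bin_min
    return act_1min[: n * bin_min].reshape(n, bin_min).sum(axis=1)

def intradaily_variability(act_1min, bin_min=60):
    x = rebin(act_1min, bin_min)
    num = np.mean(np.diff(x) ** 2)        # mean squared successive difference
    den = np.var(x)                       # mean squared deviation from the global mean
    return num / den

def interdaily_stability(act_1min, bin_min=60):
    x = rebin(act_1min, bin_min)
    per_day = 24 * 60 // bin_min
    days = x.size // per_day
    x = x[: days * per_day].reshape(days, per_day)
    profile = x.mean(axis=0)              # average 24-h profile
    return np.var(profile) / np.var(x)    # variance explained by the circadian profile

def relative_amplitude(act_1min):
    per_day = 24 * 60
    days = act_1min.size // per_day
    daily = act_1min[: days * per_day].reshape(days, per_day).mean(axis=0)
    # M10 / L5: most active 10 h and least active 5 h of the average day
    m10 = np.max(np.convolve(daily, np.ones(600), mode="valid")) / 600
    l5 = np.min(np.convolve(daily, np.ones(300), mode="valid")) / 300
    return (m10 - l5) / (m10 + l5)
```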
Model development and validation
To limit the risk of overfitting, the maximum number of predictors was limited to 6, which corresponds to a minimum ratio of 2 subjects/predictor [32]. First, we used a brute force method to explore all models possible in the given feature space. Models based on 1-6 predictors were generated using all combinations possible in the feature space. Second, we refined the procedure of generation and selection of models by using machine learning (ML) algorithms to train models of increasing complexity (forward stepwise multiple regression). We started by using the entire feature space, then we manually restricted the inclusion of features such as subject age and circadian period in the model before running the ML algorithm again. We iterated the entire procedure using the F statistic or the Akaike information criterion (AIC) as the criterion for inclusion of predictors in the model. Next, the models were filtered using the following criteria: variance inflation factor (VIF) < 5 for any single predictor; coefficient of determination (R-squared) > 0.5; and root-mean-square error (RMSE) < 3. The models surviving filtering were then used for assessing the occurrence of individual predictors.
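A minimal sketch of the brute-force generation and filtering step is shown below, assuming the extracted features are in a pandas DataFrame `X` and the MADRS scores in an aligned Series `y`. It is not the authors' Matlab code: the OLS/VIF calls come from statsmodels, and skipping the VIF check for single-predictor models is an assumption.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def brute_force_models(X, y, max_predictors=6, vif_max=5.0, r2_min=0.5, rmse_max=3.0):
    """Fit OLS models for every combination of 1..max_predictors features,
    then keep only the models passing the internal-validation filters."""
    kept = []
    for k in range(1, max_predictors + 1):
        for combo in itertools.combinations(X.columns, k):
            Xd = sm.add_constant(X[list(combo)])
            fit = sm.OLS(y, Xd).fit()
            rmse = float(np.sqrt(np.mean(fit.resid ** 2)))
            if k > 1:  # VIF is only meaningful with 2+ predictors
                vifs = [variance_inflation_factor(Xd.values, i + 1) for i in range(k)]
                if max(vifs) >= vif_max:
                    continue
            if fit.rsquared > r2_min and rmse < rmse_max:
                kept.append({"predictors": combo, "r2": fit.rsquared,
                             "rmse": rmse, "fit": fit})
    return pd.DataFrame(kept)
```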
We then performed an external validation of filtered models. To this end we evaluated the performance of models validated as described above on an independent (test) dataset. The performance of individual models was assessed using the coefficient of determination (R-squared), and the RMSE for predicted vs. observed MADRS to evaluate the precision and the accuracy of the estimate (Fig. 2B). We filtered the models to be further analyzed as follows: significant correlation between predicted and observed MADRS (p < 0.05 corresponding to Pearson R > 0.576) and RMSE < 3 for test dataset.
To provide an internal reference for model performance, we generated a dummy model (predicted score = average score for the test dataset), and a random prediction dataset (1 million simulated sets of random integer values in the same range as the test dataset). The probability distribution of prediction accuracy (RMSE) is depicted in Fig. 2C.
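The internal reference models can be reproduced along these lines; the exact integer range and random number generator used for the simulated predictions are not specified in the text, so the choices below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def dummy_rmse(madrs_test):
    """Dummy model: predict the average observed score of the test set for everyone."""
    madrs_test = np.asarray(madrs_test, dtype=float)
    return rmse(np.full_like(madrs_test, madrs_test.mean()), madrs_test)

def random_rmse_distribution(madrs_test, n_sim=1_000_000):
    """RMSE distribution of random integer 'predictions' in the observed range."""
    madrs_test = np.asarray(madrs_test)
    lo, hi = int(madrs_test.min()), int(madrs_test.max())
    sims = rng.integers(lo, hi + 1, size=(n_sim, madrs_test.size))
    return np.sqrt(np.mean((sims - madrs_test) ** 2, axis=1))
```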
The frequency of occurrence for individual predictors in validated models was calculated as the number of models including each unique predictor divided by the total number of validated models for each level of complexity. Average frequency for the most common predictors was calculated as the average occurrence for each level of complexity. This approach compensates for the fact that the number of validated models increases dramatically with the numbers of predictors included. The leverage of individual predictors in each model was evaluated using the standardized coefficients for each model.
RESULTS
Three features displayed significant correlation with the outcome variable (MADRS): scaling exponent for full range (4 min-24 h; alpha full, negative correlation), and intradaily variability for 5 min and 30 min bins (IV5, IV30, positive correlation) (Fig. 2A). The brute force approach generated 14,892 models, out of which 3837 survived the filtering for internal validation. The average RMSE and R-squared in the models surviving internal validation were 1.84 ± 0.35, and 0.67 ± 0.11, respectively. The frequency of occurrence of individual predictors in models developed by brute force varies greatly across levels of complexity. However, alpha full, IV5 and IV30 display consistent frequencies across complexity levels ( Fig. 2A). Next, we evaluated the performance of models surviving internal validation in predicting the MADRS score in an independent population. The filtering criteria for external validation further reduced the number of validated models to 192 (Fig. 2B). The average RMSE and R-squared in the models surviving external validation were 2.70 ± 0.24, and 0.59 ± 0.09, respectively. In comparison with the simulated dataset, the average RMSE for validated models corresponds to models with a probability of occurrence below 0.001 (Fig. 2C). The analysis of standardized coefficients for the models surviving external validation revealed good consistency across models for most independent predictors (Fig. 2D). We further found that alpha full, IS5, and RA were included on average in >50% of the models (Fig. 2E).
Fig. 2 D Models surviving external validation criteria (N = 192), sorted by RMSE on test dataset in ascending order. Each model is depicted by one column and corresponds to a point in the scatterplot in (B). E Occurrence of most frequently encountered individual predictors in the models surviving external validation criteria. "period" and "age" are included based on potential biological relevance.

These results indicate that depression severity scores correlate with features extracted by analyzing the pattern of activity recorded over several consecutive days. The brute force approach
trains independent models and does not use information from previously generated models to increase the accuracy in subsequent steps. Therefore, we also used stepwise machine learning (ML) algorithms to train models of increasing complexity. This procedure yielded 18 models, which underwent internal and external validation as described above. Fourteen models survived the internal validation criteria and underwent external validation (Fig. 3A). Eight models had better accuracy than the dummy model, and 5 models survived filtering by external validation criteria (Pearson R > 0.576; RMSE < 3).
We addressed the distribution of estimation errors for all validated models and calculated the proportion of absolute residuals below 1 to 5 MADRS units in 1-unit increments (Fig. 3B). On average, the models surviving external validation criteria predict MADRS score within 2 units in 54% of the cases, within 3 units in 75% of the cases, and within 4 units in 87% of the cases. The distribution of estimation errors for the models trained by forward stepwise procedure was similar to the outcome of brute force approach (Fig. 3B). Next, we assessed the leverage of individual predictors in the models developed by forward stepwise procedure (Fig. 3C). The scaling exponent (alpha full) was present in all models surviving internal validation and had the largest absolute standardized score in all models, particularly in the models surviving external validation. In addition, interdaily stability calculated on 5-or 30-min bins (IS5, IS30) and the relative amplitude of circadian rhythms (RA) were included in the models surviving external validation.
Lastly, we selected three models (best, intermediate, and worst performance in test dataset) for evaluation of agreement between observed and predicted MADRS scores (Fig. 4). The Bland-Altman plots indicate good agreement between observed and predicted score in both training and test datasets (Fig. 4).
DISCUSSION
Levels of activity and depression symptom severity are linked in a bidirectional manner: on average, higher severity correlates with lower levels of activity, particularly in the moderate activity band [14]; and increasing the levels of activity by physical exercise may in many cases reduce depressive symptoms [16,18]. Here we show that symptoms of depression correlate with features of individual patterns of activity independently from actual activity levels. In addition, we bring proof-of-concept evidence that symptom severity can be predicted by analyzing the subject's activity recorded over several consecutive days. We developed several multiple linear regression models which performed satisfactorily on a dataset independent from the training population. The models were generated using either a brute force approach or using forward-stepwise semi-supervised procedure. We also identified a number of features with frequent occurrence in the models surviving the external validation procedure. Our data supports the use of actigraphy recordings as minimally invasive objective measurement for the evaluation of depression patients.
Wrist actigraphy is a technique that allows long-term recording of activity with minimal discomfort and safety challenges for the subject. For major depressive disorder, the heterogeneous pathological mechanisms leading to depressed mood as core symptom raise the challenge of finding objective behavioral features that correlate with symptom severity as measured by the MADRS scale. The features we identified as particularly relevant estimate the complexity of activity patterns (scaling exponents); the strength of coupling between activity and circadian entrainers (interdaily stability, IS); and the robustness of circadian rhythms (RA). The coefficients point towards higher depression severity scores correlating with less complex patterns of activity, stronger coupling of activity with circadian entrainers, and less robust circadian rhythms. This is in line with earlier reports on less complex patterns of activity in patients suffering from depression [33], and with higher likelihood to get diagnosed with depression in subjects displaying blunted difference between day-time and night-time activity levels [1]. Our results also indicate that higher symptom severity is associated with higher IS, suggesting that stronger coupling with circadian entrainers is associated with more severe symptoms. Of note, stronger coupling with circadian entrainers can account for less complex patterns (estimated by scaling exponents), by reducing the contribution of high-frequency fluctuations. At the same time, stronger coupling with circadian entrainers does not warrant higher amplitude of circadian rhythms (estimated by RA), if the circadian fluctuations are of small amplitude (i.e., shorter, and less robust bouts of activity, as described previously [6,10]), suggesting a reduced capacity to steer one's own activity in a circadian context and, instead, a tendency to passively follow circadian entrainers. For internal validation of models, we used a heuristic approach considering the required accuracy of MADRS estimation. We selected RMSE as measure of accuracy because it penalizes all deviations and is sensitive to outlier values, and did not use average deviation, where the leverage of large positive and negative outliers can balance out and yield a misleadingly small average. Reports available in the literature estimate a 95% CI of 7 units for face-to-face vs. telephone interview [34] and up to 2.75 units difference across testing occasions [35]. In our datasets, restricting to RMSE < 3 yielded an average accuracy of 2.7, and an absolute error below 3 in 75% of the cases. Another reference value we considered is the minimum clinically important difference for response to treatment, estimated at 2 units on the MADRS scale [36]. Thus, the accuracy of the predicted score should be lower than 2 so that clinically relevant effects of antidepressant treatments are not obscured by estimation error. Models developed by brute force surviving external validation criteria approximate the observed score within 2 MADRS units in on average 54% of the cases, and in >90% of the cases in the best models. Larger datasets are required for further refining the model training approach, including internal validation prior to external validation on independent datasets. Notably, the features we have extracted are sequence-dependent, and do not include summary statistics (e.g., total time inactive, or similar).
In addition, our analyses consider the 24-h cycle as a continuum and do not ascertain crisp distinctions between active and resting intervals, nor do we implement any classification of samples or segments based on intensity of activity recorded. Therefore, our results offer a novel perspective, complementary to earlier reports on correlations between levels of activity and symptom severity [14,15]. From a clinical perspective, our data may facilitate connections with molecular mechanisms behind onset of depressive episodes or the changes associated with response to treatment [26].
LIMITATIONS OF THE STUDY
All patients included in this study were diagnosed with a unipolar major depressive episode, and the model was trained to predict the MADRS score registered prior to the actigraphy recording, under the implicit assumption that activity was recorded in a stable state (i.e., no significant variations in symptom severity expected during recording time). Therefore, correlations between symptom severity and patterns of activity in other mood disorders (e.g., bipolar disorder or cyclothymia), or dynamic changes in response to treatment cannot be inferred from our results. In addition, correlations between patterns of activity and symptom severity in patients undergoing antidepressant treatment need to be investigated separately. The impact of psychoactive drugs (acting on neurotransmitter systems which regulate circadian activity, e.g., glutamate, serotonin, noradrenaline, acetylcholine; reviewed in ref. [37]) on activity patterns of psychiatric patients is not fully understood [38].
A potential limitation of our study is the essential difference between inclusion criteria for train and test populations. Thus, the training dataset was collected from patients recruited for a clinical trial assessing the response to CBT. Implicitly, this leads to high variability in symptoms and pathological mechanisms. In contrast, the test dataset was acquired from patients included in the study only if they did not respond to SSRI drugs and would be eligible for ketamine treatment; hence a strong selection bias is expected. However, the fact that models trained on an intrinsically more variable population sample perform very well on the more strictly selected population supports the applicability of our approach.
The availability of only two independent datasets with relatively low number of patients and narrow MADRS range further limits the extent of analyses due to the risk of overfitting the available pair of datasets. These limitations are particularly relevant when evaluating the models trained using the brute force approach. Nevertheless, the brute force approach highlighted the most frequently occurring features in externally validated models. The interpretation of the coefficients of individual predictors is consistent with clinical observations of alterations characteristic for depression. Further investigations focusing on increasing the number of recordings within study population, as well as on analyzing data from several independent populations are required for strengthening the biological significance of specific features.
Lastly, actigraphy data was collected using different devices between populations. This explains the significant between-group differences for features describing the magnitude of activity levels (M10; L5). This appears not to be a matter of concern, because these specific features are not identified as most relevant in the training dataset. This also implies that neither the actual peak of activity (i.e., most intense peak of activity during active phase), nor the trough of activity (typically associated with night-time activity or insomnia) are significant predictors of depression severity in our study populations. Interestingly, neither are the locations of the circadian peak and trough significant predictors for symptom severity. In contrast, these features appear relevant for distinguishing between healthy control and MDD patients [8,12]. These data suggest that while the magnitude and location of circadian peaks and troughs of activity may have diagnostic value, they do not correlate with symptom severity.
From a clinical application perspective, our results indicate that actigraphy could be a useful tool in the individual evaluation of patients with depression. Larger confirmatory studies are needed before clinical implementation.
CODE AVAILABILITY
The Matlab code used for feature extraction and model training is available upon request. | v2 |
2019-09-15T03:33:45.624Z | 2019-06-07T00:00:00.000Z | 242483739 | s2orc/train | Oil Palm Water Balance; a tools for Analysing Oil Palm Water Footprint and Root Water Uptake Distribution in Root Zone
The varying conditions of climate, soil properties, crop stage and groundwater in oil palm cultivation require a specific water balance model to perform precision crop water use analysis. The purposes of this research were to develop an oil palm water balance model for calculating the hydrological parameters of oil palm and to analyse the oil palm water footprint and the root water uptake distribution in the root zone. The model of oil palm water balance was developed through the following steps: oil palm root architecture study, instrument installation and data observation, and model development and calibration. The oil palm water balance tool was developed by inputting the database including climate, soil properties, crop stage, root density and root zone layers as well. The results for the case of an 11th-year oil palm tree on soil type ultisol in Central Kalimantan during the simulated climate data pointed out that the average root water uptake is 3.46 mm/day, with 63% of it distributed in the 1st root zone. From the total water usage and the average production of 14.19 kg/month, this resulted in a water footprint of FFB of 1.053 m3/kg (76% green water and 24% blue water).
Introduction
One of the common issues of oil palm plantation expansion [1] is related to water problems [2] [3]. Hence, an accurate crop water balance analysis, which shows the precise crop water use in each stage of oil palm, is substantial for a better understanding of the most efficient and precise crop water requirement to reach the optimal productivity. The varying conditions of climate, soil properties, crop stage and groundwater in oil palm cultivation require a specific water balance analysis model to perform a precise assessment of crop water use. Water balance parameters in oil palm could be predicted through the unsaturated water flow approach by the Richards equation [6] [7], which involves the complex hydrological parameters in the root zone. The resulting hydrological factors in the root zone could determine the root water uptake and sum up the value of the water footprint in oil palm plantations as one of the environmental sustainability parameters [4] [5].
Previously, an oil palm water balance tool was developed to predict the water content distribution in the root zone and was tested during the observation time (April to July 2017) [8]. In order to extend the prior oil palm water balance tool and address water sustainability issues, this research was conducted to develop an oil palm water balance model for calculating the hydrological parameters of oil palm and to analyse the oil palm water footprint and root water uptake distribution in the root zone, with a case study in Pundu, Kotawaringin Timur, Central Borneo, Indonesia.
Research procedure
2.1.1. Instrument Installation and Data Observation. Rainfall was measured by a rain gauge (ECRN100), water content along the three root zone layers [9] by soil moisture sensors (type 10HS, Decagon), water level by an automatic water level logger (Hobo U20L-04), and a set of AWS (automatic weather station) covered the climate data.
Where ET_green and ET_blue are the root water uptake from rainfall (mm) and from groundwater (mm), respectively, and Y is the average production of oil palm.
Structure of Oil Palm Water Balance Model
The oil palm water balance model was built following the structure illustrated in Figure 1, which describes the input data, the calculation process and the output as the result of the model. The model script was written in R using a finite difference method. The oil palm water balance model was built, as previously described [8], by assuming the boundary conditions including free drainage, no flux at the bottom, and groundwater contribution.
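To illustrate the overall input-update-output structure of such a layered water balance, a much simpler bucket-type sketch is given below in Python. It is not the finite-difference R implementation of the model (which solves unsaturated flow with the Van Genuchten parameters); the field-capacity/wilting-point logic, the daily time step and all variable names are simplifying assumptions.

```python
import numpy as np

def daily_water_balance(theta, rain_mm, et_mm, root_frac, layer_mm, theta_fc, theta_wp):
    """One daily step of a simplified layered (bucket-type) water balance.

    theta       : volumetric water content per layer (m3/m3), numpy array
    rain_mm     : rainfall for the day (mm)
    et_mm       : potential crop water use for the day (mm), split by root_frac
    root_frac   : fraction of root water uptake assigned to each layer (sums to 1)
    layer_mm    : thickness of each layer (mm)
    theta_fc/wp : field capacity and wilting point per layer (m3/m3)
    Returns the updated theta and the actual uptake per layer (mm).
    """
    theta = theta.copy()
    inflow = rain_mm
    uptake = np.zeros_like(theta)
    for i in range(len(theta)):
        storage = theta[i] * layer_mm[i]                     # mm of water stored in layer i
        storage += inflow                                    # infiltration from above
        drain = max(0.0, storage - theta_fc[i] * layer_mm[i])
        storage -= drain                                     # excess percolates downward
        demand = et_mm * root_frac[i]
        available = max(0.0, storage - theta_wp[i] * layer_mm[i])
        uptake[i] = min(demand, available)                   # uptake limited by available water
        storage -= uptake[i]
        theta[i] = storage / layer_mm[i]
        inflow = drain                                       # drainage feeds the next layer
    return theta, uptake
```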
Data Input of Model
The data input of this model generally consisted of 3 types of data:
1. Climate data: a series of hourly climate data (rainfall, temperature, relative humidity, wind speed, and solar radiation).
2. Soil properties data: consisting of the soil texture, bulk density (g/cm3), porosity (%), permeability (Ks, cm/hour) and the generated Van Genuchten parameters [10], [11], such as saturated water content (θs, cm3/cm3), residual water content (θr, cm3/cm3), air entry value (α) and soil gradient curve (n).
3. Crop properties: root density and root zone levelling from the oil palm root architecture study [9], and also the value of the crop coefficient (Kc). The value of Kc, as a determination factor of root water uptake, was adjusted by calibrating the water content of the model against the observations.
For running the oil palm water balance in this study, the case of an 11th-year oil palm tree on the ultisol soil type with groundwater present was taken, with the soil properties data presented in Table 1 and the root zone levelling [9] and root density in Table 2. The soil properties data of the ultisol soil type used in running the model are presented in Table 1. The value of Kc for this 11th-year oil palm was 0.8 according to the model calibration during the observation period.
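The Van Genuchten parameters listed above enter the model through the water retention curve. As a reference, a minimal Python sketch of the Van Genuchten (1980) relation θ(h) is given below; the example parameter values in the comment are hypothetical, not the values of Table 1.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Van Genuchten water retention curve: volumetric water content as a
    function of pressure head h (negative in unsaturated soil)."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    theta = theta_r + (theta_s - theta_r) * se
    return np.where(h >= 0.0, theta_s, theta)       # saturated when h is non-negative

# Hypothetical example values (for illustration only):
# van_genuchten_theta(h=-100.0, theta_r=0.067, theta_s=0.45, alpha=0.02, n=1.41)
```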
Water Content Distribution
The next output calculated by the oil palm water balance model is the water content change along the soil depth through the calculation of eq (1)-(6). The water content change is influenced by the water flux both from rainfall and from capillary rise in the case of existing groundwater, and also by the root water uptake as the sink factor.

Figure 3. Simulation of water content distribution in the 11th-year oil palm root zone in ultisol, 10th July-6th September 2018 (with groundwater existing).

Figure 3 pointed out the water content in the three root zone levels, in line with the root zone levelling determined in the previous study. The water content in the 3rd layer (blue line) was the highest, followed by the 2nd layer (red) and the 1st layer (green). The 3rd layer showed the highest water content probably due to the capillary flux from the groundwater. The 1st layer of the root zone remained drier than the others, which could be caused by the low rainfall during the observed climate data. Once rainfall occurred, the water content in the 1st layer reached a value higher than the rest. However, the assumption of uniformity of soil properties along the soil depth is less fitted to the real condition. Therefore, the variation of soil properties in the root zone layers remains a challenge for this model.
Oil Palm Root water uptake and water footprint
Based on the root distribution in Table 2, the total root water uptake of oil palm was distributed along the root zone as shown in Figure 4. The daily average root water uptake is 3.46 mm/day, consisting of 2.19 mm/day from root zone 1, 0.25 mm/day from root zone 2 and 1.02 mm/day from root zone 3. The root water uptake along the simulation in the case study showed the highest contribution from the 1st root zone, followed by the 3rd root zone and the 2nd root zone. This result showed that the root water uptake distribution depends on the root density distribution and the water availability both from rainfall and groundwater. As presented in Table 2, the highest root density of the 11th-year oil palm tree on ultisol is in the 1st root zone, followed by the 3rd root zone and the 2nd root zone as the lowest. Since the root system of a tree spreads to reach the water, the availability of water at the top (due to the rainfall) and at the bottom (due to the groundwater) influences the root density of the crop. The resulting root water uptake distribution, as the oil palm water usage, was furthermore used to calculate the water footprint of FFB (fresh fruit bunch) oil palm. Given that the productivity of the observed oil palm is 14.19 kg/month/tree and the assumed tree area is 71.71 m2/tree, the total water footprint of the observed oil palm tree is 1.053 m3/kg FFB. This total water footprint value was contributed by the green water footprint (rainfall) for 0.786 m3/kg and by the blue water footprint (groundwater) for 0.268 m3/kg.
The crop water usage is commonly predicted using the Cropwat model by FAO [17]. Some recent water footprint analyses use the crop water usage analysis of this model [18], [19], [20]. However, real-time input data for predicting the real-time condition seem to be lacking in the Cropwat model. More detailed crop water usage can also be analysed with Hydrus (1D, 2D, or 3D) through water content change and root water uptake [21], [22], [23]. However, the Hydrus model is not developed for a general crop and condition. Therefore, the development of the oil palm water balance in this study has been addressed to those required issues.
The precise water use resulting from the oil palm water balance model could be an easy tool to assess the temporal water use of oil palm and, furthermore, to determine what strategy to apply if a water deficit happens. During the dry season the productivity is low, and the implementation of an irrigation system and fertilisation is needed to maintain it [24] [25]. The high productivity is obtained by the mature plantation, which consumes more water [2] [26]. Hence, the rapid calculation of the water use of oil palm could help avoid the decrease in productivity. The variation of water usage with oil palm crop age showed that a better understanding of temporal water use is required [27]. In fact, meeting the crop water requirement during the dry season through a precision irrigation system still needs the detailed temporal water use, both for maintaining the productivity and for environmental sustainability.
Conclusions
1. The oil palm water balance model has been developed for calculating the hydrological parameters in the oil palm root zone, including the root water uptake along the root zone and the water footprint.
2. The highest contribution of root water uptake is from the 1st root zone, followed by the 3rd root zone, which depends on the root density distribution and the water availability (rainfall and groundwater).
3. The total water footprint of the observed oil palm tree is 1.053 m3/kg FFB, which was contributed by the green water footprint (rainfall) for 0.786 m3/kg and by the blue water footprint (groundwater) for 0.268 m3/kg.
2019-04-14T13:05:22.685Z | 2015-10-30T00:00:00.000Z | 55958639 | s2orc/train | The Evaluation of the Groundwater Influence on the Stress and Strain Behavior around a Tunnel by Analytical Methods
Email: [email protected] Abstract: The presence of an aquifer in the soil causes a change in a relevant way of the stability conditions of a tunnel. The flow of water in the pores, in fact, modifies the stress and strain state of the soil and causes an increase in the thickness of the plastic zone. As a result the loads transmitted on the lining increase. In this study a calculation procedure by finite differences method, able to determine the stress and strain state of the soil in the presence of the water flow in the pores, is presented. More particularly, using the proposed procedure, it is possible to determine the convergence-confinement curve of the tunnel and the trend of the plastic radius varying the internal pressure. A calculation example is used to detect the great influence that the presence of the aquifer has on the stress and strain state of the soil and on the pressure-displacement relationship of the tunnel wall.
Introduction
The groundwater presence in the soil has an important effect on the stability of a tunnel. The flow of water towards the tunnel, in fact, substantially alters the stress state around the void perimeter (Bobet, 2003;Fahimifar et al., 2014;Fernandez and Moon, 2010;Haack, 1991;Hwang and Lu, 2007), increasing the thickness of any existing plastic zone. As a result, the loads acting on the lining increase.
Even at the face, the static conditions are exacerbated by the presence of a groundwater flow towards the tunnel, which may lead to the breakage of a more or less extended portion of soil ahead of the face (Oreste, 2013).
It is, therefore, fundamental to detect critical conditions for the tunnel stability, when an aquifer is present (Carranza-Torres and Zhao, 2009;Nam and Bobet, 2006;Li et al., 2014;Wang et al., 2008).
The numerical methods, both two-dimensional and three-dimensional ones (Do et al., 2014;2015;Oreste, 2007), are able to study in detail the stress and strain state around a tunnel and in the support structure. In the presence of groundwater and of water flow in a porous medium, it is necessary to provide coupled numerical analysis, in which the stress and strain analysis is developed in parallel to the water flow analysis. For these reasons, the numerical methods, especially the three-dimensional ones, appear to be very slow when they have to analyze the behavior of a tunnel in a porous medium with groundwater.
Through the analytical calculation methods, it is possible to approach the study of some aspects of tunnel statics easily and quickly. Several analytical methods are available in the literature and are able to assess the stress and strain state around a tunnel (Oreste, 2003), ahead of the excavation face (Oreste, 2009a), and in the support and reinforcement structures (Oreste, 2008; Osgoui and Oreste, 2007), when it is possible to introduce some simplifying assumptions of the problem in the calculation. These methods are characterized by a relatively high speed of calculation, which allows one to develop extensive parametric analyses, probabilistic analyses (Oreste, 2005a; 2014a) or back-analyses (Oreste, 2005b).
In this study, a numerical solution based on the finite difference method is presented, which allows a detailed analysis of the stress and strain state of the soil around the tunnel in the presence of a water flow in the pores. From the calculation it is possible to determine the convergence-confinement curve of the tunnel, the evolution of the plastic radius varying the total internal pressure applied to the tunnel walls, and the stresses and displacements that develop in the soil around the tunnel for a certain total pressure applied to the walls.
A calculation example for a specific case will illustrate the influence of the groundwater flow in the soil pores on the convergence-confinement curve of the tunnel and on the stress and strain state of the soil.
Materials and Methods
The analysis of the behavior of a circular and deep tunnel can be developed through the convergence-confinement method for the cylindrical geometry (Ribacchi and Riccioni, 1977; Lembo-Fazio and Ribacchi, 1986; Panet, 1995; Oreste, 2009b). This method allows one to obtain the relationship between the radial displacement of the tunnel wall and the internal pressure applied to the tunnel perimeter. Then, proceeding with the intersection of the convergence-confinement curve of the tunnel with the reaction line of the support, one can determine the final displacement of the tunnel wall and the load acting on the support structure (Oreste, 2014b).
The convergence-confinement method is based on the following fundamental equations (Ribacchi and Riccioni, 1977; Lembo-Fazio and Ribacchi, 1986; Panet, 1995): the equation of axisymmetric equilibrium of forces in the radial direction (Equation 1) and the two equations of strain congruence (Equation 2). Also, for the soil with elastic behavior around the tunnel, the two equations of elasticity for the plane strain field and axisymmetric geometry can be applied; while, for the soil portion with elastic-plastic behavior (inside the plastic zone), the failure criterion and the relationship between the main plastic strains (the so-called flow law) are valid (Ribacchi and Riccioni, 1977; Lembo-Fazio and Ribacchi, 1986; Panet, 1995).
Where:
σ_r and σ_ϑ = the radial and circumferential stresses
ε_r and ε_ϑ = the radial and circumferential strains
u = the radial displacement of the soil (positive when directed towards the tunnel centre)
The relationship between the radial displacement of the tunnel wall u_R and the applied internal pressure σ_R depends on the presence or not of a plastic zone around the tunnel (Oreste, 2014b). For soil with an elastic behavior (absence of a plastic zone), Equation 3a takes the standard form:
u_R = [(1 + ν)/E] · R · (p_0 − σ_R)
while, in the presence of a plastic zone, the relationship is given by the elastic-plastic closed-form solution (Equation 3b).
Where:
E and ν = elastic modulus and Poisson ratio of the soil
R = tunnel radius
p_0 = lithostatic pressure
c and ϕ = soil cohesion and friction angle
ψ = soil dilatancy angle
σ_R = radial stress applied to the tunnel perimeter
In general, there is always a stretch of the convergence-confinement curve with an elastic behavior, defined by Equation 3a, for values of σ_R greater than a certain critical pressure σ_Rpl (Equation 4):
σ_Rpl = p_0 · (1 − sin ϕ) − c · cos ϕ
For values of σ_R lower than σ_Rpl (if σ_Rpl is positive), the relationship between u_R and σ_R is defined by Equation 3b. When σ_Rpl is negative, the convergence-confinement curve is entirely described by Equation 3a and has a linear shape for its entire extension.
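A small Python sketch of the two relations written out above (the elastic branch of the convergence-confinement curve and the critical pressure for the onset of plasticity) is given below; the function names are illustrative and the units only need to be consistent (e.g., MPa and m).

```python
import numpy as np

def elastic_wall_displacement(sigma_R, p0, R, E, nu):
    """Equation 3a (elastic branch): radial displacement of the tunnel wall."""
    return (1.0 + nu) / E * R * (p0 - sigma_R)

def critical_pressure(p0, c, phi_deg):
    """Equation 4: internal pressure below which a plastic zone develops
    (Mohr-Coulomb criterion evaluated at the tunnel wall)."""
    phi = np.radians(phi_deg)
    return p0 * (1.0 - np.sin(phi)) - c * np.cos(phi)

# With the dry-condition data used later (p0 = 1.5 MPa, c = 0.2 kPa, phi = 20 deg):
# critical_pressure(1.5, 0.0002, 20.0) -> about 0.99 MPa, i.e. a plastic zone
# develops well before the internal pressure drops to zero.
```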
In the presence of groundwater in the soil, it is necessary to refer to effective stresses and no longer to the total stresses. In addition, Equation 1 is modified, as the forces acting on the infinitesimal soil element around the tunnel are different (Fig. 1). In fact, proceeding with the equilibrium of forces in the radial direction, the modified equilibrium equation in terms of effective stresses is finally obtained (Equation 5) (Fahimifar et al., 2014; Li et al., 2014).
To assess in detail the trend of the water pressure p_w with the distance from the tunnel center, an analysis of the water flow has to be developed. It is useful to consider the steady-state condition (constant flow over time). More particularly, the Darcy law is written as (Bear, 1972; Wang et al., 2008):
Where:
v = water velocity in the radial direction
k = permeability coefficient of the soil
i = hydraulic gradient
h = piezometric height
γ_w = specific weight of water
Then, the flow of water towards the tunnel Q (considering 1 meter depth in the geometry of the problem) is obtained; integrated from R (tunnel radius), where the water pressure at the lining extrados is p_w,ext, up to the generic distance r, where the water pressure is p_w, it leads to the expression of p_w along the radius. To obtain the value of p_w,ext, it is necessary to analyze the situation inside the tunnel lining, characterized by a thickness t and by a permeability coefficient k_sup; Equation 7 is rewritten accordingly for the lining, and Equation 9, integrated from (R − t) (lining intrados, where p_w is nil) to R (lining extrados, where the water pressure is equal to p_w,ext), provides the corresponding relation. Assuming that p_w reaches the water pressure p_w,0 (undisturbed conditions at the tunnel depth) at a distance r = α·R, the value of the flow rate Q is obtained and, therefore, the expression of p_w can be rewritten. The water pressure on the tunnel wall, in correspondence of the lining extrados, is then obtained (Equation 14), together with the form taken by the derivative dp_w/dr.
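Although the individual equations are not reproduced here, the steady radial seepage solution they describe (Darcy flow through soil and lining in series, with p_w = 0 at the lining intrados and p_w = p_w,0 at r = α·R) can be sketched as follows. This Python reconstruction is not the authors' code, but for the example case analysed later in the paper it returns p_w,ext ≈ 0.195 MPa, the value quoted in the text, which supports the reconstruction.

```python
import numpy as np

def seepage_profile(R, t, k_soil, k_lin, alpha, pw0, gamma_w=9.81):
    """Steady radial Darcy flow towards a lined tunnel (per metre of tunnel axis).

    Assumes p_w = 0 at the lining intrados (r = R - t) and p_w = pw0 (undisturbed)
    at r = alpha * R, with head losses in series through lining and soil.
    Returns the flow rate Q, the pressure at the lining extrados pw_ext,
    and a function pw(r) valid for R <= r <= alpha * R.
    """
    # series "resistance" of the soil annulus and of the lining ring
    resistance = np.log(alpha) / k_soil + np.log(R / (R - t)) / k_lin
    Q = 2.0 * np.pi * pw0 / (gamma_w * resistance)
    pw_ext = Q * gamma_w * np.log(R / (R - t)) / (2.0 * np.pi * k_lin)

    def pw(r):
        return pw_ext + Q * gamma_w * np.log(np.asarray(r, dtype=float) / R) / (2.0 * np.pi * k_soil)

    return Q, pw_ext, pw

# Example case discussed later: R = 3 m, t = 0.3 m, k_sup = 0.1*k, alpha = 20,
# p_w0 = 0.75 MPa.  Only the permeability ratio matters for pw_ext; consistent
# units for k and gamma_w would be needed to give Q a physical meaning.
Q, pw_ext, pw = seepage_profile(R=3.0, t=0.3, k_soil=1.0, k_lin=0.1, alpha=20.0, pw0=0.75)
print(round(pw_ext, 3))  # -> 0.195 (MPa), matching the value quoted in the text
```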
Results
To analyze the trend of stresses, strains and radial displacements around the tunnel varying r and in the presence of groundwater, one should proceed with a numerical solution using the finite difference method. This solution, starting from a great distance from the tunnel center (high value of r), for which the derivative dp_w/dr is assumed nil and σ_r is set equal to a certain percentage of p_0, allows one to calculate the radial and circumferential stresses and the radial displacements for concentric rings, moving towards the tunnel wall, where r = R. At the generic ring the following calculation steps are developed:
• Evaluation of the radial stress σ_r on the inner radius of the ring from the derivative dσ_r/dr calculated on the outer radius in the previous step
• Evaluation of the radial displacement u on the inner radius of the ring from the derivative du/dr calculated at the outer radius in the previous step
• Determination of the circumferential stress σ_ϑ on the inner radius of the ring, on the basis of Equation 16a if the elasticity theory is valid or Equation 16b if the ring has elastic-plastic behavior (Ribacchi and Riccioni, 1977)
• Determination of the derivative du/dr on the inner radius of the ring, on the basis of Equation 17a if the elasticity theory is valid or Equation 17b if the ring has elastic-plastic behavior (Ribacchi and Riccioni, 1977)
• Determination of the derivative dσ_r/dr on the inner radius of the ring, on the basis of Equation 5, both if the elasticity theory is valid and if the ring has elastic-plastic behavior (Fahimifar et al., 2014; Li et al., 2014)
• Comparison of the circumferential stress σ_ϑ calculated in step 3 with the soil strength for the existing confinement stress σ_r: in case the circumferential stress is greater than the soil strength, the start of soil plasticization is detected and the plastic radius (distance from the tunnel center of the outer limit of the plastic zone) is evaluated
Continuing the procedure for concentric rings until reaching the tunnel wall (r = R), a pair of values σ_R-u is obtained; the series of σ_R-u pairs, each corresponding to a different initial value of σ_r at a great distance from the tunnel center, allows one to draw the convergence-confinement curve when groundwater is present. It is also possible to identify the plastic radius, that is, the value of the distance from the tunnel center where the transition from elastic to elastic-plastic behavior is detected.
By adopting the above solution for the case of a circular tunnel of radius R = 3 m and 75 m deep, it is possible to determine the influence of the groundwater presence on the convergence-confinement curve of the tunnel. The tunnel is excavated in a soil having cohesion c = 0.2 kPa, friction angle ϕ = 20°, dilatancy angle ψ = 20°, elastic modulus E = 350 MPa, Poisson's ratio ν = 0.3 and specific weight of the soil γ = 20 kN/m3. The presence of a lining with a thickness of 0.3 m and a permeability coefficient k_sup equal to 0.1·k (with k the permeability coefficient of the soil) is assumed. The parameter α was assumed equal to 20: that is, it is considered that at a distance equal to 20·R the piezometric height of the water table reaches the initial unperturbed value.
Two conditions of absence and presence of groundwater (with the free surface coincident with the ground surface) were evaluated. In the first case, the lithostatic stress state in the undisturbed conditions is equal to 1.5 MPa; in the second case, it is equal to 0.75 MPa in terms of effective stresses, since p_w,0 = 0.75 MPa.
The water pressure at the lining extrados (p_w,ext), calculated by Equation 14, is equal to 0.195 MPa. Figure 2 shows the convergence-confinement curve of the tunnel obtained from the proposed calculation procedure for the cases of absence (DC - dry condition) and presence (GF - groundwater flow) of groundwater. Figure 3 shows the trend of the plastic radius (extreme distance from the tunnel center of the plastic zone) varying the internal pressure on the tunnel perimeter. Figure 4 shows the trend of the soil stresses and the water pressure varying the distance from the tunnel center, for a nil total pressure on the tunnel wall; Fig. 5, finally, shows the trend of the radial displacements in the soil varying the distance from the tunnel center, for the two examined conditions (DC and GF) and for a nil total pressure applied on the tunnel wall.
Discussion
From the analysis of the convergence-confinement curve for the studied case (Fig. 2), it is possible to note that the presence of groundwater can significantly modify the stress and strain behavior of the soil around the tunnel. In fact, the convergence-confinement curve in the presence of a steady flow of water towards the tunnel involves, at constant total pressure applied internally to the tunnel perimeter, values of the radial displacement of the wall much greater than in the case of absent water flow. This has as a consequence a greater load acting on the tunnel lining. The load on the lining can be estimated by taking the intersection of the convergence-confinement curve of the tunnel with the reaction line of the lining (Oreste, 2014b). Furthermore, reducing the internal pressure, the plastic radius grows faster in the presence of groundwater (Fig. 3). For a nil internal pressure the plastic radius even varies from about 7 m (dry condition) to about 13 m (in the presence of the groundwater flow). The presence of a more extensive plastic zone around the tunnel involves a change in the pattern of the stresses (Fig. 4) and of the radial displacements in the soil (Fig. 5).
In conclusion, the presence of the groundwater flow involves a detensioning of the soil, an increase of the thickness of the plastic zone and a substantial increase of the radial displacements around the tunnel.
The effects of the presence of groundwater, therefore, are significant and cannot be neglected.
The calculation method proposed in this study, therefore, allows the analysis of the interaction between the groundwater flow and the soil in a simple and effective way, using a finite difference solution. The stresses and strains that are calculated in the soil around the tunnel are able to provide the convergence-confinement curve of the tunnel in the presence of groundwater and also lead to the evaluation, with some precision, of the thickness of the plastic zone. Furthermore, proceeding with the intersection of the convergence-confinement curve with the reaction line of the lining, it is possible, in a simple way, to evaluate the magnitude of the loads transmitted from the soil to the tunnel lining when a groundwater flow is present.
Conclusion
The presence of a flow of water in the soil pores towards the tunnel involves a worsening of the stability conditions that cannot be neglected. It is, therefore, necessary to proceed to the evaluation of the stress and strain state that occurs in the soil in the presence of groundwater.
The numerical methods allow one to study the complex mechanism of interaction between the water flow in the pores and the soil, but are generally too slow, especially when a three-dimensional problem is studied.
The analytical methods, which introduce some simplifying assumptions to the problem, are widely used in tunneling and do lead to the evaluation of the stress and strain state in the soil around the tunnel, in a simple and effective way.
A new calculation procedure using a finite difference solution, able to evaluate the stresses and displacements that develop in the soil in the presence of a water flow in the pores towards the tunnel, is presented in this study. This procedure allows one to obtain the convergence-confinement curve of the tunnel in the presence of groundwater and to determine the thickness of the plastic zone varying the internal total pressure applied to the tunnel perimeter.
The implementation of the procedure for a specific case has allowed the detection of the influence that the water flow in the pores has on the stress and strain behavior of the soil and on the pressure-displacement relationship of the tunnel wall.
Funding Information
The author has no support or funding to report.
Ethics
This article is original and contains unpublished material. The corresponding author confirms that all of the other authors have read and approved the manuscript and no ethical issues involved. | v2 |
2021-09-09T20:46:46.160Z | 2021-01-01T00:00:00.000Z | 237461179 | s2orc/train | Prediction of Nitrous Oxide (N2O) Emission Based on Paddy Harvest Area in Lampung Province Indonesia using ARIMA on IPCC Model
Agricultural soils are significant sources of N2O emission. Lampung, Indonesia is an area dominated by agriculture, including crops whose cultivation practices, especially fertilization, emit N2O: paddy and vegetables. The last census in 2015 recorded that paddy fields covered 1.321.120 ha and vegetables 99,284 ha, with fertilizer recommendations of 200 kg/ha urea (without organic materials) and 150 kg/ha urea (if added with 2 tons/ha manure). This study aimed to estimate and predict N2O emissions based on the paddy field area using the IPCC 2006 model. The IPCC model was applied to the paddy field data from 1993 to 2012 from the Indonesian Ministry of Agriculture to estimate the N2O emission, and the Box-Jenkins model was then used to predict the emission for the following years. The results showed that the predicted N2O emission in the following years would be in the range of 0.282-0.451 Gg/year using only synthetic fertilizer and, if added with organic fertilizers, would be 5,846-9,359 Gg/year. These results were lower compared to some countries; however, this does not imply that fertilizer recommendations in Lampung are safe, since the results came from the default factors of the model. More research should be conducted so that local emission factors become available and the fertilizer recommendations can be evaluated.
I. INTRODUCTION
Both nitrous oxide (N2O) and nitric oxide (NO) are important components of the global biogeochemical nitrogen (N) cycle that contribute to global warming and the deterioration of the atmospheric environment. The N2O concentration in the atmosphere is currently increasing at a rate of 0.2-0.3 percent yr-1, which is mainly attributed to the expansion and intensification of agricultural production [1]. Moreover, nitrous oxide (N2O) has a global warming potential (GWP) 298 times greater than that of carbon dioxide (CO2) on a 100-year horizon [2].
Agricultural production is a significant source of atmospheric N2O, contributing approximately 60 and 10 percent of global anthropogenic N2O and NO sources, respectively, largely due to increased fertilizer application in croplands. Specifically, agricultural soils are considered an important source of N2O and NO emission entering the atmosphere, globally releasing approximately 2.8 Tg N yr-1 and 1.6 Tg N yr-1. To feed the world's increasing population, considerable amounts of synthetic fertilizer will keep being applied to the soils to improve crop yield [1], [3], [4]. Although it results in increased N2O emissions, additional N is often applied either in the form of inorganic N fertilizers or organic amendments (e.g., crop residues, manure, compost, etc.) to prevent N limitations to crop growth [5].
Synthetic fertilizer is a major nitrogen supply for the agroecosystems in China. To produce enough food to feed its large population, with an intensively managed cropland area occupying only 7 percent of the global total, the annual consumption of synthetic N fertilizer in China accounted for 30% of the total global consumption in 2004, and China consumed 32.4 million tons of synthetic N fertilizer in 2007, constituting about one-third of the global total [3], [6], [7]. Compared to non-vegetable cropping systems such as rice-wheat rotations, much more nitrogen fertilizer is applied per unit area of vegetable fields. For instance, vegetable fields receive synthetic nitrogen fertilizers at a rate of approximately 1,000 kg N ha-1 yr-1 in the Tai-Lake region of China, whereas the amount applied to rice-wheat or rice-oilseed rape rotations is only around 500 kg N ha-1 yr-1. Besides, vegetable fields are usually treated with organic manure at the same time, with a quantity equivalent to at least half of the amount of synthetic nitrogen applied [1]. However, the estimation of N2O and NO emissions from croplands has large uncertainties since the sources and sinks of N2O and NO are not well characterized in different agroecosystems (rice paddies, grain upland croplands, vegetable cropping systems).
Rice is the staple food of 95% of the total Indonesian population, and ninety-five percent of the rice is produced from paddy rice cultivation, which mostly involves a full wetting period. Technically irrigated paddy rice areas were 4.4 million ha throughout Indonesia, and 60.8% were located on Java island [9]. The nitrogen fertilizer (chemical and organic) for paddy in Lampung recommended by the Government Agricultural Agency was 200 kg/ha N (without organic materials) and 150 kg/ha N with an additional 2 tons/ha of organic fertilizers. All countries that produce rice have realized that paddy fields have a potential to emit greenhouse gases, especially methane and nitrous oxide, and have tried to quantify them: in the Philippines [10], India [11], [12], Thailand [13], Japan [14], Ghana [15], and Latin America and the Caribbean [16].
IPCC developed a mathematical model to estimate N2O emission from atmospheric deposition of N volatilized from managed soils, based on the fertilizer N applied to soils [17]. The model typically assumes that there is a linear relationship between N2O emission and nitrogen (N) input from fertilizer, and therefore uses emission factors (EF). These IPCC national GHG inventory guidelines played an important role in fostering the incorporation of scientific evidence into national climate policy mechanisms [18]. In the current IPCC methodology, the total amount of N applied is considered as the major factor controlling N2O emission from agricultural soils. A single N2O emission factor of 1.25% of total N applied is used for all types of fertilizers and manures and application techniques. This suggests a linear relationship between the amount of N applied and the N2O emission [19]. However, a growing body of studies showed a non-linear, exponential relationship between N2O emission and N input [20]. Another study in Iowa, USA showed that the IPCC methodology may underestimate N2O emission in regions where soil rewetting and thawing are common and that conditions predicted by future climate-change scenarios may increase N2O emissions [21]. Similarly, the average overall emission factor for Mediterranean agriculture was 0.5%, which is substantially lower than the IPCC default value of 1% [22].
Increasing nutrient use efficiency and reducing nutrient loss in agricultural systems, while simultaneously improving crop yields, is a critical sustainability challenge facing food production. Therefore, this study aimed to estimate N2O emissions based on the paddy and horticulture field area and the recommended fertilizers for Lampung Province, Indonesia.
II. METHODS AND DATA
A. Methods
1) Estimation of annual N2O emission. N2O emission from the paddy field was calculated with the IPCC [17] Tier 1 mathematical model for indirect N2O emission from atmospheric deposition of volatilized N. The resulting N2O(ATD)-N emission was converted to N2O emission by multiplying by the molecular-weight ratio 44/28.
2) Forecasting annual emission. The paddy field area in Lampung was applied to the IPCC model to obtain annual N2O emissions for the province. These estimates were then used as the database to forecast the N2O emission range for the near future with the Box-Jenkins method (ARIMA model), which was developed through identification and estimation steps. The method, introduced by Box and Jenkins in 1970, is used to forecast a single variable [23]. In the identification step, the series was tentatively categorized as random, stationary, or seasonal, and as containing AR (autoregressive), MA (moving average), or combined ARMA (autoregressive moving average) processes. The next step was estimating the parameters of the tentative model; this step included nonlinear estimation, parameter tests, and model-fitness checks, after which the best ARIMA model for forecasting was selected. Details are presented along with the results.
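For illustration only, the sketch below mirrors the shape of this Tier 1 calculation in Python. The function name, the area and N-rate arguments, and the use of the IPCC 2006 defaults (Frac_GASF = 0.10, Frac_GASM = 0.20, EF4 = 0.010 kg N2O-N per kg N volatilized) are assumptions for demonstration; they are not the authors' implementation or their exact parameter set.

def indirect_n2o_from_deposition(area_ha, n_synthetic_kg_ha, n_organic_kg_ha,
                                 frac_gasf=0.10, frac_gasm=0.20, ef4=0.010):
    """Indirect N2O from atmospheric deposition of volatilized N, IPCC 2006 Tier 1 style.

    Returns the emission in Gg N2O per year for the given cultivated area.
    """
    f_sn = area_ha * n_synthetic_kg_ha                    # total synthetic N applied, kg N/yr
    f_on = area_ha * n_organic_kg_ha                      # total organic N applied, kg N/yr
    n2o_n = (f_sn * frac_gasf + f_on * frac_gasm) * ef4   # kg N2O-N from redeposited volatilized N
    return n2o_n * 44.0 / 28.0 / 1e6                      # convert N2O-N to N2O, then kg -> Gg

# Illustrative call: 400,000 ha of paddy at 150 kg N/ha synthetic plus 20 kg N/ha organic N
print(indirect_n2o_from_deposition(400_000, 150, 20))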
B. Data
This research used existing data: (1) paddy field and horticulture harvest data for Lampung from 1993-2012, obtained from the Lampung Statistical Bureau; and (2) the nitrogen fertilizer recommendation (synthetic, organic) for paddy fields in Lampung Province issued by Dinas Pertanian (the government agriculture office), namely 200 kg/ha urea without organic materials, or 150 kg/ha urea with 2 tons/ha of manure.
III. RESULTS AND DISCUSSIONS
The area of paddy field and the estimated N2O emission are presented in Table I. The next step was determining the autocorrelation and partial autocorrelation functions and plots of the emission series; the results are presented in Fig. 1 and 2. From these graphs, it could be determined whether the data showed random, stationary, cyclic, AR (autoregressive), or MA (moving average) patterns.
A. Random Pattern Test
The ACF can be used to determine whether a data series is random. A series can be categorized as random when the coefficients rk lie within the bounds ±Zα/2(1/√n), where Zα/2 is obtained from the standard normal table with α = 0.05 (95% level of confidence) and n is the number of observations. With n = 20 and Z0.025 = 1.96, the upper and lower bounds were ±0.438 (the dashed lines in Fig. 1 and 2). The ACF showed that r1 = 0.579 exceeded 0.438, meaning the autocorrelation coefficient at k = 1 was significantly different from 0, while for k > 1 all autocorrelations were not significantly different from 0. The same result was obtained from the PACF: for k = 1, r = 0.579, higher than 0.438. With these results, it could be concluded that the series was not purely random, since significant autocorrelation was present at the first lag.
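As a minimal sketch of this identification step, the Python fragment below computes the ACF and PACF and the 95% significance bound 1.96/√n for a 20-point series; the series itself is a randomly generated placeholder, not the emission data of Table I.

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# Placeholder standing in for the 20 annual emission estimates (Gg/yr)
rng = np.random.default_rng(0)
emission = 0.33 + 0.03 * rng.standard_normal(20).cumsum()

n = len(emission)
bound = 1.96 / np.sqrt(n)              # 95% bound; equals 0.438 for n = 20
r_acf = acf(emission, nlags=5)         # index 0 is lag 0 (= 1), index 1 is r1, ...
r_pacf = pacf(emission, nlags=5)

significant_lags = [k for k in range(1, 6) if abs(r_acf[k]) > bound]
print("bound:", round(bound, 3))
print("ACF lags 1-5:", np.round(r_acf[1:], 3))
print("PACF lags 1-5:", np.round(r_pacf[1:], 3))
print("ACF lags outside the bound:", significant_lags)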
B. Stationary Test
The autocorrelation coefficients after the second (k = 2) and third (k = 3) time lags, down to r4 (k = 4) = 0.069, approached zero. The ACF therefore did not show a tendency to decline slowly with increasing time lags, indicating that the data were stationary and that no differencing was needed. Accordingly, the order d = 0 was used in predicting the nitrous oxide emission.
C. Circular Test
The ACF did not show any repeating pattern; there was no indication that the autocorrelations at the second or third time lags were significantly different from zero. It could therefore be concluded that there was no seasonal effect in the data, and an ARIMA model without seasonal terms would be used.
D. The Autoregressive (AR) Test
The ACF also showed that the autocorrelations decreased exponentially (r1 = 0.579 > r2 = 0.346 > r3 = 0.266 > r4 = 0.069 > r5 = 0.036), approaching zero by the second and third time lags, a sign of an autoregressive (AR) process. The AR order was determined from the number of partial autocorrelations significantly different from zero; since only r1 (0.579 > 0.438) was significant, the order p = 1 was used for predicting the emission.
E. MA (Moving Average)
A moving-average component would be indicated by partial autocorrelations that decrease exponentially. No such pattern appeared in the data; therefore, the MA order for the emission prediction was q = 0. From all the identification steps above, the ARIMA model suitable for predicting nitrous oxide emission from paddy fields in Lampung was ARIMA (1,0,0). However, the model order should also be checked by trial and error so that the best model can be found; therefore, ARIMA (0,0,1) and ARIMA (1,0,1) were also fitted as comparisons. The results are presented in Table II. For ARIMA (0,0,1), with α = 0.05, | t | for the MA (1) parameter was higher than t0.025(24) = 2.064, meaning the estimated parameter was significantly different from zero (reject H0); the p-value for MA (1) was 0.001, lower than α = 0.05 (reject H0), so the model could be accepted. For ARIMA (1,0,1), with α = 0.05, | t | for the AR (1) parameter was higher than t0.025(23) = 2.069, meaning the estimated parameter was significantly different from zero (reject H0), and its p-value of 0.00 was lower than α = 0.05 (reject H0). However, for the MA (1) parameter, | t | was lower than t0.025(23) = 2.069, meaning the estimate was not significantly different from zero (accept H0); similarly, its p-value of 0.300 was higher than α = 0.05 (accept H0). This model was therefore rejected. Based on these two steps, there were two candidate models for predicting nitrous oxide emission, ARIMA (0,0,1) and ARIMA (1,0,0); their results are presented in Tables III and IV and in Fig. 3. One model had to be chosen to obtain the best prediction; a comparison between the models is presented in Table V. Based on the criteria, the ARIMA (1,0,0) model was chosen because it had the smaller mean square error, even though both models had simple equations. From the IPCC model, based on the paddy field area in Lampung Province from 1993 to 2012 and synthetic fertilizer only, the emission would be 0.272-0.394 Gg/year. Using the chosen model, the prediction for the near future would be in the range of 0.282-0.451 Gg/year.
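The model fitting, comparison, and short-term forecast described above can be sketched with the statsmodels library as below. The placeholder series, the choice of in-sample mean squared error as the comparison criterion, and the five-year forecast horizon are illustrative assumptions, not a reproduction of the paper's computation.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Placeholder for the 20 annual emission estimates obtained from the IPCC model (Gg/yr)
rng = np.random.default_rng(1)
emission = 0.33 + 0.03 * rng.standard_normal(20).cumsum()

candidates = {}
for order in [(1, 0, 0), (0, 0, 1), (1, 0, 1)]:
    res = ARIMA(emission, order=order).fit()
    mse = float(np.mean(res.resid ** 2))   # in-sample mean squared error of the residuals
    candidates[order] = (res, mse)
    print(order, "AIC:", round(res.aic, 2), "MSE:", round(mse, 5))

# Keep the candidate with the smallest mean squared error and forecast five years ahead
best_order = min(candidates, key=lambda o: candidates[o][1])
best_res = candidates[best_order][0]
print("chosen model:", best_order)
print("5-year forecast:", np.round(best_res.forecast(steps=5), 3))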
F. N2O Emission from Synthetic Fertilizers Combined with Organic Fertilizers
This study also estimated and predicted N2O emission from synthetic fertilizer (150 kg/ha) combined with organic fertilizer (2 tons/ha of manure), calculated from the same equations; the results are presented in Table VI. Following the same ACF and PACF procedures as above, the results are presented in Fig. 6 and 7. The dashed lines in Fig. 4 and 5 are the upper and lower bounds for a random series at the 95% level of confidence. With n = 20 and Z0.025 = 1.96, the bounds were ±0.438. On the ACF, r1 = 0.579 exceeded 0.438, meaning the autocorrelation coefficient at k = 1 was significantly different from zero, while for k > 1 all coefficients were not significantly different from zero. The same result was obtained from the PACF: at k = 1, r = 0.579 was higher than 0.438 and thus significantly different from zero, whereas for k > 1 all partial coefficients were not. It could be concluded that the series was not purely random, since significant autocorrelation was present at the first lag.
G. Stationary Test
The autocorrelation coefficients after the second (k = 2) and third (k = 3) time lags, down to r4 (k = 4) = 0.069, approached zero. The ACF therefore did not show a tendency to decline slowly with increasing time lags, indicating that the data were stationary and that no differencing was needed. Accordingly, the order d = 0 was used in predicting the nitrous oxide emission.
H. Circular Test
The ACF did not show any repeating pattern; there was no indication that the autocorrelations at the second or third time lags were significantly different from zero. It could therefore be concluded that there was no seasonal effect in the data, and an ARIMA model without seasonal terms would be used.
I. The Autoregressive (AR) Test
The ACF also showed that the autocorrelations decreased exponentially (r1 = 0.579 > r2 = 0.346 > r3 = 0.266 > r4 = 0.069 > r5 = 0.036), approaching zero by the second and third time lags, a sign of an autoregressive (AR) process. The AR order was determined from the number of partial autocorrelations significantly different from zero; since only r1 (0.579 > 0.438) was significant, the order p = 1 was used for predicting the emission.
J. MA (Moving Average)
A moving-average component would be indicated by partial autocorrelations that decrease exponentially. No such pattern appeared in the data; therefore, the MA order for the emission prediction was q = 0. From all the identification steps above, ARIMA (1,0,0) was determined as the tentative model for predicting nitrous oxide emission from paddy fields in Lampung. However, the model order should also be checked by trial and error so that the best model can be found; therefore, ARIMA (0,0,1) was also fitted as an alternative. The next step was estimating the model parameters; the results are presented in Table VII. For α = 0.05, | t | for the MA (1) parameter was higher than t0.025(24) = 2.064, meaning the estimated parameter was significantly different from zero (reject H0); its p-value of 0.001 was lower than α = 0.05 (reject H0), so the model could be accepted. For α = 0.05, | t | for the AR (1) parameter was higher than t0.025(23) = 2.069, meaning the estimated parameter was significantly different from zero (reject H0); its p-value of 0.00 was lower than α = 0.05 (reject H0), so this model could also be accepted. The N2O emission estimates from ARIMA (0,0,1) are presented in Table VIII and those from ARIMA (1,0,0) in Table IX. One model had to be chosen to obtain the best prediction; the criteria were the composite mean square value and the simplicity of the model, and a comparison between the models is presented in Table X. Based on these criteria, the ARIMA (1,0,0) model was chosen. Using the IPCC model based on the paddy field area in Lampung Province from 1993 to 2012 with both synthetic and organic fertilizers, the emission would be 5.649-8.167 Gg/year, and using the chosen ARIMA (1,0,0) model the prediction for the near future would be in the range of 5.846-9.359 Gg/year. There were no time-series data for the horticulture area; the most recent data showed that Lampung had 99,248 ha of horticulture in total. Following the IPCC (2006) model, this horticulture area would emit 0.062 Gg/year using only synthetic fertilizer and 1.294 Gg/year using both synthetic and organic fertilizer.
Since rice is the most important staple food in Indonesia, the government implemented a policy called Sustainable Land for Food Agriculture Protection: land is protected and developed for producing staple food to maintain food independence, security, and self-sufficiency [24]. The concern with respect to N2O emission is the intensive fertilizer application needed to reach production targets for a growing population. In general, adjusting the fertilizer N rate to a suitable level is crucial for reducing both N2O and NO emissions. At the same time, N use efficiency by crops should be improved by changing fertilizer application methods, placement, and timing, such as deep placement of urea fertilizer [25], [26]. Because denitrification and nitrification depend on a source of labile soil N, higher emissions of NO and N2O will occur at higher concentrations of N in the soil.
IV. CONCLUSIONS
The N2O emission in Lampung Province estimated with the IPCC 2006 model was lower than in some other countries. However, this does not imply that the fertilizer recommendations in Lampung are safe, since the results were obtained with the model's default values. More research should be conducted so that local emission factors become available and the fertilizer recommendations can be evaluated. | v2
2021-05-07T00:04:10.499Z | 2021-03-01T00:00:00.000Z | 233842489 | s2orc/train | Experiential Learning Using Solar Tracker Prototype In Industrial Automation Course
Experiential learning is based on the understanding that knowledge is created as the result of experience. It consists of four stages: abstract conceptualization, active experimentation, concrete experience, and reflective observation. Experiential-learning-based instruction is considered to have the characteristics expected of 21st Century Skills learning. This study aims to develop an experiential-learning-based instructional design for the Production Automation course using a solar tracker system prototype. The effectiveness of the learning design is measured from the student learning outcomes and responses to the questionnaire provided. The control system subject in the Production Automation course is considered to have abstract and complex learning material, so it requires visualization to facilitate understanding. The instructional design was developed using the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model. It was implemented in a class of 40 students in the Department of Mechanical Engineering Education at a state university in Central Java, Indonesia. Across the series of lectures in the Production Automation course, the instructional design developed with the experiential learning approach received a positive response from the students. The student learning outcomes were also good, with average scores above 80. This study shows that the experiential learning approach can stimulate student learning in a mechanical engineering subject.
Introduction
In-class learning is expected to support 21st Century Skills, in which students are encouraged to develop communication, collaboration, critical thinking and problem solving, as well as creativity and innovation. Experiential learning, which is based on the understanding that knowledge is created as the result of experience, is considered appropriate for achieving the skills required in the 21st century. Experiential learning consists of four stages: abstract conceptualization, active experimentation, concrete experience, and reflective observation [1]. According to Ayoub [2], experiential learning can lead learners to use their experience to improve their creativity in solving problems during learning. Experiential learning can also improve conceptual understanding, since students build their knowledge and skills through direct experience; this method emphasizes the role of active experience and student involvement [3].
Experiential learning has been applied to several subjects across several regions. It was used intensively at Ryerson University to help students understand electrical engineering concepts in mechatronics subjects [4]. In a Malaysian higher education institution, experiential learning was applied to foster creativity and innovation among students in support of generic skill requirements. Widiastuti et al. [5] applied the concept of experiential learning by implementing the four cycles of the Kolb approach in a heat transfer subject in mechanical engineering education.
This study is focused on the use of a prototype to support experiential learning. A solar tracker system prototype has previously been used for the subjects of (1) Virtual Instrumentation, (2) Control Systems, and (3) Computer Aided Design [6]. The use of prototypes can benefit learning since they are easily accessible and low-cost [7]. In the automated production system course, the control system subject is considered to have abstract and complex material. Therefore, the prototype in this subject is expected to help students visualize the material and improve their understanding.
This research was conducted so that the learning process is aligned with the 21st Century Skills learning program and so that students acquire the knowledge and skills required by the intended learning objectives. Without it, there is a concern that the learning process would not conform to the 21st Century Skills program and that the objectives of production automation learning would not be achieved.
Research Method
This study uses a qualitative method with a development research approach, applying the ADDIE development model to the learning instruments. The ADDIE approach was used for designing the instructional materials; its purpose is to describe the learning science and learning modules that complement the teaching materials [8]. The ADDIE development model is used to improve the quality of teaching material instruments [9].
Aldobie [10] states that the ADDIE development model is divided into five main stages: (1) Analysis, the first stage carried out when using the ADDIE model, covering aspects such as analysis of the students, analysis of the learning objectives, and development of the learning analysis. (2) Design, which contains the planning of the learning concepts to be implemented in this development. (3) Development, which contains the activities of developing and designing the instruction and learning materials and carrying out the learning. (4) Implementation, in which the plan is turned into direct practice. (5) Evaluation, the stage in which it is determined whether the development is acceptable or should be revised.
Data validity in this study was checked through expert judgement of content validity; the experts in question were lecturers of the production automation course at PTM FKIP UNS. The data analysis technique used a Likert scale; the percentage calculation based on a Likert scale can be used to determine the feasibility of the assessed module [11].
Result and Discussion
This research was conducted following the stages of the ADDIE development model. In the analysis stage, interviews were conducted with two lecturers of the production automation course. The interviews yielded several important points supporting this development research: (1) the production automation course aims to improve students' understanding of automated production systems; (2) experiential-learning-based instruction is compatible with the control system learning material because students will experiment a lot; and (3) the use of a prototype will help in experiential learning.
At the design stage, learning objectives were planned for each stage of experiential learning across two meetings.
Meeting 1: students get to know the functions of the control system components, namely sensors, controllers, and actuators; in active experimentation, students make block diagrams of the solar tracker system.
Meeting 2: concrete experience - students understand the types of sensors, controllers, and actuators in the solar tracker system prototype; reflective observation - students are expected to identify control systems in applications other than the solar tracker system; abstract conceptualization - students understand the types of sensors, controllers, and actuators; active experimentation - students can design a system and understand the components that must be used in the system.
In the development stage, the learning design was developed. The learning instruments produced include a lesson plan, a learning module, and the solar tracker system prototype. The lesson plan was developed in accordance with the experiential learning stages, with the solar tracker system prototype used to support learning.
Figure 1. Solar tracker system prototype
In the lesson plan, the learning steps that were developed are laid out. The development is based on experiential learning using the solar tracker system prototype, and the learning sequence is as follows.
Meeting 1. Concrete Experience: 1) the lecturer delivers the basic material of the automation system; 2) the lecturer presents a picture of the working mechanism of a water tank; 3) students are asked to answer questions based on the presented water tank mechanism. Reflective Observation: 1) students implement the control system elements in the water filling system; 2) students re-analyze the input, process, and output stages using an analogy. Abstract Conceptualization: 1) the lecturer uses the prototype so that students explore information and data, which are then used to answer the questions and increase understanding; 2) students are asked to explain the meaning of sensors, controllers, and actuators; 3) the lecturer provides an example of a system application and introduces the block diagram. Active Experimentation: 1) students observe the simulation results of the solar tracker system prototype based on the knowledge concepts that have been obtained; 2) students are asked to identify the control system components in the solar tracker system prototype; 3) students are asked to make a block diagram of the solar tracker system prototype application and explain its contents.
Meeting 2. Concrete Experience: 1) the lecturer displays the prototype to show the various types of sensors, controllers, and actuators used in each example of an automation system application; 2) the lecturer opens the discussion by asking which sensor, controller, and actuator components are used in the solar tracker system prototype. Reflective Observation: 1) the lecturer directs students to look at system applications broadly so that they can mention control system applications other than the solar tracker system prototype; 2) students are asked to show one example of a control system in several application areas (agriculture, traffic, smart city, building, health, etc.), then identify and explain the types of sensors, controllers, and actuators used. Abstract Conceptualization: 1) the lecturer uses the prototype so that students can dig up information and data that are then used to answer the questions; 2) the lecturer introduces the many types of sensors, controllers, and actuators; 3) students are asked to explain the types of sensors, controllers, and actuators. Active Experimentation: 1) students observe the prototype simulation results based on the knowledge concepts that have been obtained; 2) students propose a control system with an Arduino controller, draw its block diagram, and determine the sensor used and the working mechanism of the actuator in the system; 3) students are required to use the control system components according to their function in the control system design they make.
As for the learning module, it was assessed by the lecturers to determine whether it is suitable for use. Four aspects of the module were assessed: self-instructional, self-contained, adaptive, and user friendly. The lecturers' assessments were interpreted so that conclusions could be drawn. According to Risma Novita [11], a Likert-scale percentage calculation can be used to determine the feasibility of the assessed module, and each question element is then interpreted to draw a conclusion, with a result of 0%-20% interpreted as very weak and 81%-100% as very strong. The overall module assessment result was 81%, so the module has a very strong interpretation for use in experiential-learning-based instruction.
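As an illustration of this feasibility calculation, the short Python sketch below computes a Likert-scale percentage as the total score obtained divided by the maximum possible score and maps it to an interpretation category. The formula, the example scores, and the intermediate bands (only the 0%-20% and 81%-100% categories appear in the text above) are assumptions for demonstration.

def likert_percentage(total_score, max_score):
    """Feasibility percentage of the module from Likert-scale ratings."""
    return total_score / max_score * 100.0

def interpret(pct):
    # Only the outer bands are given in the text; the middle bands are assumed here.
    if pct <= 20:
        return "very weak"
    elif pct <= 40:
        return "weak"        # assumed intermediate band
    elif pct <= 60:
        return "moderate"    # assumed intermediate band
    elif pct <= 80:
        return "strong"      # assumed intermediate band
    return "very strong"

# Example: two expert raters scoring 20 module items on a 1-5 scale
scores = [4, 5, 4, 4, 3, 5, 4, 4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 4, 4, 4] * 2
pct = likert_percentage(sum(scores), max_score=5 * len(scores))
print(round(pct, 1), interpret(pct))   # 82.0 "very strong"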
The implementation stage was carried out to test the learning design that had been developed. The trial involved 40 students of the Department of Mechanical Engineering Education who took the automated production system course. Learning was carried out online in accordance with the lesson plan, with each session lasting 2 credits (2 x 50 minutes). The learning technique used was to provide learning videos related to the material to be delivered, together with a demonstration of the solar tracker system prototype.
In the evaluation stage, the results of using the developed learning design were examined. Student learning outcomes were used as an indicator of the students' level of understanding when the experiential learning design was tested. Based on the student learning outcomes, the mean class scores were 81.725 and 83.325 for the two meetings; interpreted on the grading scale, these correspond to an A- category, which indicates good performance.
In addition to learning outcomes, another indicator used to assess the implementation of experiential learning was the student response questionnaire. This study used a closed questionnaire containing 15 positive statements, distributed to 40 respondents, including the following:
2. Experiential learning arouses my curiosity towards the learning material
3. The learning model used is very satisfying
4. The learning is very interesting and not boring
5. The learning concept is simple
6. The learning process takes place sequentially and systematically
7. The learning is informative, so that the basic knowledge of the material being taught can be understood
8. The learning is realistic (according to actual conditions) and related to everyday life
9. Every activity in the learning helps improve my understanding
10. I feel actively involved in the learning
11. I feel challenged to use this learning model
12. Experiential-learning-based learning helps me to connect with and receive the subject matter
13. In this learning model I learned things I didn't know before
14. There was a change in the level of my belief and knowledge of the material
15. The learning process can develop my personal knowledge and skills
The responses were calculated using a Likert scale [11]. Based on the results, every question received a strong or very strong rating, so the respondents gave a positive response to the use of experiential learning with a prototype. This is in accordance with the opinion of Falloon [12], which states that experiential learning can be used to develop students' thoughts and concepts and to find the motivation and elements that make students interested. Therefore, the use of experiential learning can indeed attract students, and students will give positive responses to the learning. This is also in accordance with the results of Pamungkas [3], which state that the application of experiential learning is very suitable for mechanical engineering education because the learning model increases students' knowledge and learning outcomes.
Conclusion
In this research, a learning design based on experiential learning was developed, containing the four main stages of concrete experience, reflective observation, abstract conceptualization, and active experimentation. A solar tracker system prototype was used as a tool to provide an overview or visualization of the learning material presented to the students. This learning design was arranged in a lesson plan and equipped with a learning module.
When the experiential-learning-based design was trialled, students gave positive responses to the learning, as seen from the questionnaire given to students to assess the learning they had taken. The class average scores were also relatively high: across the two meetings, the average assignment scores were 81.725 and 83.325, both of which fall in the A- category and indicate good performance. This study shows that experiential learning can be an option for control system learning and can also be developed for other basic learning materials or competencies. | v2
2018-04-03T03:18:34.697Z | 2017-09-25T00:00:00.000Z | 2505399 | s2orc/train | TLR3 contributes to persistent autophagy and heart failure in mice after myocardial infarction
Abstract Toll‐like receptors (TLRs) are essential immunoreceptors involved in host defence against invading microbes. Recent studies indicate that certain TLRs activate immunological autophagy to eliminate microbes. It remains unknown whether TLRs regulate autophagy to play a role in the heart. This study examined this question. The activation of TLR3 in cultured cardiomyocytes was observed to increase protein levels of autophagic components, including LC3‐II, a specific marker for autophagy induction, and p62/SQSTM1, an autophagy receptor normally degraded in the final step of autophagy. The results of transfection with a tandem mRFP‐GFP‐LC3 adenovirus and use of an autophagic flux inhibitor chloroquine both suggested that TLR3 in cardiomyocytes promotes autophagy induction without affecting autophagic flux. Gene‐knockdown experiments showed that the TRIF‐dependent pathway mediated the autophagic effect of TLR3. In the mouse model of chronic myocardial infarction, persistent autophagy was observed, concomitant with up‐regulated TLR3 expression and increased TLR3‐Trif signalling. Germline knockout (KO) of TLR3 inhibited autophagy, reduced infarct size, attenuated heart failure and improved survival. These protective effects were abolished by in vivo administration of an autophagy inducer rapamycin. Similar to the results obtained in cultured cardiomyocytes, TLR3‐KO did not prevent autophagic flux in mouse heart. Additionally, this study failed to detect the involvement of inflammation in TLR3‐KO‐derived protection, as wild‐type and TLR3‐KO hearts were comparable in inflammatory activity. It is concluded that up‐regulated TLR3 expression and signalling contributes to persistent autophagy following MI, which promotes heart failure and lethality.
Introduction
The family of toll-like receptors (TLRs) serves as a critical component of the immune system [1]. Ten functional TLRs have been identified in humans. Among them, TLR 1/2/4/5/6/10 are expressed on the cell surface, whereas TLR3/7/8/9 are localized in intracellular vesicles such as the endoplasmic reticulum and endosomes [2]. By recognizing and binding invading microbes and endogenous danger molecules released from stressed cells, known as pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), TLRs initiate downstream signalling through two adaptor proteins, myeloid differentiation factor 88 (MyD88) and TIR-domain-containing adaptor inducing interferon-β (Trif). The MyD88-dependent pathway is universally used by all TLRs except TLR3; it activates nuclear factor-κB (NF-κB), a major inflammatory transcription factor, and induces inflammatory cytokine production. The Trif-dependent pathway can be activated by TLR3 and TLR4; it activates interferon regulatory factor (IRF) 3 and NF-κB and consequently induces the production of type I interferon and inflammatory cytokines [2].
In addition to the MyD88- and Trif-dependent immune pathways, autophagy has emerged as an effector mechanism downstream of TLRs. Autophagy is a conserved lysosomal degradation pathway responsible for eliminating protein aggregates and damaged organelles. In mammalian cells, autophagy comprises at least three distinct pathways: macroautophagy, microautophagy and chaperone-mediated autophagy. The term 'autophagy' generally refers to macroautophagy unless otherwise specified, and we hereafter refer to macroautophagy as autophagy. Autophagy occurs at basal levels and can be up-regulated by multiple stresses such as nutrient starvation and ischaemia. There is much controversy over the role of autophagy in cell survival and cell death under stress [4].
Several TLR subtypes have been described to induce autophagy in immune cells such as macrophages and dendritic cells, which facilitates elimination of invading pathogens [5][6][7][8]. However, little is known about the relation of TLRs to autophagy in other cell types. It is notable that non-immune cells, such as cardiomyocytes, express multiple TLR subtypes [9]. Although autophagy serves as an essential component for physiological and pathological processes in the heart [10], it remains unknown whether TLRs regulate autophagy in cardiomyocytes. Our interest was directed to TLR3, an intracellular subtype predominantly expressed by cardiomyocytes [11]. In a preliminary study, we observed that the activation of TLR3 enhanced autophagic activity in cultured cardiomyocytes. This study was designed to examine the effect of TLR3 on cardiac autophagy under conditions of ischaemic stress.
Materials and methods
Primary culture of neonatal rat ventricular myocytes (NRVMs)
Neonatal Sprague-Dawley rats were sacrificed by decapitation and used for the preparation of primary cultures of ventricular myocytes, as we described previously [12]. Briefly, the ventricles were removed, rinsed, minced and digested with 0.2% trypsin (Gibco Cat. 27250-018; Thermo Fisher Scientific Inc., Shanghai, China) in Ca2+- and Mg2+-free Hanks solution for repeated short time periods. Collected cells were filtered through a nylon mesh, centrifuged, resuspended in Dulbecco's modified Eagle's medium supplemented with foetal bovine serum, preplated, and then cultured in a humidified atmosphere of 5% CO2 at 37°C.
Use of tandem mRFP-GFP-LC3 to assess autophagic flux
To assess autophagic flux in response to TLR3 activation, a tandem mRFP-GFP-LC3 adenovirus (Hanheng Biotechnology Co Ltd., Shanghai, China) was transfected into cultured NRVMs for 24 hrs at a MOI of 50. The tandem mRFP-GFP-LC3 protein shows both red (mRFP) and green (GFP) fluorescence at neutral pH and forms yellow (red + green) puncta that represent autophagosome formation. When an autophagosome fuses with a lysosome, the GFP moiety is degraded because it is pH-labile, but mRFP-LC3 maintains the puncta, which then track the autolysosomes [14]. The relative ratio of red-only versus yellow puncta is an index of autophagic flux.
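As a simple illustration of this read-out, the sketch below computes the red-only to yellow puncta ratio from per-cell counts; the counts and the function name are hypothetical and do not come from the study's image analysis.

def autophagic_flux_index(yellow_puncta, red_only_puncta):
    """Ratio of autolysosomes (red-only puncta) to autophagosomes (yellow puncta)."""
    if yellow_puncta == 0:
        return float("inf")
    return red_only_puncta / yellow_puncta

# Hypothetical per-cell counts: both puncta types rise after poly(I:C),
# but the ratio stays constant, consistent with unchanged autophagic flux.
print(autophagic_flux_index(yellow_puncta=10, red_only_puncta=15))   # control: 1.5
print(autophagic_flux_index(yellow_puncta=20, red_only_puncta=30))   # poly(I:C): 1.5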
Mice model of myocardial infarction
TLR3−/− mice on the C57BL/6 background were purchased from the Jackson Laboratory (Stock No: 009675), and wild-type (WT) C57BL/6 mice were purchased from SIPPR-BK Laboratory Animal Co. Ltd., Shanghai, China. A mouse model of myocardial infarction (MI) was prepared as described previously [15]. Mice (8-10 weeks of age) were anaesthetized with 2% isoflurane mixed with oxygen (1.5 l/min). The adequacy of anaesthesia was checked by the lack of corneal reflex and of withdrawal reflex to toe pinch. The chest was depilated, a skin cut was made on the left side, and a small hole was made under the fourth rib using a mosquito clamp. The clamp was slightly opened to allow the heart to 'pop out' through the hole. The left anterior descending coronary artery (LAD) was then sutured and ligated with a 6/0 braided silk suture at a site approximately 2 mm from its origin. MI was confirmed by visual cyanosis of the heart. After ligation, the heart was immediately placed back into the intrathoracic space, and the chest was closed. Sham mice received the same procedure except that the LAD was not ligated.
At the end of the 4-week observation period and after echocardiography, the mice were euthanized by placing into a chamber filled with vapour of isoflurane until respiration ceased, and heart tissue was then collected for examination. In a subgroup of mice, the autophagy inducer rapamycin (2 mg/kg/day) or autophagic flux inhibitor chloroquine (50 mg/kg/day) was daily intraperitoneally injected, starting from 24 hrs after surgery and lasting through the observation period of 2 weeks.
All animal procedures were approved by the Animal Experiment Committee of Ningxia Medical University, in accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (8th Edition, 2011).
Haematoxylin and eosin (HE) staining and Masson's trichrome staining
After fixation, hearts were dehydrated with ethanol, coronally sectioned into halves along the long axis, embedded in paraffin blocks, consecutively sectioned into 5-µm-thick slices, and then stained with commercial reagents for HE or Masson's trichrome. In HE staining, nuclei are stained blue-purple by haematoxylin, whereas cytoplasm and extracellular matrix show varying degrees of pink staining. In Masson's staining, muscle fibres are stained purple-red, while collagen fibres are stained green-blue.
Infarct size measurement
The infarct size was determined with a length-based approach described previously [16]. Coronal slices of the heart were prepared and stained with Masson's trichrome as described above. Using Masson's images of whole coronal slices, the myocardial midline was drawn at the centre between the epicardial and endocardial surfaces of the left ventricle (LV), and the total length of the LV midline was recorded as the midline circumference. The midline infarct length was taken as the midline length of the infarct that included >50% of the whole thickness of the myocardial wall. Infarct size was calculated as the percentage of midline infarct length relative to the LV midline circumference.
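A minimal sketch of this length-based calculation is given below; the measured lengths are placeholder values rather than data from the study.

def infarct_size_percent(midline_infarct_length_mm, lv_midline_circumference_mm):
    """Infarct size as a percentage of the LV midline circumference."""
    return midline_infarct_length_mm / lv_midline_circumference_mm * 100.0

# Placeholder lengths traced from a Masson's trichrome image
print(round(infarct_size_percent(6.1, 12.2), 1))   # 50.0 %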
Echocardiographic examination
Transthoracic echocardiography was performed at the end of the observation period to determine heart function, using an ultrasonic apparatus with a 15-MHz probe (Voluson E8; GE Healthcare, General Electric Co., Farmingdale, NY, USA) [17]. Under isoflurane anaesthesia, the short-axis view of the mouse heart was acquired at the papillary muscle level in two-dimensional mode, and consecutive M-mode images in the short-axis view were recorded. Left ventricular end-diastolic diameter (LVEDD) and end-systolic diameter (LVESD) were measured from M-mode tracings, and fractional shortening (FS) was calculated as (LVEDD - LVESD)/LVEDD × 100%.
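For illustration, the fractional shortening formula can be computed as in the short sketch below; the LVEDD and LVESD values are hypothetical and are not taken from the study's echocardiographic data.

def fractional_shortening(lvedd_mm, lvesd_mm):
    """Fractional shortening (%) from M-mode left ventricular diameters."""
    return (lvedd_mm - lvesd_mm) / lvedd_mm * 100.0

# Hypothetical M-mode measurements for a sham-like and an infarcted-like heart
print(round(fractional_shortening(3.6, 1.8), 1))   # 50.0 %
print(round(fractional_shortening(5.2, 4.3), 1))   # 17.3 %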
Western blot and co-immunoprecipitation (co-IP) analysis
Western blot and co-IP experiments were performed as we described previously [18]. RIPA buffer containing protease inhibitors was used to extract total proteins from heart tissue and cultured cardiomyocytes, and protein concentration was determined using a bicinchoninic acid kit. For Western blot, 20 µg of total proteins was electrophoresed on a 10% SDS-PAGE gel and transferred onto a nitrocellulose membrane. The membrane was then blocked with 5% non-fat dried milk, probed with specific primary antibodies (1:500-1000) followed by peroxidase-conjugated secondary antibodies (1:1000), and visualized using chemiluminescence reagents. The Western blotting signal was quantified by densitometry. The primary antibodies against TLR3 (Cat. NB100-56571), MyD88 (Cat. NBP1-19785) and Trif (Cat. NB120-13810) were purchased from Novus Biologicals, LLC, Littleton, CO, USA; the anti-LC3 antibody (Cat. AL221) was from Beyotime Institute of Biotechnology, Jiangsu, China; the anti-beclin-1 antibody (Cat. 11306-1-AP) was from Proteintech Group, Inc., Rosemont, IL, USA; and the anti-p62 antibody (Cat. 5114) was from Cell Signalling Technology, Inc., Danvers, MA, USA.
Co-IP experiments were performed to examine the interaction between TLR3 and MyD88/Trif, and between TLR3 and LC3/beclin-1. For co-IP, the tissue lysates (300 µg of total proteins) were pre-incubated with 2 µg of IgG of the same isotype as the primary antibody to block non-specific binding, followed by incubation with 6 µg of anti-TLR3 antibody (Cat. sc-8691, Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) overnight at 4°C. Protein G-agarose beads were then added to precipitate the antibody complexes. The precipitates were then subjected to regular Western blot analysis as described above.
Immunohistochemical staining
Immunohistochemical staining was carried out to examine the expression of TLR3 in the heart. Paraffin-embedded heart slices were used for the staining. Briefly, the slices were dewaxed, microwaved to retrieve antigen, blocked with 5% BSA, incubated overnight at 4°C with an isotype IgG control antibody or with a primary antibody against TLR3 (diluted 1:50, Cat. sc-8691, Santa Cruz Biotechnology, Inc.), and then hybridized with horseradish peroxidase (HRP)-labelled secondary antibodies followed by the 3,3′-diaminobenzidine (DAB) chromogenic reaction. The brown DAB deposits were observed under a microscope.
Transmission electron microscopy
Transmission electron microscopy was used to observe autophagic vacuoles. Heart tissue was quickly cut into 1 mm cubes, fixed with 2.5% glutaraldehyde overnight at 4°C, immersed in 1% osmium tetroxide for 2 hrs, dehydrated in graded ethanol, embedded in epoxy resin and incised into ultrathin sections (60-70 nm). The sections were then double stained with uranyl acetate and lead citrate and examined under a Hitachi H-7650 transmission electron microscope (JEOL, Peabody, MA, USA).
Statistics and data analysis
All the data are expressed as means ± S.D. Differences between multiple groups were analysed by one-way analysis of variance (ANOVA) followed by Fisher's least significant difference (LSD) test, using SAS 9.0 statistical software (SAS Institute Inc., Cary, NC, USA). Differences between two groups were analysed by unpaired t-test. A P < 0.05 (two-tailed) was considered statistically significant.
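As a rough illustration of this analysis, the sketch below runs a one-way ANOVA followed by pairwise comparisons on made-up group data; it approximates Fisher's LSD with simple pairwise t-tests rather than reproducing the SAS procedure, and all group values are hypothetical.

import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Hypothetical measurements for three groups
sham = np.array([1.2, 0.8, 1.5, 1.1, 0.9])
wt_mi = np.array([5.1, 6.2, 4.8, 5.9, 5.5])
ko_mi = np.array([1.9, 2.3, 1.4, 2.0, 1.6])

f_stat, p_overall = f_oneway(sham, wt_mi, ko_mi)
print("ANOVA: F =", round(f_stat, 2), ", P =", round(p_overall, 4))

# LSD-style follow-up: pairwise t-tests only when the overall ANOVA is significant
if p_overall < 0.05:
    pairs = {"sham vs WT-MI": (sham, wt_mi),
             "sham vs KO-MI": (sham, ko_mi),
             "WT-MI vs KO-MI": (wt_mi, ko_mi)}
    for name, (a, b) in pairs.items():
        t, p = ttest_ind(a, b)
        print(name, ": P =", round(p, 4))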
TLR3 agonist induced autophagy in cultured cardiomyocytes through a TRIF-dependent pathway
To examine the effect of TLR3 on cardiac autophagy, we used a synthetic ligand for TLR3 (polyinosinic-polycytidylic acid, poly (I:C)) to treat cultured cardiac myocytes, and examined autophagy markers including microtubule-associated protein 1 light chain 3 (LC3), beclin-1 and p62/SQSTM1. LC3 is a specific marker for autophagy initiation. It exists as an 18-kD cytosolic LC3-I form in resting cells and is lipidated upon autophagy induction to produce autophagosome-associated LC3-II, which migrates in SDS-PAGE as a 16-kDa protein [19,20]. The quantification of LC3-II protein level normalized to a loading control is essential for autophagy measurements [21]. P62/SQSTM1 is an ubiquitinbinding adaptor protein that is removed in the final digestion step of autophagy. It acts as an autophagy receptor, binding directly to LC3 to facilitate degradation of ubiquitinated protein aggregates [22,23]. Using the method of Western blot, we observed that treatment with poly(I:C) in myocytes induced a large increase in LC3-II proteins, accompanied by relatively smaller increases in LC3-I, beclin-1 and p62/SQSTM1 proteins (Fig. 1A). These results suggested that poly(I:C) up-regulated autophagic activity in cardiac myocytes.
As the increase in LC3-II may result from either induction of autophagy or reduction in autophagic flux (the rate of transit of autophagosome cargo through lysosomal degradation), we subsequently examined whether TLR3 affects autophagic flux. A tandem mRFP-GFP-LC3 adenovirus was transfected into NRVMs for 24 hrs, followed by treatment with poly(I:C) (100 µg/ml, 4 hrs). Significant increases in the numbers of autophagosomes (yellow puncta) and autolysosomes (red-only puncta) were observed after poly(I:C) treatment (Fig. 1B). However, the relative ratio of red-only to yellow puncta was unchanged, suggesting no change in autophagic flux. Furthermore, we determined LC3-II and p62/SQSTM1 protein levels in the absence and presence of chloroquine (CQ), an autophagic flux inhibitor that prevents autophagosome-lysosome fusion and lysosomal degradation [24]. As shown in Fig. 1C, CQ led to similar accumulations of LC3-II and p62/SQSTM1 proteins in the presence or absence of poly(I:C), suggesting that TLR3 activation did not affect autophagic flux. Taken together, it is suggested that autophagy induction was enhanced by TLR3 activation in cardiac myocytes, whereas autophagic flux remained intact.
To further dissect the signalling mechanism of TLR3-mediated autophagy, MyD88 and Trif were individually knocked down by siRNAs in NRVMs. The results showed that Trif siRNA attenuated the increases of LC3-II and p62/SQSTM1 caused by poly(I:C) (Fig. 1D). In contrast, neither negative control (NC) nor MyD88 siRNA showed significant effects. The above data suggest that TLR3 stimulation promotes autophagy induction in cardiac myocytes through a TRIF-dependent pathway.
TLR3-knockout inhibited MI-induced persistent autophagy in mouse heart
To investigate the potential autophagic effect of cardiomyocyte TLR3 in vivo, we employed the model of MI, which has been described to drive autophagy in the heart [10,25]. The function of TLR3 in MIinduced autophagy was examined here using TLR3-knockout (TLR3-KO) mice.
Firstly, to examine autophagy, we monitored the dynamic changes of LC3 and p62/SQSTM1 in infarct myocardium after MI. The results (Fig. 2) showed that LC3-II accumulated in a time-dependent manner. A mild increase of LC3-II was detected on day 4 after MI, which was dramatically enhanced by 2 weeks and remained significantly up-regulated at 4 weeks. The level of p62/SQSTM1 increased in a similar pattern to LC3-II. Theoretically, a p62/SQSTM1 increase can result from enhanced autophagy induction and/or decelerated autophagic flux. An autophagic stimulus typically induces an early increase in p62/SQSTM1, followed by clearance of p62/SQSTM1 associated with autophagosome cargo. Consequently, when flux is intact, p62/SQSTM1 barely changes at equilibrium during up-regulated autophagy, whereas when flux is impaired, p62/SQSTM1 rises dramatically [21]. Here we detected increases of p62/SQSTM1 after MI, which were likely attributable to persistent activation of autophagy. Supporting this, while p62/SQSTM1 dropped by day 28 compared to day 4 and day 14, LC3-II consistently remained at a high level (Fig. 2). Furthermore, the autophagic flux inhibitor CQ increased LC3-II and p62/SQSTM1 in infarcted hearts of wild-type mice, as shown afterwards (Fig. 6A). It is suggested that autophagy was persistently induced in the heart after MI, whereas autophagic flux was barely affected.
Secondly, to examine the endogenous TLR3 activity after MI, we determined the expression of TLR3 and its binding activity with Trif in infarct hearts. As shown in Fig. 3, both the mRNA and protein levels of TLR3 were significantly increased in the hearts of WT mice after 4 weeks of MI. Compared to the sham-operated hearts, TLR3 mRNA levels were increased by 3.4-and 2.7-folds (Fig. 3A), and TLR3 protein levels were increased by 5.9-and 4.4-folds (Fig. 3B), respectively, in the infarct and remote zones of MI hearts. The immunohistochemical staining showed remarkably strong reactivity for TLR3 in cardiomyocytes of both the infarct and non-infarct zones, while much less reactivity was shown for myocytes of sham-operated hearts. Also, TLR3-positive immunoreactivity was observed for part of the infiltrating cells in the infarct and border zones (Fig. 3C). Collectively, TLR3 expression in cardiomyocytes was enhanced after MI. In parallel to increased expression, the co-IP assay revealed increased binding between TLR3 and Trif in infarct hearts (Fig. 3D), suggesting the activation of TLR3-Trif signalling. Considering the pro-autophagic effect of TLR3 in cultured cardiomyocytes (Fig. 1), the up-regulated expression and signalling activity of TLR3 in MI indicate that TLR3 may potentially contribute to MI-induced autophagy.
Thirdly, to determine whether TLR3 contributes to MI-induced autophagy, we examined cardiac autophagic activity under both normal and ischemic conditions in TLR3-KO hearts. For sham-operated hearts, no differences in autophagic markers were observed between WT and TLR3-KO groups. In WT hearts subjected to 4 weeks of LAD ligation, significantly increased protein levels of LC3-I, LC3-II, beclin-1 and p62/SQSTM1 were observed in the infarct area, and relatively small increases were observed in the remote area. TLR3-KO hearts had uniformly decreased levels of LC3-I, LC3-II, beclin-1 and p62/SQSTM1, typically in the infarct area (Fig. 4A). Also, morphological data obtained by electron microscopy showed that autophagic vacuoles, which were abundant in infarcted WT hearts (Fig. 4B), decreased in number by 67% in infarcted TLR3-KO hearts (WT: 5.5 ± 1.9, TLR3-KO: 1.8 ± 1.5 per 100 mm², P < 0.01). It is shown here that autophagic activity was decreased in TLR3-KO hearts, suggesting that endogenous TLR3 promotes cardiac autophagy after MI.
Fourthly, we further examined the physical association between TLR3 and two proteins essential for autophagy initiation, LC3 and beclin-1. The co-IP assay (Fig. 4C) showed a detectable binding between TLR3 and LC3-I in sham hearts of WT mice, which became more evident in infarct hearts. In contrast, no visible binding was observed between TLR3 and beclin-1. These data support that TLR3 is involved in MI-induced autophagy.
TLR3-knockout attenuated heart failure and improved survival in mice subjected to MI
To uncover the functional role of TLR3-mediated autophagy in MI, we compared cardiac morphology, function and survival rate between WT and TLR3-KO mice. Mice that died within the observation period (4 weeks) were counted only for the calculation of survival rate and were excluded from all other analyses.
The histological staining for HE and Masson's trichrome revealed no morphological difference between WT and TLR3-KO hearts receiving sham operation. However, the cardiac injury induced by MI was morphologically improved in the absence of TLR3. In Masson's staining, significant fibrosis manifested by large blue areas was observed for the infarct area, while mild fibrosis was observed for the remote area in both WT and TLR3-KO hearts (Fig. 5A). The collagen volume fraction calculated from microscopic Masson's images was comparable in the infarct area between WT and TLR3-KO groups, whereas smaller in the remote area of TLR3-KO hearts than that of WT hearts (Fig. 5B). The infarct size of WT mice was 50.2 AE 3.6 %, which was significantly decreased to 36.6 AE 6.2 % in TLR3-KO mice (Fig. 5C).
In the M-mode ultrasound images taken at the midpapillary level, sham-operated WT and TLR3-KO mice showed similar parameters. After 4 weeks of MI, significant increases in left ventricular end-systolic and end-diastolic diameters, with a large decrease in fractional shortening, were seen for WT mice. In contrast, the above changes were consistently attenuated in TLR3-KO mice (Fig. 5D). These data suggest that TLR3-KO attenuated congestive heart failure derived from MI.
The survival rates after MI were compared up to 4 weeks between WT and TLR3-KO mice. The rate of death during surgery was approximately 10%, with no difference between WT and TLR3-KO mice. After the surgery, all the sham-operated mice, either WT (n = 20) or TLR3-KO (n = 18), survived through the observation period of 4 weeks. However, only 50% of the infarcted WT mice survived by 4 weeks. The knockout of TLR3 significantly increased survival rate to 81.4% (Fig. 5E).
Autophagy induction abolished the protection of TLR3-knockout against MI
To further examine whether reduced autophagy in TLR3-KO mice contributes to improved survival and heart protection against MI, we applied the autophagy inducer rapamycin daily for 2 weeks, starting from 24 hrs after LAD ligation. The results showed that rapamycin effectively increased autophagic activity in both sham and LAD-ligated mice (Fig. 6A). Coinciding with the induction of autophagy, the infarct size was enlarged (Fig. 6B) and the post-infarct heart function deteriorated in TLR3-KO mice (Fig. 6C). These results demonstrate that the reduction in autophagy contributes to the protection conferred by TLR3 knockout. To verify whether TLR3 affects autophagic flux in vivo, we intraperitoneally injected CQ for 2 weeks and observed similar accumulations of LC3-II and p62 in WT and TLR3-KO myocardium (Fig. 6A). This result, in accordance with that in cultured myocytes (Fig. 1C), suggests that autophagic flux is not affected by TLR3-KO.
TLR3 is known to regulate cellular inflammation via the transcription factors NF-κB and IRF3 [2,26] and to play an essential role in virus-induced cardiac inflammation [27,28]. To discriminate the potential role of TLR3-mediated inflammation in MI, we determined cardiac inflammatory activity. As shown in Fig. 7, the basal level of cardiac cytokine expression is comparable between WT and TLR3-KO mice, and MI induced similar increases in both groups. In line with this result, while robust inflammatory cell infiltration was present in the infarct and border areas of WT hearts, there was no visible difference in TLR3-KO hearts, as shown by the HE and Masson's staining (Fig. 5A). These data indicate that myocardial inflammation is not affected by TLR3-KO. Therefore, the involvement of inflammation in TLR3-KO-mediated autophagy inhibition and cardiac protection can be excluded.
Discussion
TLRs are a family of innate immune receptors that are essential for recognizing PAMP and DAMP molecules. They are expressed by a variety of immune and non-immune cells including cardiomyocytes [9]. Although autophagy has been linked to TLR signalling, most of the knowledge was obtained from immune cells [5][6][7][8]. The action of TLRs on autophagy in cardiomyocytes remains unknown. This study treated cultured cardiomyocytes with a TLR3 agonist, either alone or in the presence of a lysosomal inhibitor, and observed that autophagy induction was stimulated by TLR3, whereas autophagic flux remained intact. Pathway dissection using siRNA knockdown techniques showed that TLR3 induced autophagy through the Trif-dependent pathway, which was verified by the co-IP analysis showing physical association between TLR3 and Trif but not MyD88. To identify the potential autophagy-inductive role of TLR3 in vivo, the present study employed the mouse model of MI, a condition that has been described to induce autophagy [10,25]. We observed that over 4 weeks of MI, cardiac autophagy was persistently enhanced, accompanied with increased expression of TLR3 and its association with Trif. The knockout of TLR3 significantly attenuated autophagy, prevented heart failure and improved survival, which was abolished by an autophagy inducer. Taken together, this study shows that TLR3 plays a role in persistent autophagy after MI, which contributes to heart failure and lethality.
Autophagy is an essential process for cells to maintain homoeostasis. It enables cells to clean their interiors by forming double-membraned organelles called autophagosomes, which deliver excessive or aberrant organelles and protein aggregates to the lysosomes for degradation [29]. During cell starvation or stress, autophagy is required for organelle turnover, protein degradation and recycling of cytoplasmic components. As a common process, autophagy has been widely characterized in various cell types including cardiomyocytes [30]. It is notable that proper autophagic activity is critical for normal maintenance of cardiac homeostasis. Either excessive or insufficient levels of autophagic flux can contribute to cardiac pathogenesis [10]. A variety of cardiac stresses, including ischemiareperfusion, pressure overload and heart failure, have been shown to be related to autophagy. Evidence of autophagy in human heart diseases was first reported in patients with dilated cardiomyopathy [31], and later in patients with other cardiac disorders [32][33][34].
Autophagy is a highly dynamic process that needs to be carefully assessed. Commonly used measurements for autophagy include Western blot for LC3 and detection of autophagic puncta. The conversion of LC3-I to LC3-II through lipidation is a recognized hallmark of autophagy induction, which can be monitored by Western blot that identifies the electrophoretic mobility shift from the slower-migrating non-lipidated LC3-I to the faster-migrating lipidated LC3-II [19,20]. The formation of autophagosomes is essential in autophagy detection, which can be measured by electron microscopy and puncta formation of fluorescent-tagged LC3 proteins in the cytoplasm under fluorescence microscopy. However, snapshot measurements of LC3 and autophagosomes without measuring autophagic flux are incomplete. A recent review by Gottlieb et al. elaborated on autophagy measurements and emphasized the need to assess autophagic flux, which can be assessed by a tandem RFP-GFP-LC3 construct and be inferred directly through lysosomal blockade or indirectly from the level of p62/SQSTM1 [21,35].
The activation of TLR3 was previously observed to induce autophagy in immune cells such as macrophages [8,36,37], while little is known in non-immune cells. This study for the first time described a role of TLR3 in cardiac autophagy and examined the effect of TLR3 on autophagic flux. The induction of autophagy by TLR3 in immune cells was judged based on the evidence of LC3-II formation and autophagosome accumulation [6][7][8]. This study observed similar results in cardiomyocytes (Fig. 1). However, previous observations were limited to snapshot measurements of LC3 and autophagosomes. Given that enhanced autophagy initiation or decelerated autophagic flux each may cause increases in LC3-II and autophagosomes [21], it is necessary to examine whether TLR3 affects autophagic flux. We herein addressed this question. Using a tandem mRFP-GFP-LC3 adenovirus, we detected no change in autophagic flux (Fig. 1B). Also, using CQ to block autophagic flux, we observed further increases of p62 in response to TLR3 activation (Fig. 1C), suggesting that autophagic flux remains intact upon TLR3 activation.
As to the signalling cascade leading to autophagy induction after TLR activation, previous studies in immune cells have shown the requirement for MyD88 and Trif in immunological autophagy mediated by different TLR subtypes [7,8,36]. Shi et al. showed that TLR3 signalling uses Trif, but not MyD88, to trigger autophagosome formation in macrophages [36]. [Fig. 6 legend: Autophagy induction abolished the protection of TLR3-KO against MI. An autophagy inducer rapamycin (Rapa, 2 mg/kg/day) or an autophagic flux inhibitor chloroquine (CQ, 50 mg/kg/day) was injected intraperitoneally daily for 2 weeks, starting from day 1 after surgery. Normal saline (NS) was injected as control. Measurements were taken at 2 weeks. (A) Representative Western blot images and quantitative data of LC3 and p62 proteins in infarct tissue. Rapamycin increased LC3-II in all the groups, suggesting successful induction of autophagy. CQ (blue bars) induced similar accumulations of LC3-II and p62 in WT and TLR3-KO myocardium, suggesting that autophagic flux was comparable between the two groups.] In accordance with this, we observed physical association between TLR3 and Trif, but not MyD88, in sham heart tissue, which was increased in infarct hearts (Fig. 3C). In cultured cardiomyocytes exposed to TLR3 agonists, the ablation of Trif, rather than MyD88, remarkably prevented autophagy induction (Fig. 1D). Collectively, these results indicate that cardiac TLR3 signalling is dependent on Trif rather than MyD88, and that Trif links cardiac TLR3 to autophagy induction.
Multiple studies have demonstrated that cardiomyocyte autophagy is activated during myocardial ischaemia [10]. However, whether the up-regulated autophagy is adaptive or maladaptive is not well defined. While some studies show cardioprotective effects of autophagy under ischaemic stress [38][39][40], analysis of hearts from patients with end-stage heart failure suggests autophagic death as the most prominent mechanism for the death of cardiomyocytes [41,42]. Minatoguchi's group observed persistent increases in LC3-II and p62 in infarct hearts over the observation period of 3 weeks, and the most active formation of autophagosomes in remote areas at 3 weeks, as shown by LC3-positive dots in immunofluorescence staining [25]. The present study observed similar up-regulation of LC3-II and p62 following MI (Fig. 2), except that relatively high levels of LC3-II and p62 were observed in infarct areas compared with remote areas (Fig. 4A). This discrepancy might be the result of different assay methods. Besides, Minatoguchi's group reported that food restriction (FR) prevented post-infarction heart failure by 'enhancing autophagy' [43], whereas we claim here that TLR3-KO generated similar protection by 'inhibiting autophagy'. These results appear paradoxical, but may be explained from different aspects of autophagy dynamics. In the FR study, an increased LC3-II/LC3-I ratio was used as an indicator of enhanced autophagic activity [43]. Although this ratio has been used by many studies, it is actually fickle and unreliable, as pointed out by a recent review [21]. Instead, the level of LC3-II normalized to a protein loading control is proper for flux measurements. When just looking at LC3-II shown by the Western blot images in the FR study (Fig. 3B in reference [43]), we see that FR abolished the increase of LC3-II induced by MI, as well as the increase of p62. Furthermore, LC3-II and p62 in the FR group were both greatly increased after blockade with CQ. In our opinion, these data suggest that FR accelerates autophagic flux and reduces autophagic cargo accumulation, rather than 'enhancing autophagy'. In contrast, we herein observed that TLR3 stimulated autophagy induction without affecting autophagic flux. In addition, our results on rapamycin are in conflict with several previous reports [25,44,45]. Kanamori et al. started daily injection of rapamycin after 2 weeks of MI and detected protective effects after 1 more week [25]. Buss et al. reported that everolimus, a drug similar to rapamycin, attenuated ventricular remodelling and dysfunction in rats subjected to MI [44]. Wu et al. observed that rapamycin prevented MI-induced NF-κB activation and attenuated cardiac remodelling and dysfunction [45]. We, on the contrary, observed damage from rapamycin. These discrepancies may result from differences in species, severity of MI and the timing and regimen of drug administration. More strikingly, the highly dynamic nature of autophagy may produce fickle results. Either over- or under-activated autophagy and/or autophagic flux could be harmful. This likely makes it very difficult to intervene in autophagy-associated diseases.
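To make the measurement point above concrete, the short sketch below contrasts the LC3-II/LC3-I ratio with LC3-II normalized to a loading control using purely hypothetical densitometry values; the numbers, sample names and the GAPDH loading control are illustrative assumptions, not data from this study or from reference [43].

```python
# Hypothetical densitometry values (arbitrary units) illustrating why the
# LC3-II/LC3-I ratio and loading-control-normalized LC3-II can disagree.
samples = {
    #            LC3-I  LC3-II  GAPDH (loading control)
    "control": (10.0,   5.0,   20.0),
    "treated": ( 2.0,   4.0,   20.0),
}

for name, (lc3_i, lc3_ii, gapdh) in samples.items():
    ratio = lc3_ii / lc3_i          # sensitive to LC3-I turnover
    normalized = lc3_ii / gapdh     # LC3-II relative to a loading control
    print(f"{name}: LC3-II/LC3-I = {ratio:.2f}, LC3-II/GAPDH = {normalized:.2f}")

# In this made-up case the ratio rises four-fold (0.50 -> 2.00) even though
# normalized LC3-II actually falls (0.25 -> 0.20); the ratio alone could be
# read as 'enhanced autophagy' while the autophagosome marker itself has not
# accumulated.
```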
A remarkable effect downstream of TLR activation is the innate immune response, manifested as the production of inflammatory cytokines. TLR2 and TLR4 have been demonstrated to mediate cardiac inflammatory responses under ischaemic stress [9,46]. To examine whether TLR3 contributes to MI-induced inflammation, we herein determined inflammatory cytokine expression in WT and TLR3-KO hearts. The results showed comparable levels under both basal and MI conditions (Fig. 7). Inflammatory cell infiltration was also similar in WT and TLR3-KO hearts subjected to MI (Fig. 5A). These data suggest that TLR3 is not involved in myocardial inflammation after MI. Previous studies testing TLR3-KO mice in the model of myocardial ischemia/reperfusion (I/R) have reported conflicting results. Chen et al. reported that TLR3-Trif signalling had no impact on myocardial cytokines or neutrophil recruitment after I/R [47], but Lu et al. reported reductions of cytokine production as well as inflammatory cell infiltration in TLR3-KO hearts subjected to I/R [48]. We previously observed that TLR4 played a role in MI-induced inflammation [49]. However, the present study failed to detect a role for TLR3. An underlying reason might be that TLR3 plays only a minor role in cardiac inflammation, whereas TLR2 and TLR4 are predominant. Despite that, this study demonstrates the induction of cardiac autophagy upon TLR3 activation, which contributes to the persistently activated autophagy, heart failure and lethality following MI.
In summary, the present study first observed that TLR3 stimulated autophagy induction in cardiomyocytes without affecting autophagic flux. In mice subjected to MI-induced persistent autophagy, TLR3-KO attenuated autophagy, reduced infarct size and improved heart failure and survival. It is highlighted that immunoreceptors may play an important role in post-MI heart failure and lethality through regulating autophagy, while the underlying molecular signals need more investigation. [Figure legend: The mRNA levels of the inflammatory cytokine markers tumour necrosis factor-α (TNF-α) and interleukin-6 (IL-6) in heart tissue after 4 weeks of myocardial infarction were determined by real-time RT-PCR and normalized to 18S ribosomal RNA transcript levels. Data are means ± S.D., n = 4-5 mice/group.] | v2
2020-07-27T01:00:27.773Z | 2020-07-24T00:00:00.000Z | 220768609 | s2orc/train | Considerations for Eye Tracking Experiments in Information Retrieval
In this survey I discuss ophthalmic neurophysiology and the experimental considerations that must be made to reduce possible noise in an eye-tracking data stream. I also review the history, experiments, technological benefits and limitations of eye-tracking within the information retrieval field. The concepts of aware and adaptive user interfaces are also explored that humbly make an attempt to synthesize work from the fields of industrial engineering and psychophysiology with information retrieval.
INTRODUCTION
On the nature of learning, I think about my son. A 1-year old at the time of this writing, he plays by waving his arms, looking around, yelling, and putting his mouth on every object in sight. In these moments, I observe him without explicit instruction unless of course danger lurks. His sensorimotor connections to the world around him provides information on the rewards and penalties he needs to be well-adjusted -behave optimally, safely, curiously. Learning through interaction is the foundation of our existence. Equally, we can take a computational approach to this information interaction in the context of human and machine where now the roles are reversed. The machine is the human, and the human, the environment.
What would we need to understand in order to interact with an information system (machine) with our eyes or have the machine interact with us based on what it perceives in our eyes? Well of course, the machine would require a direct interface to an eye-tracking device which would provide a data stream. Consider gaze point as an example signal.
What are the operational characteristics of this signal? Investigation of the speed and sensitivity of the signals is a fundamental objective for this interaction to make sense. Additionally, a human knows precisely when they wish to click, touch, or use their voice, to execute interactions. What can we say about the machine? How would the machine learn when to provide a context menu, retrieve a specific document, adjust the presentation, or filter the information? If such a system existed, how would we democratize it? As I will discuss later in great detail, Pupil Center Corneal Reflection (PCCR) eye tracking devices are extraordinarily expensive and thus research with them becomes self-limiting for building real-time adaptive systems as I have outlined above.
Generally, the traditional methodology for information retrieval experiments has been to study gaze behavior and then report the findings in order to optimize interface layout or improve relevance feedback. If I were to ask where the technology is and how I can interact with it, what we would find is that it is confined to aseptic laboratories. Sophisticated eye trackers utilize infrared illumination and pupil center corneal reflection methods to capture raw gaze coordinates and classify the ocular behavior as an all-in-one package. Local cues within highly visual displays of information are intended to be used to assess and navigate spatial relationships [45,46]. Functions that enable rapid, incremental, and reversible actions, with continuous browsing and presentation of results, are pillars of visual information seeking design [1]. Moreover, "how does visualization amplify cognition?" By grouping information together and using positioning intelligently between groups, reductions in search and working memory can be achieved, which is the essence of "using vision to think" [9, see pages 15-17]. Thus, by studying the ocular behavior of information retrieval processes, engineers can optimize their systems. This short review provides a historical background on ophthalmic neurophysiology, eye tracking technology, information retrieval experiments, and experimental considerations for those beginning work in this area.
OPHTHALMIC NEUROPHYSIOLOGY
Millions of years of evolution through physical, chemical, genetic, molecular, biological, and environmental pathways of increasing complexity naturally selected humans for something beautiful and fundamental to our senses and consciousness: visual perception. The knowledge gained since the first comprehensive anatomic descriptions of the neural cell types that constitute the retina in the 19th century, followed by electron microscopy, microelectrode recording techniques, immunostaining, and pharmacology in the 20th century [44], is immature in comparison to the forces of nature.
Now, here we are in the first quarter of the 21st century, and human-machine interaction research scientists are asking the question "how can I leverage an understanding of vision and visual perception in my research and development process?" As research scientists in the information field, we should bear this responsibility with conviction and depth to try and understand every possible angle of the phenomena we seek to observe and record. This section on Ophthalmic Neurophysiology is an elementary introduction to how vision works and should be our prism through which we plan and execute all eye-tracking studies. Figure 1 shows the basic anatomy of the eye. First, light passes through the cornea, which, due to its shape, can bend light to allow for focus. Some of this light enters through the pupil, which has its diameter controlled by the iris. Bright light causes the iris to constrict the pupil, which lets in less light. Low light causes the iris to widen the pupil diameter to let in more light. Then, light passes through the lens, which coordinates with the cornea via muscles of the ciliary body to properly focus the light on the light-sensitive layer of tissue called the retina. Photoreceptors then translate the light input into an electrical signal that travels via the optic nerve to the brain. Figure 2 shows the slightly more complex anatomy of the eye as a cross-section. We will focus on the back of the eye (lower portion of the figure). The fovea is the center of the macula and provides the sharp vision that is characteristic of attention on a particular stimulus in the world, while leaving the peripheral vision somewhat blurred. You may notice the angle of the lens and fovea are slightly off-center. More on this later. The optic nerve is a collection of millions of nerve fibers that relay signals of visual messages that have been projected onto the retina from our environment to the brain. The electrical signals in transit to the brain first have to be spatially distributed across the five different neural cell types shown in figure 3. The photoreceptors (rods and cones) are the first-order neurons in the visual pathway. These receptors synapse (connect and relay) with bipolar and horizontal cells, which function primarily to establish brightness and color contrasts of the visual stimulus. The bipolar cells then synapse with retinal ganglion and amacrine cells, which intensify the contrast that supports vision for structure and shape and is the precursor for movement detection. Finally, the visual information that has been translated and properly organized into an electrical data structure is delivered to the brain via long projections of the retinal ganglion cells called axons. Described thus far is, broadly, the visual pathway from external stimulus to retinal processing. Sensory information must reach the cerebral cortex (outer layer of the brain) to be perceived. We must now consider the visual pathway from retina to cortex as shown in the cross-section of figure 4. The optic nerve fibers intersect contralaterally at the optic chiasm. The axons in this optic tract end with various nuclei (cell bodies). The thalamus is much like a hub containing nerve fiber projections in all directions that exchange information with the cerebral cortex (among many other regulatory functions). Within the midbrain, which is involved in motor movements, there is the superior colliculus, which plays an essential role in coordinating eye and head movements to visual stimuli (among other sensory inputs).
For example, the extraocular muscles are shown in figure 5. Within the thalamus, the lateral geniculate nucleus coordinates visual perception, as shown in figure 6. Lastly, the pretectum controls the pupillary light reflex. Based on the introductory ophthalmic neurophysiology reviewed in this section, human-machine interaction experimenters should consider (at a minimum) certain operating parameters:
• Pupillary response to lighting conditions is sensitive. Control for this by maintaining stable lighting throughout an experiment, as one may not be able to defend that changes in pupil diameter are in fact due to changes in focus/attention on the machine rather than changes in ambient lighting.
• Screen participants for no previous history of ophthalmic disease. If the visual system is impaired at any level, the neurophysiological responses are no longer a reliable dependent variable, as an excitatory, absent, or delayed ophthalmic response may not accurately represent a neurophysiological transition state with respect to machine interaction.
• Many ophthalmic diseases are age-related. For the examination of human-machine interaction in the context of spatial/visual information, recruit study participants that are under the age of 40 to minimize the likelihood of confounding variables.
After reviewing the itemized list above, some may reason that these preliminary screening criteria are too narrow, given that neuroadaptive systems will soon emerge on the technological landscape and that aging populations are increasingly engaging with technology; therefore their neurophysiological responses should be studied in order to make technology inclusive, not exclusive. I happen to agree with this logic. However, as we will review later, many limitations in current measuring devices exist, and some are related to ophthalmic diseases or deficiencies.
EYE-TRACKING TECHNOLOGY
In this section I will explain the history, theory, practice, and standardization of eye-tracking technology. The pioneers of eye-tracking date all the way back to Aristotle as can be seen in the clock-wise chronological arrangement in figure 7.
Although his work did not use the terms fixations and saccades (rapid movements between fixations), it provided a framework for understanding the terms we use today. In the same year, the German physiologist Ewald Hering and the French ophthalmologist Louis Émile Javal described the discontinuous eye movements that occur during reading. Dr. Javal was an ophthalmic laboratory director at the University of Paris (Sorbonne), worked on optical devices and the neurophysiology of reading, and introduced the term saccades, which is of Old French origin (8th to 14th century), from saquer, "to pull", and in modern French translates to "violent pull".
"the eye makes several saccades during the passage over each line, about one for every 15-18 letters of text" [30]. (French to English translation).
About twenty years later, the psychologist Edmund Burke Huey appeared to be the first American to cite Javal's work, describing that the consistent neurophysiological accommodation (referring to the lens of the eye) from having to read laterally across a page increases extraocular muscle fatigue and reduces reading speed [25]. Moreover, Dr. Huey described his motivations for building an experimental eye-tracking device: "the eye moved along the line by little jerks and not with a continuous steady movement. I tried to record these jerks by direct observation, but finally decided that my simple reaction to sight stimuli was not quick enough to keep up... It seemed needful to have an accurate record of these movements; and it seemed impossible to get such record without a direct attachment of recording apparatus to the eye-ball. As I could find no account of this having been done, I arranged an apparatus for the purpose and have so far succeeded in taking 18 tracings of the eye's movements in reading." A drawing of this apparatus is shown in figure 8. Dr. Huey went on to write the famous book on Psychology and Pedagogy of Reading in 1908 [26]. For an excellent historical overview of eye-tracking developments in the study of fixations, saccades, and reading in the 19th and 20th centuries, please see sections 6 and 6.1 in [56]. Additionally, and of particular interest, is the work in the 1960s of the British engineering psychologist B. Shackel, who worked on the inter-relation of man and machine and the optimum design of such equipment for human use; specifically, his early work in measures and viewpoint recording of electro-oculography (electrical potential during eye rotation) for the British Royal Navy on human-guided weapon systems [51,52] (see figs. 9 to 11). The Russian psychologist Alfred L. Yarbus studied the relation between fixations and interest during image studies that used a novel device developed in his laboratory (figure 12). Please see Chapter IV in [58] for a thorough review of his experiments. The pupil and corneal reflection approach that emerged from this era of research is still the fundamental technology for state-of-the-art eye-trackers today, in the year 2020, although the early designs, as they related to form factor, dark room requirements, and restriction of head movement, were sub-optimal for "use in the wild". Unfortunately, as history has shown us, when mission-critical United States military funded research projects fail on deliverables, the research community follows in its abandonment of theory and practice, and thus many years passed before innovations in eye-tracking emerged once again. However, metrics of performance were the overarching contribution of the early pioneers and include, but are not limited to:
• Pupil and iris detection.
• Freedom of head movement.
• Adjustments for human anatomical eye variability.
• Adjustment for uncorrected and corrected human vision.
• Ease of calibration.
• Form factor and cost.
Let's discuss the Pupil Center Corneal Reflection (PCCR) method in more detail. Near-infrared illumination creates reflection patterns on the cornea and lens called Purkinje images [17] (see figure 13), which can be captured by image sensors, and the resulting vectors describing eye gaze and direction can be calculated in real time. This information can be used to analyze the behavior and consciousness of a subject [14]. Lastly, the geometric characteristics of a subject's eyes must be estimated to reliably calculate the eye-gaze point (see figure 15). Therefore, a calibration procedure involves bright/dark pupil adjustments for lighting conditions, the light refraction/reflection properties of the cornea, lens, and fovea, and an anatomical 3D eye model to estimate the foveal location responsible for the visual field (focus, full color).
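To make the mapping step of the PCCR pipeline concrete, the sketch below shows one simple way a pupil-to-glint vector could be mapped to on-screen gaze coordinates: a second-order polynomial regression fitted on a small calibration grid. This is only a minimal illustration under stated assumptions; the function names, the nine-point grid, and all numbers are hypothetical, and commercial trackers instead rely on full 3D eye models and proprietary calibration routines.

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial features of the pupil-to-glint vector (vx, vy)."""
    vx, vy = v
    return np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])

def fit_calibration(vectors, screen_points):
    """Least-squares fit mapping pupil-glint vectors to screen coordinates.

    vectors: (vx, vy) measured while the user fixates known targets.
    screen_points: (x, y) target positions on screen, in pixels.
    """
    A = np.array([poly_features(v) for v in vectors])
    B = np.array(screen_points)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def estimate_gaze(coeffs, vector):
    return poly_features(vector) @ coeffs

# Example with made-up calibration data for a nine-point grid.
cal_vectors = [(-0.2, -0.1), (0.0, -0.1), (0.2, -0.1),
               (-0.2,  0.0), (0.0,  0.0), (0.2,  0.0),
               (-0.2,  0.1), (0.0,  0.1), (0.2,  0.1)]
cal_targets = [(160, 120), (960, 120), (1760, 120),
               (160, 540), (960, 540), (1760, 540),
               (160, 960), (960, 960), (1760, 960)]
coeffs = fit_calibration(cal_vectors, cal_targets)
print(estimate_gaze(coeffs, (0.1, 0.05)))  # estimated on-screen gaze point
```

A nine-point grid is a common calibration layout, but the polynomial order and the number of targets are design choices that trade accuracy against calibration time.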
EYE-TRACKING IN SEARCH AND RETRIEVAL
In 2003, the first study on eye-tracking and information retrieval (IR) from search engines was conducted [49]. The authors of the study wanted to understand if it was possible to infer relevance from eye movement signals. In 2004, Granka et al. [20] investigated how users interact with search engine result pages (SERPs) in order to improve interface design processes and implicit feedback of the engine, while Klöckner et al. [31] asked the more basic question of search list order and eye movement behavior to understand depth-first or breadth-first retrieval strategies. In 2005, similar to the previous study, Aula et al. [3] wanted to classify search result evaluation style in addition to depth-first or breadth-first strategies. The research revealed that users can be categorized as economic or exhaustive, in that the eye-gaze of experienced users is fast and decisions are made with less information (economic). Eye-tracking research has also been conducted during the utilization of facets for filtering and refining a non-transactional search strategy [8].
In 2010, Balatsoukas and Ruthven [4] argued that "there are no studies exploring the relationship between relevance criteria use and human eye movements (e.g. number of fixations, fixation length, and scan-paths)". I believe there was some truth to this statement, as the only research close to their work was that of inferring relevance, at the macro-level, from eye-tracking [49]. Their work uncovered that topicality explained much of the fixation data. Dinet et al. [11] studied visual strategies of young people from grades 5 to 11 on how they explored the search engine results page and how these strategies were affected by typographical cuing such as font alterations, while Dumais et al. [12] examined individual differences in gaze behavior for all elements on the results page (e.g. results, ads, related searches).
In 2012, Balatsoukas and Ruthven extended their previous work on the relationship between relevance criteria and eye-movements to include cognitive and behavioral approaches with grades of relevance (e.g. relevant, partial, not) and the relationship to length of eye-fixations [5], while Marcos et al. [36] studied patterns of successful vs. unsuccessful information seeking behaviors; specifically, how, why, and when users behave differently with respect to query formulation, result page activity, and query re-formulation. In 2013, Maqbali et al. studied eye-tracking behavior with respect to textual and visual search interfaces as well as the issue of data quality (e.g. noise reduction, device calibration) at a time when the existing software did not support such features [2].
In 2014, Gossen et al. studied the differences in perception of search results and interface elements between late-elementary school children and adults, with the goal of developing methodologies to build search engines for engaging and educating young children, based on previous evidence that search behavior varies widely between children and adults [19]. Gwizdka examined the relationship between the degree of relevance assigned to a retrieval result by a user, the cognitive effort committed to reading the documented result, and inferring the relationship with eye-movement patterns [21], while Hofmann et al. examined interaction and eye-movement behavior of users with query auto completion rankings (also referred to as query suggestions or dynamic queries) [24].
In 2015, Eickhoff et al. argued that query suggestion approaches were "attention oblivious" in that without mapping mouse cursor movement at the term-level of search engine result pages, eye-tracking signals, and query reformulations, efforts of user modeling were limited in their value, based solely on previous, popular, or related searches, and not entirely obvious that such suggestions were relevant for users with non-transactional information needs [13]. ...in order to contextualize experimental responses [38]. Prior to their position, experimental concerns were focused on data quality (e.g. noise reduction) and device calibration, not human response calibration.
In 2017, Gwizdka et al. revisited previous work on inferring relevance judgements for news stories albeit with a higher resolution eye-tracking device and the addition of more complex neurophysiological approaches such as electroencephalography (EEG) to identify relevance judgement correlates between eye-movement patterns and electrical activity in the brain [22] while Low et al. applied eye-tracking, pupillometry, and EEG to model user search behavior within a multimedia environment (e.g. an image library) in order to operationalize the development of an assistive technology that can guide a user throughout the search process based on their predicted attention, and latent intention [34].
In 2019, the first neuroadaptive implicit relevance feedback information retrieval system was built and evaluated by Jacucci et al. [29]. The authors demonstrated how to model search intent with eye and brain-computer interfaces for improved relevance predictions while Wu et al. examined eye-gaze in combination with electrodermal activity (EDA), which measures neurally mediated effects on sweat gland permeability, while users examined search engine result pages to predict subjective search satisfaction [57]. In 2020, Bhattacharya et al. re-examined relevance prediction for neuroadaptive IR systems with respect to scanpath image classification and reported up to 80% accuracy in their model [6].
EYE-TRACKING IN AWARE AND ADAPTIVE USER INTERFACES
In this section, I will review only those works that satisfy the criteria of a system (machine) that utilizes implicit signals from an eye-tracker to carry out functions and interact or collaborate with a human.
iDict was an eye-aware application that monitored gaze path (saccades) while users read text in foreign languages.
When difficulties were observed by analyzing the discontinuous eye movements, the machine would assist with the translation [27]. Later, an affordable "Gaze Contingent Display" that was operating-system and hardware-integration agnostic was developed for the first time. Such a display was capable of rendering images via the gaze point and thus had applications in gaze contingent image analysis and multi-modal displays that provide "focus+context", as can be found with volumetric medical imaging [41].
Children with autism spectrum disorder have difficulties with social attention. Particularly, they do not focus on the eyes or faces of those communicating with them. It is thought that forms of training may offer benefit. An amusement ride machine was engineered and outfitted with various sensors and an eye-tracker. The ride was an experiment that would elicit various responses from the child and require visual engagement of a screen that would then reward with auditory and vestibular experiences, and thus functioned as a gaze contingent environment for socially training the child on the issue of attention [48].
Fluid and pleasant human communication requires visual and auditory cues that are respected by two or more people.
For example, as I am speaking to someone and engaged in eye contact, perhaps I will look away for a moment or fade my tone of voice and pause. These are social cues that are then acted upon by another person, who then engages me with their thought. This level of appropriateness is not embedded in devices, although the concept of "Attentive User Interfaces" that utilize eye-tracking to become more conscious about when to interrupt a human or group of humans has been studied [53]. Utilizing our visual system as a point-and-selection device for machine interactions, instead of a computer mouse or touch screen, would seem like a natural progression in the evolution of interaction. There are two avenues of engineering along this thread. The first simply requires a machine to interact with, an accurate eye-tracking device, and thresholds for gaze fixation in order to select items presented by the machine. The second requires that we study the behaviors of interaction (eye, peripheral components) and their correlates in order to build a model of what the typical human eye does precisely before and after selections are made.
With this information we may then be able to have semi-conscious machines that understand when we would like to select something or navigate through an environment. A machine of the first kind was in-fact built and experimented on for image search and retrieval [42], whereby a threshold of 80 millisecond gaze fixation was used as the selection device.
The experiment asked that users identify the target image within a library of images that were presented in groups. All similarity calculations were stored as metadata prior to the experiment. The user would have to iteratively gaze at related images for at least 80 milliseconds for the group of images to filter and narrow with a change of results. The results indicated that the speed of gaze contingent image search was faster than an automated random selection algorithm.
However, the gaze contingent display was not experimented against a traditional interaction like the computer mouse.
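The dwell-threshold selection mechanism described above (80 ms in [42], and 100 ms in the GazeNoter system discussed next) reduces to a simple timer over streaming gaze samples. The sketch below is a minimal, hypothetical illustration of that idea; the class name, target layout, and sample rate are my own illustrative assumptions, and the 80 ms default merely mirrors the threshold reported in [42].

```python
from dataclasses import dataclass

@dataclass
class DwellSelector:
    """Selects an on-screen target once gaze has rested on it long enough.

    targets: name -> (x, y, width, height) in pixels (illustrative layout).
    dwell_ms: dwell threshold; 80 ms follows the study cited above.
    """
    targets: dict
    dwell_ms: float = 80.0
    _current: str = None
    _elapsed: float = 0.0

    def _hit(self, gx, gy):
        for name, (x, y, w, h) in self.targets.items():
            if x <= gx <= x + w and y <= gy <= y + h:
                return name
        return None

    def update(self, gx, gy, dt_ms):
        """Feed one gaze sample; returns a target name when a selection fires."""
        hit = self._hit(gx, gy)
        if hit != self._current:
            self._current, self._elapsed = hit, 0.0
            return None
        if hit is None:
            return None
        self._elapsed += dt_ms
        if self._elapsed >= self.dwell_ms:
            self._elapsed = 0.0
            return hit
        return None

# At a 120 Hz tracker, dt_ms is roughly 8.33 per sample.
selector = DwellSelector(targets={"image_42": (100, 100, 200, 200)})
for _ in range(12):
    event = selector.update(150, 150, dt_ms=8.33)
    if event:
        print("selected:", event)
```

In practice such thresholds must also be tuned against the "Midas touch" problem, since everything the user merely glances at becomes a candidate selection.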
Later, a similar system was built and an experiment was conducted using Google image search [15]. The authors in [40] also presented a similar gaze-threshold (100 ms) based system called GazeNoter. The gaze-adaptive note-taking system was built and tested for online PowerPoint slide presentations. Essentially, by analyzing a user's gaze, video playback speed would adjust and notes would be recommended for particular areas of interest (e.g. bullet points, images, etc.).
The prototype was framed around the idea that video lectures require the user to obtrusively pause the video, lose focus, write a note, then continue. In-fact, the experiments reported show that users generated more efficient notes and preferred the gaze adaptive system in comparison to a baseline system that had no adaptive features.
In [43], the authors note that implementation of eye-tracking in humanoid robots has been done before. However, no experiment had been conducted on the benefits for human-robot collaboration. The eye-tracker was built into the humanoid robot "iCub" as opposed to being an externally visible interface. This engineering design enabled a single-blind experiment where the subjects had no knowledge of any infrared cameras illuminating the cornea and pupil or of the involvement of eye-tracking in the experiment. The robot and human sat across from each other at a table. The humans were not asked to interact with the robot in any particular way (voice, pointing, gaze, etc.) but were asked to communicate with the robot in order to receive a specific order of toy blocks to build a structure. The robot was specifically programmed in this experiment to only react to eye gaze, which it did successfully in under 30 seconds across subjects.
Cartographers encode geographic information on small-scale maps that represent all the topological features of our planet. This information is decoded with legends that enable the map user to understand what and where they are looking at. Digital maps have become adaptive to user click behavior and therefore the legends reflect the real-time interaction. Google Earth is an excellent example of this. New evidence indicates that gaze-based adaptive legends are just as useful as, and perhaps more useful than, traditional legends [18]. This experiment included two versions of a digital map (e.g. static legend, gaze-based adaptive legend). Although participants in the study performed similarly for time-on-task, they preferred the adaptive legend, indicating its perceived usefulness.
Technology
The standardization of eye-tracking technology is not without limitation. A number of advancements in the fundamental technology of PCCR-based eye-trackers are still required. For example, the image processing algorithms have difficulty in a number of scenarios involving the pupil center corneal reflection method:
• Reflections from eye-glasses and contact lenses worn by the subject can cause image processing artifacts.
• Eye-lashes that occlude the perimeter of the pupil cause problems for time-series pupil diameter calculations.
• Large pupils reflect more light than small pupils. The wide dynamic range in reflection can be an issue for image processors.
• The eye blink reflex has a complex neural circuit involving the oculomotor nerve (cranial nerve III), the trigeminal nerve (cranial nerve V), and the facial nerve (cranial nerve VII). When a pathology in this reflex is present, the subject does not blink during an experimental task; dry and congealed corneas are the result, which makes corneal reflection difficult for the image processor.
• High-speed photography by the image capture modality is required as saccadic eye movements have high velocity, and head movements may at times be also high in velocity causing blurred images of the corneal reflection.
• Squinting causes pupil center and corneal reflection distortion during image processing.
• The trade-off between PCCR accuracy and freedom of head movement may be overcome by robotic cameras that "eye follow" although this is not available in most affordable eye-trackers.
Additionally, sampling frequencies should be thoughtfully understood in order to design an experiment that potentially answers a question or set of questions (see figure 16). Essentially, at the highest frequency (1200 Hz), 1200 data points are recorded for each second of eye movement, i.e. approximately one sample every 0.83 milliseconds (sub-millisecond). At the lowest end of the frequency spectrum (60 Hz), 60 data points are recorded for each second of eye movement, i.e. approximately one sample every 16.67 milliseconds. These sampling frequencies are important to understand because certain eye phenomena can only be observed at certain frequencies. For example, low-noise saccades are observed at frequencies greater than 120 Hz, which are sampled every 8.33 milliseconds, while low-noise microsaccades are observed at frequencies greater than 600 Hz, which are sampled every 1.67 milliseconds (see the Tobii Pro note on eye-tracker sampling frequency: https://www.tobiipro.com/learn-and-support/learn/eye-tracking-essentials/eye-tracker-sampling-frequency/). Higher sampling frequencies will provide higher sample sizes and levels of certainty over the same unit of time. In terms of stratifying a data stream accurately and building user models for adaptive feedback within a system, a high sampling frequency is a prerequisite and provides more granularity for fixations, fixation duration, pupil dilation, saccades, saccade velocity, microsaccades, and spontaneous blink rate.
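The sampling arithmetic above, and the way fixation/saccade classification depends on it, can be illustrated with a short sketch: the inter-sample interval is simply 1000 divided by the sampling frequency (in milliseconds), and a basic velocity-threshold (I-VT-style) classifier labels each sample by comparing its angular speed against a threshold. The synthetic gaze trace and the 30 deg/s threshold below are illustrative defaults, not values taken from any particular tracker or study.

```python
import numpy as np

def sample_interval_ms(frequency_hz):
    """Inter-sample interval: 1200 Hz -> ~0.83 ms, 60 Hz -> ~16.67 ms."""
    return 1000.0 / frequency_hz

def classify_ivt(x_deg, y_deg, frequency_hz, velocity_threshold=30.0):
    """Velocity-threshold (I-VT-style) classification of gaze samples.

    x_deg, y_deg: gaze position in degrees of visual angle.
    velocity_threshold: deg/s above which a sample is labelled 'saccade'
    (30 deg/s is a commonly cited default, used here only for illustration).
    """
    dt = sample_interval_ms(frequency_hz) / 1000.0
    vx, vy = np.gradient(x_deg, dt), np.gradient(y_deg, dt)
    speed = np.hypot(vx, vy)
    return np.where(speed > velocity_threshold, "saccade", "fixation")

# Synthetic stream at 120 Hz: a fixation, a rapid shift, another fixation.
freq = 120
x = np.concatenate([np.full(24, 1.0), np.linspace(1.0, 9.0, 6), np.full(24, 9.0)])
y = np.zeros_like(x)
labels = classify_ivt(x, y, freq)
print(f"{sample_interval_ms(freq):.2f} ms per sample;",
      f"{np.sum(labels == 'saccade')} samples labelled saccade")
```

At 60 Hz the same rapid shift would span only a few samples, which is one reason low-noise saccade and microsaccade measures require the higher sampling rates quoted above.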
Psychophysiology
With this data, we can begin to ask questions related to moment-by-moment actions and their relationship to neurophysiology. For example, it is not possible to move your eyes (voluntarily or involuntarily) without making a corresponding shift in focus/attention and disruption to working memory. This is especially true in spatial environments [54,55].
Perhaps, by modeling a user's typical pattern of eye movement over time, a system can adapt and learn when to politely re-focus the user and/or more accurately model the eye-as-an-input.
Moreover, the eyes generally fixate on objects of thought, although this may not always be the case in scenarios where we are looking at nothing but retrieving representations in the background [16]. Think of a moment where you gestured with your hand at a particular area of a room or place that someone you spoke to earlier was in. Therefore, in the context of a human-machine interaction, how would the machine learn to understand the difference in order to execute system commands, navigate menus, or remain observant for the next cue? For information systems, at least, this is the argument for supplementary data collection from peripheral components, which allow for investigation and potential discovery of correlates that the machine can be trained on to understand the difference. However, an accepted theory of visual perception is that it is the result of both feedforward and feedback connections, where the initial feedforward stimulation generates interpretation(s) that are fed backward for further processing and confirmation, known as a reentrant loop. Experiments have demonstrated varying cycle times for reentrant loops when subjects are presented with information in advance (a specific task) for sequential image processing and detecting a target.
Detection performance increased as the duration of an image being presented increased from 13-80 milliseconds [47].
Another limitation with this interaction is the manipulation device (the computer mouse), as the literature has suggested that average mouse pointing time for web search appears to range from 600-1000 milliseconds [39], while pupil dilation can have latencies of only 200 milliseconds. This suggests that visual perception during information seeking tasks is significantly faster than the ability to act on it with our motor movements; it is therefore likely that the eye-as-an-input device is more efficient, and a significant delay appears to exist between the moment a user decides upon a selection item and when the selection item is actuated. On this particular issue, experimental protocols should outline a specific manner in which to understand or operationalize this gap.
Even when a user is focused and attentive, their comprehension may still lack that of an expert. How would an adaptive system learn about a user to the extent that, although attentive, their comprehension is not optimal, and perhaps recommend material to build a foundation and then return later? Most scientists in the field would likely argue that this is the purpose of objective questioning as an assessment. However, these assessments cannot distinguish correctly guessed answers, or misunderstanding in the wording of a question leading to an incorrect answer. Additionally, fewer fixations and longer saccades may be indicative of proficient comprehension and have been shown to be predictive of higher percentage scores on objective assessments [50].
CONCLUSION
In this short review I have discussed ophthalmic neurophysiology and the experimental considerations that must be made to reduce possible noise in an eye-tracking data stream. I have also reviewed the history, experiments, technological benefits, and limitations of eye-tracking studies within the information retrieval field. The concepts of aware and adaptive user interfaces were also explored that humbly motivated my investigations and synthesis of previous work from the fields of industrial engineering, psychophysiology, and information retrieval.
As I stated at the beginning of this review, on the nature of learning I consistently think about my son. Learning from his environment is the foundation of his existence. His interaction with ambient information reinforces or discourages certain behaviors. Throughout this writing I attempted to express these ideas within the context of human-information-machine interaction. More precisely, I attempted to express the need for establishing a foundation that measures the decision-making process with lower latency, but also with the ability to be operationalized non-intrusively and as an input device. Achieving such a goal requires a window to the mammalian brain that is achievable only with eye-tracking, which I firmly believe to be the future of ocular navigation for information retrieval.
2019-07-03T13:06:12.524Z | 2019-07-03T00:00:00.000Z | 195769029 | s2orc/train | Ruthenium Complexes With Piplartine Cause Apoptosis Through MAPK Signaling by a p53-Dependent Pathway in Human Colon Carcinoma Cells and Inhibit Tumor Development in a Xenograft Model
Ruthenium complexes with piplartine, [Ru(piplartine)(dppf)(bipy)](PF6)2 (1) and [Ru(piplartine)(dppb)(bipy)](PF6)2 (2) (dppf = 1,1-bis(diphenylphosphino) ferrocene; dppb = 1,4-bis(diphenylphosphino)butane and bipy = 2,2′-bipyridine), were recently synthesized and displayed more potent cytotoxicity than piplartine in different cancer cells, regulated RNA transcripts of several apoptosis-related genes, and induced reactive oxygen species (ROS)-mediated apoptosis in human colon carcinoma HCT116 cells. The present work aimed to explore the underlying mechanisms through which these ruthenium complexes induce cell death in HCT116 cells in vitro, as well as their in vivo action in a xenograft model. Both complexes significantly increased the percentage of apoptotic HCT116 cells, and co-treatment with inhibitors of JNK/SAPK, p38 MAPK, and MEK, which inhibits the activation of ERK1/2, significantly reduced the apoptosis rate induced by these complexes. Moreover, significant increase in phospho-JNK2 (T183/Y185), phospho-p38α (T180/Y182), and phospho-ERK1 (T202/Y204) expressions were observed in cells treated with these complexes, indicating MAPK-mediated apoptosis. In addition, co-treatment with a p53 inhibitor (cyclic pifithrin-α) and the ruthenium complexes significantly reduced the apoptosis rate in HCT116 cells, and increased phospho-p53 (S15) and phospho-histone H2AX (S139) expressions, indicating induction of DNA damage and p53-dependent apoptosis. Both complexes also reduced HCT116 cell growth in a xenograft model. Tumor mass inhibition rates were 35.06, 29.71, and 32.03% for the complex 1 (15 μmol/kg/day), complex 2 (15 μmol/kg/day), and piplartine (60 μmol/kg/day), respectively. These data indicate these ruthenium complexes as new anti-colon cancer drugs candidates.
INTRODUCTION
Colorectal cancer (CRC) is a lethal disease that ranks third in incidence and second in mortality. In 2018, 1.8 million new CRC cases and 881,000 deaths were estimated to occur worldwide (1). Currently, cytotoxic chemotherapy regimens such as FOLFOX (leucovorin, 5-fluorouracil, and oxaliplatin), FOLFIRI (leucovorin, 5-fluorouracil, and irinotecan), and FOLFOXIRI (leucovorin, 5-fluorouracil, oxaliplatin, and irinotecan) are the standards most often used (2). However, CRC mortality remains high and new treatment strategies are urgently needed.
Cells
Human colon carcinoma HCT116 cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured as recommended by ATCC and a mycoplasma stain kit (Sigma-Aldrich) was used to validate the use of cells free from contamination. Cell viability in all experiments was examined using the trypan blue exclusion assay. Over 90% of the cells were viable at the beginning of the culture.
Apoptosis Quantification Assay
FITC Annexin V Apoptosis Detection Kit I (BD Biosciences) was used for apoptosis quantification, and the analysis was performed according to the manufacturer's instructions. Cell fluorescence was determined by flow cytometry. At least 10^4 events were recorded per sample using a BD LSRFortessa cytometer, BD FACSDiva Software (BD Biosciences) and FlowJo Software 10 (FlowJo Lcc; Ashland, OR, USA). Cellular debris was omitted from the analysis. Percentages of viable, early apoptotic, late apoptotic and necrotic cells were determined. Protection assays using a JNK/SAPK inhibitor (SP600125; Cayman Chemical), p38 MAPK inhibitor (PD169316; Cayman Chemical), MEK (mitogen-activated protein kinase kinase) inhibitor (U0126; Cayman Chemical), and p53 inhibitor (cyclic pifithrin-α; Cayman Chemical) were performed. In these assays, cells were pre-incubated for 2 h with 5 µM SP600125, 5 µM PD169316, 5 µM U0126, or 10 µM cyclic pifithrin-α, followed by incubation with the complexes at the previously established concentrations (2.5 µM for complex 1 and 5 µM for complex 2). For protection assays with the p53 inhibitor, cells were pretreated for 2 h with 10 µM cyclic pifithrin-α and then incubated with the ruthenium complexes at the established concentrations (2.5 µM for complex 1 and 5 µM for complex 2) for 48 h. Negative control was treated with the vehicle (0.1% of a solution containing 70% sorbitol, 25% tween 80 and 5% water) used for diluting the complexes tested. Doxorubicin (1 µM) and piplartine (10 µM) were used as positive controls. Data are presented as mean ± S.E.M. of three independent experiments performed in duplicate. Ten thousand events were evaluated per experiment, and cellular debris was omitted from the analysis. *P < 0.05 compared with negative control by ANOVA, followed by Student-Newman-Keuls test. #P < 0.05 compared with the respective treatment without inhibitor by ANOVA, followed by Student-Newman-Keuls test.
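The viable, early apoptotic, late apoptotic, and necrotic percentages reported from annexin V-FITC/PI double staining come from a standard quadrant classification of per-event fluorescence intensities. The sketch below illustrates only that classification step, with entirely hypothetical data and cutoffs; the actual analysis was performed in BD FACSDiva/FlowJo with gates set from controls, as described above.

```python
import numpy as np

def quadrant_percentages(annexin, pi, annexin_cutoff, pi_cutoff):
    """Classify flow-cytometry events by annexin V-FITC / PI positivity.

    Quadrants: viable (A-/PI-), early apoptotic (A+/PI-),
    late apoptotic (A+/PI+), necrotic (A-/PI+).
    In a real analysis the cutoffs come from unstained and single-stained
    control tubes, not fixed numbers.
    """
    annexin, pi = np.asarray(annexin), np.asarray(pi)
    a_pos, p_pos = annexin > annexin_cutoff, pi > pi_cutoff
    n = len(annexin)
    return {
        "viable":          100.0 * np.sum(~a_pos & ~p_pos) / n,
        "early_apoptotic": 100.0 * np.sum(a_pos & ~p_pos) / n,
        "late_apoptotic":  100.0 * np.sum(a_pos & p_pos) / n,
        "necrotic":        100.0 * np.sum(~a_pos & p_pos) / n,
    }

# Hypothetical fluorescence intensities for 10,000 events.
rng = np.random.default_rng(0)
annexin = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)
pi = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)
print(quadrant_percentages(annexin, pi, annexin_cutoff=50.0, pi_cutoff=30.0))
```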
Human Colon Carcinoma Xenograft Model
HCT116 cells (10 7 cells per 500 µL) were implanted subcutaneously into the left front armpit of the mice. At the beginning of the experiment, mice were randomly divided into four groups: group 1 animals received injections of vehicle with 5% of a solution containing 70% sorbitol, 25% tween 80 and 5% water (n = 10); group 2 animals received injections of piplartine (60 µmol/kg, n = 10); group 3 animals received injections of the complex 1 at 15 µmol/kg (n = 10); and group 4 animals received injections of the complex 2 at 15 µmol/kg (n = 11). Treatments were initiated 1 day after the cancer cell injection. The animals were treated intraperitoneally (200 µL per animal) once a day for 15 consecutive days. One day after the end of the treatment, the animals were anesthetized, and peripheral blood samples were collected from the brachial artery. Animals were euthanized by anesthetic overdose, and tumors were excised and weighed.
Toxicological Aspects
Mice were weighed at the beginning and at the end of the experiment. All animals were observed for toxicity signs throughout the whole study. Hematological analysis was performed by light microscopy in blood samples. Livers, kidneys, lungs, and hearts were removed, weighed and examined for any signs of gross lesions, color changes, and/or hemorrhages. After gross macroscopic examination, the tumors, livers, kidneys, lungs, and hearts were fixed in 4% formalin buffer and embedded in paraffin. Tissue sections were stained with hematoxylin/eosin staining, and a pathologist performed the histological analyses under optical microscopy.
Statistical Analysis
Data are presented as mean ± S.E.M. Differences between experimental groups were compared using analysis of variance (ANOVA) followed by Student-Newman-Keuls test (p < 0.05). All statistical analyses were performed using GraphPad Prism (Intuitive Software for Science, San Diego, CA, USA). [Figure legend: The asterisks represent areas with tumor necrosis. The treatments were initiated 1 day after the cancer cell injection. Animals were treated intraperitoneally once a day for 15 consecutive days. Negative control (CTL) was treated with the vehicle (5% of a solution containing 70% sorbitol, 25% tween 80 and 5% water) used for diluting the complexes. Piplartine (PL, 60 µmol/kg) was used as positive control.]
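The comparisons described above (one-way ANOVA followed by the Student-Newman-Keuls post hoc test at p < 0.05, run in GraphPad Prism) can be outlined with open-source tools as well. The sketch below uses SciPy's one-way ANOVA and statsmodels' Tukey HSD post hoc, the latter only because Student-Newman-Keuls is not part of the standard SciPy/statsmodels toolkit; the group values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder values standing in for the experimental groups.
groups = {
    "vehicle":    [1.30, 1.45, 1.52, 1.28, 1.35],
    "complex_1":  [0.85, 0.92, 0.88, 0.95, 0.86],
    "complex_2":  [0.99, 0.95, 1.02, 0.93, 0.97],
    "piplartine": [0.90, 0.96, 0.92, 0.98, 0.94],
}

# One-way ANOVA across all groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Post hoc pairwise comparisons. The paper used Student-Newman-Keuls; Tukey's
# HSD is shown here only because it is what statsmodels provides out of the box.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```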
Ruthenium Complexes With Piplartine Cause Apoptosis Through MAPK Signaling by a p53-Dependent Pathway in HCT116 Cells
Apoptotic cell death was quantified by annexin-V/PI double staining using flow cytometry in HCT116 cells after treatment with the ruthenium complexes with piplartine at the established concentrations (2.5 µM for complex 1 and 5 µM for complex 2) after 48 h of incubation. Additionally, since mitogen-activated protein kinase (MAPK) signaling plays an essential role in apoptotic cell death, the roles of the three main MAPK families, Jun N-terminal kinase/stress-activated protein kinase (JNK/SAPK), p38 MAPK, and extracellular signal-regulated kinase (ERK), were investigated. Then, we measured complex-induced apoptosis in HCT116 cells co-treated with a JNK/SAPK inhibitor (SP600125), a p38 MAPK inhibitor (PD169316), and a MEK inhibitor (U-0126, which inhibits the activation of ERK1/2). Both complexes significantly increased the percentage of apoptotic cells, and co-treatment with the JNK/SAPK, p38 MAPK, and MEK inhibitors significantly reduced complex-induced apoptosis in HCT116 cells (Figure 2).
Ruthenium Complexes With Piplartine Reduce HCT116 Cell Growth in a Xenograft Model
Concerning the in vivo action of the ruthenium complexes with piplartine, the anti-colon cancer effect was evaluated in C.B-17 SCID mice engrafted with HCT116 cells; animals were treated by intraperitoneal injections for 15 consecutive days with complex 1 at a dose of 15 µmol/kg/day, complex 2 at a dose of 15 µmol/kg/day, or piplartine at a dose of 60 µmol/kg/day. Both complexes significantly inhibited HCT116 cell growth in the xenograft model (Figures 6A, B). On the 16th day, the mean tumor mass of the negative control group was 1.38 ± 0.15 g. In animals treated with complex 1, the mean tumor mass was 0.89 ± 0.06 g, while it was 0.97 ± 0.05 g in animals treated with complex 2. Piplartine-treated animals showed a mean tumor mass of 0.94 ± 0.05 g. Tumor mass inhibition rates were 35.06, 29.71, and 32.03% for complex 1 (15 µmol/kg/day), complex 2 (15 µmol/kg/day), and piplartine (60 µmol/kg/day), respectively. In the histological analysis, all groups exhibited a predominantly poorly differentiated adenocarcinoma with a solid growth pattern, with extensive areas of tumor necrosis in the groups treated with piplartine and the ruthenium complexes (Figure 6C).
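The inhibition rates quoted above follow directly from comparing each treated group's mean tumor mass with that of the vehicle control, as the short calculation below shows (values are taken from this paragraph; small differences from the reported percentages are expected if the authors used unrounded means).

```python
control_mean = 1.38  # g, mean tumor mass of the vehicle-treated group
treated_means = {"complex_1": 0.89, "complex_2": 0.97, "piplartine": 0.94}

for name, mean in treated_means.items():
    inhibition = (1.0 - mean / control_mean) * 100.0
    print(f"{name}: {inhibition:.1f}% tumor mass inhibition")
```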
With regard to the toxicological aspects, body and organ (liver, kidney, lung, and heart) weights and hematological parameters were assessed in all mice after the end of treatment. No significant alterations were observed in body weight or in liver, kidney, lung, or heart wet weight in any group (P > 0.05) (data not shown). In addition, all hematological parameters analyzed in mice treated with the ruthenium complexes were similar to those of naïve controls (P > 0.05) (data not shown).
Morphological analyses of livers, kidneys, lungs, and hearts were performed in all groups. Histopathological analyses of the lungs revealed significant inflammation, predominantly of mononuclear cells, edema, congestion and hemorrhage, ranging from mild to severe. It is important to note that these histopathological alterations were more pronounced in the negative control, piplartine and complex 2 groups than in the complex 1 group. The architecture of the parenchyma was partially maintained in all groups, with a thickening of the alveolar septum and decreased airspace observed, ranging from mild to moderate. In addition, a tumor nodule and a lung embolus were each observed in only one animal, in the negative control and complex 2 groups, respectively. In the livers, the acinar architecture and centrilobular vein were also preserved in all groups. Focal areas of inflammation and coagulation necrosis were observed in the negative control, complex 1 and complex 2 groups. Other findings, such as congestion and hydropic degeneration, were found in all groups, ranging from mild to moderate. In the kidneys, tissue architecture was preserved in all experimental groups. Histopathological changes, including vascular congestion and thickening of the basal membrane of the renal glomerulus with decreased urinary space, were observed in all kidneys, ranging from mild to moderate. Histopathological analysis of the animal hearts did not show alterations in any group.
DISCUSSION
Two ruthenium complexes with piplartine recently designed and synthesized were shown to be potential anticancer agents that target oxidative stress and cause cell death in cancer cells with higher potency than metal-free piplartine (31). Herein, we demonstrated the intracellular processes modulated by these ruthenium complexes, involving MAPK (JNK, p38 MAPK, and ERK1/2) signaling by a p53-dependent pathway, that trigger apoptosis in HCT116 cells. More importantly, we showed here that these two ruthenium complexes inhibit tumor development in a xenograft mouse model more potently than piplartine.
All three JNK/SAPK (JNK-1, JNK-2, and JNK-3) can trigger the apoptotic pathway by stimulating expression of pro-apoptotic genes through activation of specific transcription factors, including c-Jun, p53, and p73 (32). In this work, co-treatment with a JNK1-3 inhibitor (SP 600125) reduced ruthenium complexes-induced apoptosis in HCT116 cells, which was confirmed by quantification of levels of phosphorylation of JNK2 (T183/Y185), indicating JNK-mediated apoptosis. In fact, D'Sousa Costa et al. (31) found that these complexes up-regulated MAPK-related genes in HCT116 cells. Piplartine has been previously shown to inhibit cell proliferation and cause apoptosis in human melanoma cells via ROS and JNKs pathways (33). Moreover, piplartine increased the phosphorylation of p38 and JNK in bone marrow mononuclear cells from patients with myeloid leukemias, and co-treatment with specific p38 or JNK inhibitors partially reversed piplartine-induced processes, such as ROS production and apoptotic/autophagic signaling activation (34). Piplartine also induced ROS accumulation, leading to cholangiocarcinoma cell apoptosis via activation of JNK/ERK pathway (20). Interestingly, a ruthenium methylimidazole complex caused ROS accumulation and ROS-mediated DNA damage by MAPK (JNK and p38 MAPK) and AKT signaling pathways in lung carcinoma A549 cells (35). Altogether, these findings support the role of JNK pathway in proapoptotic mechanism triggered by ruthenium complexes with piplartine.
Four p38 MAPK isoforms have been identified: p38α, p38β, p38γ, and p38δ, with p38α and p38β being the most studied. This MAPK signaling is activated in response to UV damage, oxidative stress, exposure to DNA-damaging agents, growth factors, and cytokines. Its activation modulates a wide variety of cellular functions, such as protein kinases, phosphatases, cell-cycle regulators, and transcription factors, including p53 (36). In order to evaluate whether the p38 MAPK pathway is involved in ruthenium complexes-induced apoptosis in HCT116 cells, we co-treated the cells with a p38 MAPK inhibitor (PD 169316) and the ruthenium complexes. Co-treatment with the p38 MAPK inhibitor reduced the complexes-induced apoptosis in HCT116 cells, which was also confirmed by quantification of the levels of phosphorylation of p38α (T180/Y182), indicating p38 MAPK-mediated apoptosis. Wang et al. (37) reported that piplartine induces apoptosis and autophagy in leukemic cells through targeting the PI3K/Akt/mTOR and p38 signaling pathways. Moreover, a platinum complex with piplartine caused ROS-mediated apoptosis via the ERK1/2/p38 pathway in human acute promyelocytic leukemia HL-60 cells (17). ERK1/2 MAPK signaling belongs to the RAS-regulated RAF-MEK-ERK signaling pathway, and activation of ERK1/2 leads to phosphorylation of more than 200 substrates (38). Therefore, the consequences of ERK1/2 activation are diverse and include some apparently contradictory biological responses, such as cell-cycle progression or cell-cycle arrest and cell survival or cell death. The type of cellular response is determined by several factors, such as the duration and intensity of ERK1/2 activation, co-activation of other pathways, and the subcellular distribution of ERK1/2 (38). Herein, co-treatment with a MEK inhibitor (U-0126, which inhibits the activation of ERK1/2) reduced ruthenium complexes-induced apoptosis in HCT116 cells, and an increased phosphorylation of ERK1 (T202/Y204) was demonstrated, indicating ERK1/2-mediated apoptosis. The ruthenium complex with xanthoxylin was previously reported to induce S-phase arrest and cause ERK1/2-mediated apoptosis in HepG2 cells through a p53-independent pathway (25).
The tumor suppressor p53 has been shown to induce cell-cycle arrest, promote DNA repair, or induce apoptotic cell death in response to cellular stress. The p53 signaling activation is induced by several cellular stress signals, including DNA damage and oxidative stress (36). In addition, DNA damage can be monitored by quantification of the phosphorylation of histone H2AX (γH2AX), which is an early sign of DNA damage. Herein, both ruthenium complexes with piplartine were able to increase phosphorylation of the histone H2AX (S139), and co-treatment with a p53 inhibitor (cyclic pifithrin-α) reduced complexes-induced apoptosis in HCT116 cells, which was confirmed by increased levels of phosphorylation of p53 (S15), indicating p53-dependent apoptosis. Corroborating these results, D'Sousa Costa et al. (31) observed that the TP53 gene was upregulated in HCT116 cells treated with complex 1. Since the p53 protein can functionally interact with MAPK pathways, including JNK/SAPK, p38 MAPK, and ERK1/2, these results corroborate the induction of apoptosis through MAPK signaling by a p53-dependent pathway in complexes-treated HCT116 cells.
The human tumor xenograft mouse model is one of the most widely used models to evaluate the in vivo antitumor effect of new compounds. It retains human tumor cell heterogeneity, making it possible to predict the drug response in human patients, and it allows fast analysis of human tumor response in in vivo protocols (39, 40). Therefore, we also investigated the in vivo anti-colon cancer activity of the ruthenium complexes with piplartine in C.B-17 SCID mice xenografted with HCT116 cells. These complexes were able to inhibit tumor growth with higher potency than piplartine, since the complexes tested at doses of 15 µmol/kg/day showed efficacy similar to that observed for piplartine tested at a dose of 60 µmol/kg/day. No significant changes were observed in body and organ weights or in hematological parameters of any group, indicating low toxicity of these complexes. These in vivo results corroborate the previously described potential of piplartine-based compounds for colon cancer treatment. Piplartine and an N-heteroaromatic ring-based analog were reported to repress tumor growth in an HCT116 xenograft mouse model (40). The ruthenium complex with xanthoxylin was previously reported to inhibit the development of HepG2 cells in a xenograft model (25). Moreover, a ruthenium complex dose-dependently inhibited the growth of human hepatocarcinoma BEL-7402 cells in xenotransplanted mice (41).
In conclusion, we showed here that the ruthenium complexes with piplartine cause apoptosis through MAPK signaling by a p53-dependent pathway in HCT116 cells (Figure 7) and are able to inhibit HCT116 cell growth in a xenograft model with higher potency than piplartine alone. These data indicate that these complexes are promising new anti-colon cancer drug candidates.
ETHICS STATEMENT
The institutional Animal Ethics Committee of Gonçalo Moniz Institute approved the experimental protocol (number 06/2015).
AUTHOR CONTRIBUTIONS
IB, MS, JN, AB, and DB conceived and designed the experiments. IB, SS, LS, JN, RD, CS, and CR performed the in vitro and in vivo experiments. IB, SS, CR, AB, and DB analyzed the data: CR, MS, AB, and DB contributed reagents, materials, and analysis tools. DB wrote the paper. All authors read and approved the final manuscript. | v2 |
2020-07-30T02:04:37.711Z | 2020-07-24T00:00:00.000Z | 225479039 | s2orc/train | “HOUSE NEPAL” PROJECT: INITIAL RESULTS AND PERSPECTIVES FOR AN ANTI-SEISMIC COOPERATION PROJECT
The "HouSe-Nepal" project is being developed within the framework of the ADSIDEO programme (Project for the Centre for Development Cooperation of Universitat Politècnica de València 2018-2020) in collaboration with the Nepalese foundation Abari: Bamboo and Earth Initiative. This action aims to provide the technological and scientific support needed for the construction of anti-seismic housing, taking environmental, socio-cultural, and socio-economic sustainability into consideration as key factors for the project. Students from Kathmandu University are taking part in a series of experimental constructive actions in the town of Dhulikhel, aiming to provide a response to the major constructive problems and limitations of local housing (as starkly highlighted by the 2015 Ghorka earthquake). This paper aims to present the initial results of the project and some possible perspectives and actions to be specified in its final year. Basically, the design efforts are being aimed at the promotion of an architecture taking inspiration from local Nepalese architecture, as a sign of identity which is safer in the event of ground movement, and more sustainable in terms of production and execution than conventional constructions whose format and technology have been imported from Europe.
1. ANTECEDENTS AND REASONS FOR THE PROJECT
Framework of the project
The "House Nepal" project is one of the lines of action of the Centre for Development Cooperation (CCD) of Universitat Politècnica de València, Spain, which serves as the headquarters for developing R&D&I projects in the field of Studies on Development, International Cooperation, and applying Technology for Human Developments to reach Sustainable Design Goals.
These goals propose common responses for the major challenges facing the world at present: poverty, inequality, and sustainability. Therefore, this research should be relevant to securing these, while taking development cooperation into account.
In this regard, the project, led by F. Vegas with other researchers from UPV and the Nepalese Foundation Abari: Bamboo and Earth Initiative chiefly aims to provide scientific and technological support to the local Nepalese population, creating actions for training, dissemination, and empowerment of local specialists, in collaboration with the town of Dhulikhel and students from Kathmandu University.
Nepal, the third poorest country in Asia
The earthquake which devastated Nepal in 2015 was a major blow to one of the poorest countries in Asia (Figure 1). In continuous political transition for the last decade and heavily reliant on an agricultural economy, Nepal has suffered greatly as a result of this tragedy. Located between two Asian giants, India and China, Nepal's level of development is low, with around a quarter of the population living below the poverty line (in Asia this is only exceeded by Afghanistan and Tajikistan). Its main economic activity is agriculture, and 70% of the Nepalese population works in this sector. According to CIA data, in 2018 its GDP reached €24,589 million, with a per capita GDP of €875. In terms of the United Nations Human Development Index (HDI), used to measure the progress of a country and show the standard of living of its inhabitants, the Nepalese are shown to have a poor standard of living (with an updated index for 2018 of 0.574, placing the country in position 147/183 in ascending order of world poverty; UNDP, 2018).
In addition to the weakness of its economy, corruption, and political instability, Nepal faces major gender inequalities despite improvements in recent years. Serious deforestation problems are decimating the country's forests, and most of the country has suffered extreme natural disasters such as harsh winters, strong winds, landslides, torrential monsoons, floods, avalanches, and earthquakes.
Figure 1. Effects caused by the 2015 Ghorka earthquake.
Local architecture between tradition and import
The vernacular architecture of Nepal is rich and varied (Bernier, 1997; Gray, 2006). Almost three quarters of the current population of Nepal live in vernacular housing, built using local materials such as stone, wood, brick or earth in the form of rammed earth, adobe walls, mixed walls, or rendered wattle (CBS, 2014). Over the last thirty years successive governments have directly and indirectly encouraged the abandonment of this vernacular architecture moving towards a contemporary international architecture which uses reinforced concrete and metal structures (Figures 2 and 3). These imported materials, which do not always adapt to the local climatic conditions, were costly for local residents. In addition, the restriction in the use of timber due to the rampant deforestation of the country has led to it no longer being used for ring beams or ledgers in masonry walls, a practice which provided resistance to earthquakes (Yeomans, 1996). The 2015 Ghorka earthquake caused these vernacular buildings, when poorly constructed, to collapse in the same way as modern reinforced concrete buildings. In contrast, traditional and contemporary buildings that were well built with anti-seismic traditional timber or modern metal reinforcements successfully withstood the earthquake (Abari, 2016).
Figures 2 and 3. Dhulikhel: local traditional architecture and imported materials.
The Abari foundation and its catalyst role
The Abari, Bamboo and Earth Initiative foundation is an initiative committed to society and the environment (Figure 4). It designs and builds an architecture which studies, promotes, and celebrates vernacular architectural tradition in Nepal, especially using natural materials like earth and bamboo. This initiative, with a strong social component, has developed and built numerous houses, schools, and infrastructures which benefit the country. Following the 2015 earthquake, given the great number of homes in need of reconstruction and the scattered and isolated nature of most of the settlements affected, a dignified proposal was made for Owner Driven Reconstruction to encourage owners to implement seismic solutions in homes in earthquake-stricken regions. To do so, Abari drew up several manuals for the construction of provisional and permanent housing and schools which were made available to the public free of cost (Abari, 2016a, 2016b, 2016c, 2016d) (http://abari.earth/our-story/). These ideas have also been taken into consideration by the Nepalese government as part of its strategies for the reconstruction of the country.
Figure 4. Details of Abari Foundation working approach.
2. ANSWERS FOR NEPALESE BUILDINGS?
A critical scenario: the construction of housing in Nepal
At present, the construction of housing in Nepal, both in connection with the usual demand and the still plentiful post-seismic reconstruction work, is facing several problems. Most of the collapses of traditional buildings caused by earthquakes were due to a lack of structural integrity, roof collapses, foundation issues, poor building quality, or other issues affecting load-bearing walls. Most of these complications could have been avoided with the design of low buildings with thick rammed earth walls or compacted earth walls with a low centre of gravity, using traditional vertical and horizontal wood connectors incorporated into the masonry or earthen constructions, which have always been vital to resistance to earthquakes (Abari, 2016).
Figures 5 and 6. Dhulikhel: local production of CEB.
However, these traditional wood ledgers have gradually been abandoned due to impediments from the government, which is taking steps to ensure the conservation of the country's forests. This has brought about the gradual appearance of imported solutions incorporating polypropylene meshes between several courses (Adhikary, 2016), galvanized steel bands, and metallic containment gabions (Langenbach, 2015), combining local materials and techniques with minor contributions from external materials.
Thanks to these solutions the use of timber as an interior connection in walls is avoided to a large extent, although this material is still needed for the construction of floors and roofs. This creates a pressing need to think about a housing prototype which may be able to withstand these issues.
The proposal: homes without wood or imported materials
With this diagnosis and conditioning factors in mind, the "House Nepal" project proposes the creation of a timber-free housing prototype with 0 km materials, or failing that, cheap materials which can be locally sourced with little or no processing, so that they are environmentally friendly.
The starting point for the options is the construction of earthen walls, which are traditional in Nepal, in the form of rammed earth or compressed earth block (CEB) walls, a modern reinterpretation of adobe (Figures 5 and 6). CEBs are obtained by compacting a mix of local earth and approximately 5% of a bonding agent (lime or cement), avoiding calcination and fuel costs, so that the process is largely harmless to the environment. The possibility of using ceramic brick tile vaults (Moya, 1947; Fortea, 2001; Davis, 2012) or CEBs (Ramage, 2010; Block et al., 2010) for work on the floors and ceilings is being considered.
Up to now, only preliminary studies of techniques, materials, modelling, and laboratory tests have been carried out, although the construction of housing prototypes is planned for the town of Dhulikhel in 2020. The local population will also be involved in awareness and dissemination activities, and training will be provided for students of Kathmandu University.
Constructive and design actions
In spring 2020, construction and experimentation are expected to take place with a 4x4 m habitation module, to be combined with other similar modules to form a housing unit which does not require wood to guarantee stability and which is also able to withstand the seismic movements that affect the country (Figures 7, 8 and 9). Experimentation will be carried out with the following materials to find the best possible solutions from all perspectives: load-bearing walls (rammed earth, CEBs, or recycled materials), tile vaults (brick, CEB), and reinforcement mesh (jute, hemp, recycled plastic). The aspects studied will be aggregation, bond, intrinsic resistance, durability, and the ability to withstand earthquakes.
The autonomy, resources, and human means needed to guarantee maximum empowerment of the local population will also be studied. This will include women, who could carry out vital work producing reinforcement meshes with jute or hemp fibres, or recycled plastic strips, giving them a frontline role in a country where they have been ignored.
Socio-cultural empowerment actions
This project is proposed as a participatory process, involving the entire population of Dhulikhel -children, women, and men -in keeping with what the Abari Foundation has developed in recent years, empowering local residents in the construction and reconstruction of housing and buildings destroyed in the 2015 Ghorka earthquake. The project aims to support the one set up by the Abari Foundation for Owner Driven Reconstruction to avoid the import of standardized models and foreign technology, as well as delays in the construction and occupation of housing.
Educational and technical training actions
Courses geared towards the students of Kathmandu University will be included in the framework of the campaign for Dhulikhel in spring 2020. These courses will provide technical training both on walls reinforced with plant-fibre mesh and on the technological and scientific aspects of the construction of tile vaults with similar reinforcements.
This specific, high-quality scientific and technical training is based on theoretical-practical courses and the execution of walls and vaults reinforced with mesh like that of the housing prototype; it will allow local young people to learn a technique which uses local materials to improve on existing technology and which can be used beyond the construction of this housing, in other buildings in these towns or even in other parts of the country.
Figure 10. Examples of Abari, Bamboo and Earth Initiative foundation projects.
4. CONCLUSIONS
The "House Nepal" project proposes a holistic approach to the issue of sustainable housing, simultaneously considering social, economic, and environmental aspects. In social terms, residents are involved, traditional construction methods are preserved with improved resistance to earthquakes using simple strategies and local materials. From an economic viewpoint, construction costs are brought down and chiefly transferred to labour and simple work tools, with the possibility of improving and maintaining these homes in the future, using old-new technology from the prototype as a professional way to earn a living. In environmental terms, the impact of construction is minimized through the use of natural materials, taking advantage of daylight and the excellent insulation provided by the natural thermal inertia of earth. | v2 |
2019-11-16T15:39:35.649Z | 2019-11-16T00:00:00.000Z | 208045769 | s2orc/train | Tribological Performance of Non-halogenated Phosphonium Ionic Liquids as Additives to Polypropylene and Lithium-Complex Greases
Four non-halogenated ionic liquids (ILs) with trihexyl(tetradecyl)phosphonium cation are tested as lubricant additives to polypropylene (PP) and lithium-complex (LiX) greases. In pin-on-disk tests at elevated temperatures, the addition of an IL with bis(oxalato)borate ([BOB]) anion reduces wear by up to 50% when compared to the neat LiX base grease; an IL with bis(mandelato)borate ([BMB]) anion reduces friction by up to 60% for both PP and LiX. Elemental analysis reveals that oxygen-rich tribofilms help to reduce wear in case of [BOB], while the friction reduction observed for [BMB] is likely caused by adsorption processes. We find that temperature has a pronounced effect on additive expression, yet additive concentration is of minor importance under continuous sliding conditions. In contrast, rolling-sliding experiments at 90 °C show that the traction performance of LiX grease is dependent on additive concentration, revealing a reduction in traction by up to 30 and 40% for [BMB]- and [BOB]-containing ILs at concentrations of 10 wt%. Finally, an IL with dicyanamide anion reduces friction and increases wear in pin-on-disk tests at room temperature, while an IL with bis-2,4,4-(trimethylpentyl)phosphinate anion increases wear, showing only limited potential as grease additives. Overall, this work demonstrates the ability of non-halogenated ILs to significantly extend grease performance limits.
Introduction
Lubricating greases rely on carefully designed additive packages to reduce friction and wear under harsh operating conditions. While traditional additive formulations often contain considerable amounts of phosphorus, zinc, and sulfur, the growing need for sustainable lubrication technologies is limiting the use of these elements to ever-decreasing quantities [1]. Thus, the development of additives with reduced environmental impact has become a multidisciplinary challenge.
So far, various ILs have been studied for their use in tribological systems, both as neat lubricants and lubricant additives [19][20][21][22][23][24]. The focus of research has been on ILs with imidazolium, phosphonium, ammonium, and pyridinium cations in combination with halogenated anions, such as tetrafluoroborate, bis(trifluoromethylsulfonyl)imide, and hexafluorophosphate. Although this so-called second generation of ILs showed remarkable performance in basic tribological testing, their claimed environmental advantage has been questioned repeatedly [25,26], mostly because of susceptibility to hydrolysis (and therefore corrosion) and the formation of toxic halogen compounds, such as hydrogen fluoride.
In earlier work, our group has synthesized non-halogenated, orthoborate ILs that enable low friction and wear in aluminum-steel [27] and steel-steel [28,29] sliding contacts, both as neat lubricants and additives to oils. Due to their boron-based anions, these ILs potentially exhibit good tribological performance [30] with reduced environmental impact; however, a detailed assessment of these and other properties is the subject of the current and future work.
Focusing on phosphonium ILs, the present study aims to further the understanding of IL performance in lubricating greases. In particular, we are comparing grease blends based on polypropylene (PP) and lithium-complex (LiX) base greases. While LiX grease is the industry standard for lubricating grease, PP has only caught the attention of the research community in the mid-1990s [42]. So far, studies have shown promising tribological performance for this grease type [43][44][45][46], which is of interest due to its non-polar thickener, good film-forming properties [47], comparatively high oil bleed rate at low temperatures, and compatibility with common grease additives [48].
We present a comparison of the friction and wear performance of IL-containing LiX and PP grease blends. Four non-halogenated ILs are studied as candidate additives, and the influence of anion type and IL content is assessed in initial screening tests. Variations of temperature and slide-to-roll ratio (SRR) are then performed for two selected ILs. Finally, scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy are used to analyze the worn surfaces.
Grease Blends
Four ILs with trihexyl(tetradecyl)phosphonium cation were blended with PP and LiX greases in IL concentrations of 2, 5, and 10 wt%. To synthesize the grease blends, greases and ILs were mixed in a Flacktek Speedmixer DAC 600.1 FVZ. Each sample was mixed at 1 400 rpm during two cycles of 5 min. In between the cycles, the grease was carefully scraped from the side of the container in order to guarantee a homogeneous sample. Table 1 gives an overview of base grease properties; the chemical structures of the ILs are summarized in Table 2.
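As a quick illustration of the blending step, the sketch below computes the component masses for a target batch at the three IL concentrations used here. The 50 g batch size is a hypothetical value chosen for illustration only and is not taken from the paper.

```python
# Sketch (not from the paper): component masses for IL-grease blends
# at the stated concentrations, assuming a hypothetical 50 g batch size.

def blend_masses(batch_mass_g: float, il_mass_fraction: float):
    """Return (IL mass, base grease mass) in grams for a target batch."""
    m_il = batch_mass_g * il_mass_fraction
    m_grease = batch_mass_g - m_il
    return m_il, m_grease

if __name__ == "__main__":
    for w_il in (0.02, 0.05, 0.10):  # 2, 5, and 10 wt% IL
        m_il, m_grease = blend_masses(50.0, w_il)
        print(f"{w_il:.0%} IL: {m_il:.1f} g IL + {m_grease:.1f} g base grease")
```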
To increase polarity and thereby facilitate the saponification reaction, a polar base oil component (adipate ester) was added to the LiX base oil blend. This is standard procedure in industrial grease manufacturing when non-polar base oils, such as PAO, are used. The presence of a polar component facilitates a rapid and complete reaction, ensuring that the soap concentration in the final grease blend is not too high. While there is no technical reason to add adipate ester to PP grease, identical base oils were used for both base grease types in order to limit the number of experimental variables. Moreover, it is possible that the adipate ester improves IL solubility, and the effect is expected to be similar for both grease types. A high-pressure differential scanning calorimeter of type Mettler-Toledo HP DSC1 was used to determine the oxidation onset temperature of the LiX grease and its blends with [P 6,6,6,14 ][BMB] and [P 6,6,6,14 ][BOB]. The measurements were carried out in pure oxygen at 3.8 MPa pressure. The procedure followed the standard ASTM E2009-08(2014), with the exception of not using an oxygen flow through the test cell and the sample. The sample amount was 0.1-0.2 mg.
Grease Rheology
Rheological properties of LiX and PP grease were measured on an Anton Paar MCR 301 rheometer with a plate-plate setup and a Peltier-type heating element. To measure the complex viscosity, the temperature was gradually increased (20-140 °C, 3 °C/min) while a constant strain (0.1%) was applied to the sample, using a plate with a diameter of 25 mm and a gap height of 1 mm. The strain value was chosen so that tests were performed in the linear viscoelastic (LVE) region. Prior to testing, a relaxation time of 5 min at 20 °C was allowed for, and measurements were carried out at an angular frequency of 10 rad/s. To determine the yield stress and flow point, strain sweep measurements were carried out at an angular frequency of 10 rad/s and temperatures of 25, 40, 90, and 130 °C. To minimize wall slip effects at higher temperatures, tests were performed with a serrated plate (diameter 50 mm). Grease was loaded at 20 °C and the gap height set to 1 mm. Afterwards, the temperature was increased at a rate of 3 °C/min while the grease sample was at rest, and tests were started 10 min after a set temperature value was reached. The strain was then increased from 0.01 to 1000%.
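For reference, the complex viscosity reported from these oscillatory measurements follows from the measured storage and loss moduli and the angular frequency. This is the standard rheological definition rather than anything specific to this study:

```latex
% Standard definition of the complex viscosity from oscillatory shear data
% (G' = storage modulus, G'' = loss modulus, omega = angular frequency):
\[
  \eta^{*} = \frac{\lvert G^{*} \rvert}{\omega}
           = \frac{\sqrt{G'^{\,2} + G''^{\,2}}}{\omega},
  \qquad \omega = 10~\mathrm{rad/s}\ \text{in these tests}
\]
```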
Pin-On-Disk Experiments
To assess the performance of the grease blends, pin-on-disk (POD) experiments were performed using a Mini Traction Machine (MTM) of type MTM2 by PCS Instruments. The test conditions for all experiments are summarized in Table 3.
Table 3 Test conditions for pin-on-disk (POD) and ball-on-disk (SRR) experiments with entrainment speed v, slide-to-roll ratio SRR, sliding distance s, mean Hertzian contact pressure p_mean, ambient temperature θ, IL mass fraction w_IL, and minimum number of repetitions n
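As a rough cross-check of the contact severity of the kind listed in Table 3, the mean Hertzian pressure of a ball-on-flat contact can be estimated as below. This is a hedged sketch: the 20 N normal load and the elastic constants (E = 210 GPa, ν = 0.3 for both bodies) are illustrative assumptions, not parameters taken from Table 3.

```python
# Hedged sketch: mean Hertzian contact pressure for a ball-on-flat steel
# contact. Load and elastic constants are assumed values for illustration.
import math

def hertz_mean_pressure(load_n, ball_radius_m, e_gpa=210.0, nu=0.3):
    """Mean Hertzian pressure (Pa) for identical elastic ball/flat bodies."""
    e_star = (e_gpa * 1e9) / (2.0 * (1.0 - nu**2))        # reduced modulus E*
    a = (3.0 * load_n * ball_radius_m / (4.0 * e_star)) ** (1.0 / 3.0)  # contact radius
    return load_n / (math.pi * a**2)                      # p_mean = F / (pi a^2)

if __name__ == "__main__":
    p_mean = hertz_mean_pressure(load_n=20.0, ball_radius_m=0.003)  # 6 mm ball
    print(f"p_mean = {p_mean / 1e9:.2f} GPa")
```

With these assumed values the estimate lands around 1 GPa, which is the order of magnitude typical for MTM ball/pin-on-disk contacts.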
The first series of experiments (POD RT) was carried out at room temperature (RT) with IL concentrations of 2 and 5 wt%. Here, the aim was to screen the friction and wear performance of different grease blends and to establish a reference for tests at elevated temperatures.
Based on the initial screening results, two ILs were then selected for further testing. In test series POD 90, the temperature was increased to 90 °C, and grease blends with 10 wt% IL concentration were added to the test matrix.
Finally, test series POD 130 was designed to provide first insights into the tribological performance at high temperatures (130 °C). Based on findings from series POD RT and POD 90, tests were limited to IL concentrations of 2 wt%.
In all experiments, commercially available AISI 52100 steel balls (diameter 6 mm, quality grade G20 according to ISO 3290 [49], hardness ≥ 62 HRC, roughness R_a ≤ 0.032 μm) and steel bearing washers (hardness ≥ 62 HRC, roughness R_a = 0.1 μm) were used as pins and disks, respectively. Before testing, specimens were cleaned with acetone in an ultrasonic bath for half an hour, rinsed with 2-propanol, and dried in ambient air. Afterwards, about 0.3 g of grease was applied to the disk.
During the tests, the friction coefficient was recorded with a frequency of 1 Hz; a grease scoop made from polytetrafluoroethylene was used to prevent starvation of the contact track.
Wear Measurements and Surface Analysis
After the pin-on-disk experiments, the wear scar diameters on the pins were measured in two perpendicular directions using an optical microscope of type MM-60 manufactured by Nikon. The average of both measurements was used for further analysis. For selected pins, the topography and elemental composition of the worn surfaces were analyzed using SEM and EDX (accelerating voltage 15 keV) on a Hitachi S-3700N with a Bruker Quantax EDS system and an XFlash 4010 detector.
Data Analysis
On completion of the pin-on-disk experiments, more than 170 friction curves and mean wear scar diameters were obtained. To present this data set in an accessible way, three average friction coefficients were calculated for each friction curve, based on the sliding distance ranges of 0-500 m, 500-1 000 m, and 1 000-1 500 m. The average friction coefficients were then summarized in a scatter diagram where error bars indicate the standard deviation of the friction coefficient within each sliding distance range. Since wear measurements were only performed at the end of the experiment, the average wear scar diameters are presented without any further modification. Instrument output data were processed using MTM-specific import routines of the Python Tribology Package [50].
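A minimal sketch of this averaging step is shown below. It is not the actual import routine from the Python Tribology Package [50], and the synthetic friction data in the usage example are purely illustrative.

```python
# Minimal sketch of the averaging step described above: friction samples
# are grouped by sliding distance and reduced to a mean and standard
# deviation per 500 m interval.
import numpy as np

def bin_friction(distance_m, friction, edges=(0, 500, 1000, 1500)):
    """Return a list of (mean, std) of the friction coefficient per distance bin."""
    distance_m = np.asarray(distance_m)
    friction = np.asarray(friction)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (distance_m >= lo) & (distance_m < hi)
        stats.append((friction[mask].mean(), friction[mask].std()))
    return stats

if __name__ == "__main__":
    # Synthetic example: equally spaced distance samples with noisy friction.
    d = np.linspace(0, 1500, 1500, endpoint=False)
    mu = 0.11 + 0.01 * np.random.randn(d.size)
    for (lo, hi), (mean, std) in zip([(0, 500), (500, 1000), (1000, 1500)],
                                     bin_friction(d, mu)):
        print(f"{lo:>4}-{hi:<4} m: mu = {mean:.3f} +/- {std:.3f}")
```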
Ball-On-Disk Experiments
To evaluate the traction performance in mixed sliding-rolling conditions, ball-on-disk experiments were carried out. Tests were performed using the MTM2 test rig mentioned above. Here, the entrainment speed v between ball and disk was kept constant while the slide-to-roll ratio was varied between 0 and 195% in steps of 5%, starting at low SRR values. For each step, the steady-state traction coefficient was measured for the case of v_ball > v_disk and for v_ball < v_disk and averaged afterwards. Experiments were carried out at room temperature and 90 °C for two selected ILs (again, based on the results of test series POD RT). Within the limitations of the test rig capabilities, the experimental conditions follow those of the pin-on-disk experiments; a complete overview is given in Table 3.
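For clarity, the slide-to-roll ratio and the averaging of the two sliding directions can be written out as below. The SRR definition is the conventional one for this type of ball-on-disk rig and is stated here as an assumption, since the paper does not spell it out explicitly:

```latex
% Conventional definitions (assumed): slide-to-roll ratio from ball and disk
% surface speeds, and the reported traction coefficient as the average over
% the two sliding directions.
\[
  \mathrm{SRR} = \frac{\lvert v_{\mathrm{ball}} - v_{\mathrm{disk}} \rvert}{v} \times 100\,\%,
  \qquad
  v = \frac{v_{\mathrm{ball}} + v_{\mathrm{disk}}}{2},
  \qquad
  \mu = \tfrac{1}{2}\bigl(\mu_{\,v_{\mathrm{ball}} > v_{\mathrm{disk}}} + \mu_{\,v_{\mathrm{ball}} < v_{\mathrm{disk}}}\bigr)
\]
```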
Steel bearing washers were used as disks (properties see Sect. 2.3); the properties of the AISI 52100 ball specimens (diameter 19.05 mm) are comparable to those of the pins (hardness ≥ 62.5 HRC, R_a < 0.02 μm). The cleaning procedure, grease amount, and grease scoop setup follow that of the pin-on-disk experiments. To ensure even grease distribution at the beginning of the experiment, the disk was rotated for at least 1 min with a tangential velocity of 0.2 m/s before the ball was brought into contact. For each test, a new set of ball and disk specimens was used. Wear was not quantifiable due to the short run time.
Grease Rheology
Although both greases have the same NLGI grade, the complex viscosity of PP is significantly lower than that of LiX (see Fig. 1). As temperature increases, the relative change in complex viscosity is comparable for both grease types.
To quantify the flow behavior in more detail, Fig. 1 also shows the flow transition index (FTI), which is defined as the ratio between the stress in the flowpoint (storage modulus is equal to loss modulus, G′ = G″) and the yield stress.
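Written out, the definition given above is simply the ratio of two characteristic stresses from the strain sweep:

```latex
% Flow transition index as defined in the text: ratio of the shear stress at
% the flow point (where G' = G'') to the yield stress from the strain sweep.
\[
  \mathrm{FTI} = \frac{\tau_{\mathrm{flow}}}{\tau_{\mathrm{yield}}},
  \qquad
  G'\!\left(\tau_{\mathrm{flow}}\right) = G''\!\left(\tau_{\mathrm{flow}}\right)
\]
```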
Simply speaking, the closer the value of the flow transition index is to 1, the more immediate the transition from elastic behavior to plastic flow, i.e., for a grease with FTI = 1, the sample will start to flow immediately as soon as it is deformed.
As can be seen, FTI is well correlated with the complex viscosity for both grease types up to 90 °C, decreasing logarithmically as temperature increases. Yet, for PP grease a sudden drop in FTI can be observed between 90 and 130 °C, reaching an FTI of 1.0 at 130 °C, thus indicating a significantly reduced resistance to deformation at higher temperatures.
Pin-on-Disk at Room Temperature
The friction and wear results for test series POD RT are summarized in Figs. 2 and 3 (left panels). For LiX and PP base greases, the steady-state friction coefficient stabilizes in the range of 0.09-0.12 (LiX) and 0.11-0.13 (PP) towards the end of all experiments. Thus, on average, lower friction is observed for LiX grease. In terms of wear, experiments with PP grease are more repeatable. Adding [P 6,6,6,14 ][BMPP] to LiX and PP base grease leaves friction unchanged; however, on average, wear increases by about 80-100 μm for both grease types and is independent of additive concentration. Similar to the neat base greases, the wear results are more consistent for PP.
An increase in wear scar diameter is also found when [P 6,6,6,14 ][DCA] is added to PP and LiX base grease. The increase is more pronounced for PP grease; however, so is the reduction in friction that comes with it, reaching friction coefficients as low as 0.05. Here, the results may indicate a dependence on additive concentration (lower friction coefficients for higher IL concentrations), but poor repeatability means that the data ultimately remain inconclusive.
[Figure caption, partially recovered: "... [BMPP] additives."] For the friction plots, each data point represents the average friction coefficient within a sliding distance range of either 0-500 m, 500-1 000 m, or 1 000-1 500 m (from left to right for each group). The error bars indicate the standard deviation of the friction coefficient within the distance range. Within each group, darker colors indicate higher IL concentrations (dark → 5 wt%, light → 2 wt%). For the wear results, each data point represents the mean wear scar diameter on the pin, measured after 1.5 km of sliding. Comparing friction and wear data, points with the same marker symbol belong to the same test run. The horizontal line is provided to guide the eye.
When [P 6,6,6,14 ][BOB] is added to PP base grease, friction decreases by about 10-20% (Fig. 3, top left panel). On average, the wear scar diameter remains unchanged with respect to the base grease, and neither friction nor wear shows any dependence on additive concentration (Fig. 3, bottom left panel). Similar trends are found for LiX base grease: the reduction in friction with respect to the base grease is small, and both friction and wear are independent of the additive concentration.
In contrast, the results for [P 6,6,6,14 ][BMB] may indicate a weak dependence on additive concentration: For LiX grease, wear is found to be consistently lower for higher IL concentrations. At the same time, friction remains unchanged with respect to the base grease, indicating that, at room temperature, [P 6,6,6,14 ][BMB] may have good anti-wear properties in combination with LiX grease. For PP grease, the average reduction in wear-though visible-is less pronounced, and no dependence on additive concentration is observed.
In summary, it is found that friction may reduce by 60% when [P 6,6,6,14 ][DCA] additives are added to PP and LiX base grease in concentrations of 2 and 5 wt%; however, this reduction in friction comes at an almost equal increase in wear scar diameter. A similar increase is also observed for [P 6,6,6,14 ][BMPP], yet without a repeatable decrease in friction. Wear remains unchanged for [P 6,6,6,14 ][BOB], and [P 6,6,6,14 ][BMB] shows potential to reduce wear for both base grease types. Thus, the latter two ILs were selected for further pin-on-disk and rolling-sliding experiments.
Pin-on-Disk at 90 °C
The friction and wear results for test series POD 90 are summarized in Fig. 3 (center panels). For both base greases, increasing the ambient temperature to 90 °C leads to an increase in friction and wear. Again, measurements for PP grease show better repeatability, and larger scatter is observed for LiX.
When adding [P 6,6,6,14 ][BOB] and [P 6,6,6,14 ][BMB] additives to PP base grease, friction reduces by 10-20%, similar to the results at room temperature. No correlation between additive concentration and friction coefficient is found for additive concentrations of 2 and 5 wt%. For concentrations of 10 wt%, however, both friction and wear are found to be consistently at the lower end of the spectrum. While the absolute reduction in friction is small here, the wear scar diameters are significantly reduced with respect to the neat base grease. Thus, at 90 °C, adding [P 6,6,6,14 ][BOB] and [P 6,6,6,14 ][BMB] additives to PP base grease effectively offsets the effects of increased temperature on wear performance.
For LiX grease, the interpretation of the friction results is more ambiguous since large scatter occurred, especially in the case of [P 6,6,6,14 ][BMB]. Here, unstable friction behavior is observed for experiments with various additive concentrations, and no clear correlation between IL concentration and friction coefficient is found. While more stable, friction measurements for [P 6,6,6,14 ][BOB] in LiX show a similar trend, with minor reductions in friction and no correlation between IL concentration and friction performance. On average, the wear scar diameters are slightly reduced compared to the neat base grease, yet the effect is less pronounced than for PP grease.
Pin-on-Disk at 130 °C
To get insights into the friction and wear performance at even higher temperatures, test series POD 130 was carried out at 130 °C. The friction and wear results are summarized in Fig. 3 (right panels).
While the friction performance of the base greases is comparable to that for tests at room temperature and 90 °C, an increase in wear is found, in particular for LiX grease. Here, the average size of the wear scar diameter is doubled with respect to the room temperature tests, and significantly larger than at 90 °C.
In contrast to the results at lower temperatures, the addition of 2 wt% [P 6,6,6,14 ][BMB] leads to a pronounced reduction in friction for both grease types. While repeatability is poor, average friction coefficients as low as 0.04 are observed for PP base grease. For LiX, the reduction in friction is, on average, of comparable magnitude. Thus, at 130 °C, [P 6,6,6,14 ][BMB] shows a clear potential to reduce friction for both base grease types. Also, on closer inspection, it can be seen that friction and wear are inversely, if weakly, correlated for [P 6,6,6,14 ][BMB] and PP base grease, which means that low friction comes at the cost of increased wear. More experiments would be required to study this correlation in more detail.
Finally, the addition of [P 6,6,6,14 ][BOB] additives has little effect on friction, but wear is significantly reduced. For both base grease types, wear is observed to drop to room temperature levels, which corresponds to an average wear reduction of 50% for LiX.
SEM and EDX Analysis
Following the pin-on-disk experiments, SEM/EDX analysis was carried out for two selected pin samples from test series POD 130. Since similar friction and wear performance is observed independent of base grease type-a reduction in friction for [P 6,6,6,14 ][BMB], and a reduction in wear for [P 6,6,6,14 ][BOB]-, both pin specimens were selected from experiments with PP grease [see marker type square (filled square) in Fig. 3, 130 °C, data for PP grease]. Figure 4 shows SEM wear scar images of the two pin specimens, based on secondary electron collection. For [P 6,6,6,14 ][BMB] (top), the wear scar image shows no distinct surface features, except for the evenly distributed scratch marks that run horizontally over the contact region.
For [P 6,6,6,14 ][BOB], however, large parts of the wear scar surface appear darker than the reference area outside of the wear scar. Images from optical microscopy confirm that similar surface features are present on all pin specimens from this test series (both LiX and PP) if [P 6,6,6,14 ][BOB] was used as an additive-with the exception of a single experiment with PP grease that produced a relatively large mean wear scar diameter of 425 μm [see marker type dot (filled circle) in Fig. 3, 130 °C]. Thus, a clear correlation is found between the appearance of dark surface features and a reduction in wear.
As shown in Table 4, EDX spectra recorded in different regions of the worn surface (see Fig. 4, bottom) reveal increased levels of oxygen inside the darker areas; yet, except for typical alloying elements of AISI 52100 steel, no other elements are present in significant quantities. In particular, the EDX analysis does not show signs of tribofilms containing boron compounds, which were clearly detected in other tests with [P 6,6,6,14 ][BMB] [29].
Traction Tests at Room Temperature
The results of test series SRR RT are summarized in Fig. 5 (left panels) for LiX (top) and PP (bottom) base grease. For both grease types, the data show a steady increase in traction with increasing SRR values, approaching a sliding friction coefficient of 0.1. This is in agreement with the results from the pin-on-disk tests at room temperature (see Fig. 3, left).
For LiX grease, adding 10 wt% [P 6,6,6,14 ][BOB] leads to a significant and repeatable reduction in traction over the entire SRR range. However, when the concentration is reduced to 2 wt%, repeatability is poor and the traction performance is indistinguishable from that of the base grease. Thus, traction is found to be dependent on additive concentration for LiX grease and [P 6,6,6,14 ][BOB]. For PP grease, however, neither concentration of [P 6,6,6,14 ][BOB] has a traction-reducing effect. In fact, an increase in traction may have been observed here in the central part of the SRR range.
Adding 10 wt% of [P 6,6,6,14 ][BMB] has no discernible effect on traction for both base grease types. While a slight reduction may have been observed for LiX grease, the difference (compared to the neat base grease) is ultimately within the precision limit of the measurement approach. Again, a reduction in concentration to 2 wt% may lead to an increase in traction for PP grease within the central SRR range.
Traction Tests at 90 °C
Increasing the temperature to 90 °C, the traction performance of neat LiX base grease remains unchanged within the repeatability of the experiment (Fig. 5, top right panel). For PP grease, a reduction in traction of about 15% is found for all SRR values (Fig. 5, bottom right panel). Adding 10 wt% of [P 6,6,6,14 ][BOB] to LiX grease leads to a significant and repeatable reduction in traction, especially at lower SRR, which is in agreement with the room temperature tests. The same can be said for 10 wt% of [P 6,6,6,14 ] [BMB] in LiX base grease, although the effect appears to be less pronounced here. For lower concentrations of either IL, the results may indicate a slight reduction in traction for higher SRR values.
For PP grease, no IL type is found to have a significant impact on traction performance at 90 °C, irrespective of additive concentration.
Discussion
For the grease systems investigated above, [P 6,6,6,14 ][BMPP] shows little potential for friction and wear reduction. Yet, promising tribological behavior was found for this IL in other studies on the nano-and macro-scale, including a reduction in sliding friction for steel and titanium tribopairs [51][52][53]. The poor wear performance observed in our experiments is also not well documented in the literature, where [P 6,6,6,14 ][BMPP] was previously found to decrease rather than increase wear [51,52]. Thus, our experiments indicate that results previously obtained for neat [P 6,6,6,14 ][BMPP] may not be easily transferable to more complex tribosystems. Given the small sample size and parameter range of our experiments, further tests are required for a more general verdict on the tribological performance of this particular grease additive. Our findings for [P 6,6,6,14 ][DCA] are equally limited in generalizability; nevertheless, for the grease systems studied here, the observed increase in wear limits the range of potential use cases-despite the outstanding friction performance in combination with PP grease at room temperature. Since this IL has not yet been widely used as a candidate additive in the nano-and macro-tribological community-let alone in the relatively small sub-discipline of IL grease research-, we cannot draw on a large number of published findings for comparison. However, as [DCA] anions with other types of cations have partially shown promising friction [54][55][56][57] and wear [58] performance in other studies, a change in cation may be a promising way forward here. That being said, care should be taken as ILs with [DCA] anions may produce highly toxic compounds such as hydrogen cyanide during decomposition [59].
Looking at the orthoborate ILs selected for the second stage of testing, [P 6,6,6,14 ][BOB] shows high potential to reduce wear under severe sliding conditions. For this IL, the wear scar diameters are found to remain at room temperature levels for a wide range of temperatures. SEM/EDX analysis of selected pin specimens shows tribofilm formation; their appearance correlates well with low wear. Thus, we suggest that these films protect the steel surfaces from excessive wear under severe sliding conditions. The mechano-chemical processes that lead to the formation of the tribofilms, however, are currently not completely understood and subject of ongoing, more fundamental research in our group.
In contrast, no tribofilms were detected for grease blends with [P 6,6,6,14 ][BMB] additives, which indicates that physical adsorption processes rather than chemical reactions may play a dominant role in the reduction in friction observed in the POD experiments. A thermogravimetric analysis shows that [P 6,6,6,14 ][BMB] is more thermally stable than [P 6,6,6,14 ] [BOB]; in addition, the [BMB] anions have a stronger potential to interact electrostatically with a tribo-charged surface, facilitating the formation of low shear boundary films enriched in cations. While a detailed assessment of the friction reduction mechanism for [P 6,6,6,14 ][BMB] is part of ongoing work, it requires a simplified lubricant system in order to-for now-bracket off the complex interaction between base oil, thickener, IL, and steel surfaces.
Looking at the above results from a more application-oriented perspective, though, it can be concluded that, under rolling-sliding conditions, base grease type and IL concentration are deciding factors for the experimental outcome: high concentrations of [P 6,6,6,14 ][BOB] facilitate a reduction in traction, but only for LiX base grease. At elevated temperatures, a reduction in traction is found for both orthoborate ILs at 10 wt% concentration, again only for LiX.
To explain these observations, it is worth noting that the test conditions of the SRR tests are less severe compared to those of the POD tests. Firstly, the test duration is much shorter, leaving little time for surface and grease degradation processes to take place, let alone tribo-chemical reactions. Secondly, due to the high levels of rolling at lower slide-to-roll ratios, the accumulated friction energy input is greatly reduced compared to the rather severe sliding tests. Thus, differences in lubricant performance are likely caused by the physical interaction between additive, thickener, and steel surface, rather than the mechano-chemical processes that are typically associated with the formation of solid tribofilms.
In previous ball-on-disc experiments with PP and LiX greases, it was found that PP grease forms significantly thicker lubricating films at low to moderate speeds (and temperatures) compared to LiX grease [47], despite its nominally lower complex viscosity. This was attributed to lumps of PP thickener increasing the local film thickness by purely mechanical means. Hence, in our case, the inherent film-forming properties of PP grease may overshadow ion adsorption processes, which occur on much smaller scales and constitute one of the fundamental mechanisms of friction and wear reduction in IL-lubricated contacts [19,22]. This is in agreement with the fact that the traction-reducing effect of [P 6,6,6,14 ][BOB] in LiX grease becomes more pronounced as temperature increases, which can likely be attributed to a temperature-induced change in oil bleed behavior and IL solubility, as well as a reduction in base grease film-forming capabilities [47]. As a result, the inherent film-forming properties of LiX are reduced to a point where IL adsorption becomes relevant. Following this line of thought for the case of PP grease, a further increase in temperature to 130 °C (though hardly relevant from an application point of view) may lead to a more visible additive expression in the SRR tests.
Moreover, the SRR experiments show that the additive expression is most pronounced at low-to-medium slide-to-roll ratios: the more the conditions approach pure sliding, the less important the lubricant composition becomes. One reading of these results is that the film-forming abilities of the two base greases become more similar at higher levels of sliding, assuming that no degradation and aging processes have occurred yet.
This hypothesis finds support in the POD experiments, where friction and wear performance are similar for both grease types and additive concentrations during the first 500 m of sliding. The tests with [P 6,6,6,14 ][BMB] clearly show that the additive expression only becomes pronounced in later stages of the experiments, although the friction-reducing effect does not seem to stem from the progressive (and potentially slow) build-up of chemical reaction films. 1 Thus, we suggest that grease degradation processes play a vital role in the POD tests. Similar to the loss in film-forming abilities described above, grease degradation may be accelerated at elevated temperatures, giving better surface access to the ILs as sliding distance increases.
Previous studies have investigated the effect of aging and metal debris on grease performance [60,61]. Here, it was found that when LiX grease is thermally stressed and metal debris enters the grease, samples show structural changes (due to oxidation) that cause a decrease in oil release and a loss of ability to replenish the contact. The resulting thickening of the grease then sets off a self-enhancing cycle of increasing wear as more metal debris enters the grease, further accelerating the aging process.
The catalytic effect of metal debris also increases the thickener degradation rate for PP grease, effectively increasing the complex viscosity [60]. However, in contrast to LiX grease, the resulting increase in friction (and therefore temperature) will then cause local melting of the polypropylene thickener and essentially liquefy the grease, thereby replenishing the contact and reducing the temperature.
Thus, while LiX grease enters into a thermal runaway reaction at its end of life, PP grease continues to replenish the contact through a self-regulating cycle of heating and cooling before the lubricant finally becomes too thick for efficient lubrication [60].
Since an increase in temperature will accelerate the above processes, we suggest that the increase in wear for LiX grease at 130 °C indicates an imminent lubrication failure due to thickening of the grease. As a result, the integrity of the tribological contact increasingly relies on the IL additives, which not only help to reduce friction ([P 6,6,6,14 ] [BMB]) and wear ([P 6,6,6,14 ][BOB]), but may also retard oxidation (as shown in Fig. 6). Similarly, additive expression at 130 °C may become more pronounced in case of PP grease as liquefaction processes increase additive solubility and general access to the surface.
Conclusions
Pin-on-disk and ball-on-disk experiments were carried out for PP and LiX grease blends containing non-halogenated phosphonium ILs at concentrations of 2-10 wt%. Generally, the performance limits of the greases can be significantly extended by adding ILs. In particular, we find the following:

• In pin-on-disk tests, [P 6,6,6,14 ][BOB] reduces wear over a wide temperature range compared to the neat base greases, while sliding friction is slightly reduced. The reduction in wear scar diameter is especially large, up to 50%, for LiX grease at 130 °C. The decrease in wear correlates with the appearance of oxygen-rich surface features on the worn surfaces. At room temperature, adding 2 and 5 wt% of [P 6,6,6,14 ][BOB] to LiX and PP base greases has only minor effects on the friction and wear performance. No clear correlation between IL additive concentration and tribological performance is found.
• In ball-on-disk tests, adding 10 wt% of [P 6,6,6,14 ][BOB] to LiX grease reduces traction by up to 40% over a wide range of slide-to-roll ratios. This reduction does not occur for PP grease.
• For greases containing [P 6,6,6,14 ][BMB], an increase in temperature leads to a reduction in sliding friction of up to 60%, while wear is of the same order of magnitude as for the neat base grease. Only a minor influence on the friction and wear performance of LiX and PP greases is observed when adding 2 and 5 wt% of [P 6,6,6,14 ][BMB] to greases at room temperature.
• [P 6,6,6,14 ][BMB] reduces traction in rolling-sliding conditions over a wide range of slide-to-roll ratios. This reduction occurs at 90 °C and, similar to [P 6,6,6,14 ][BOB], is observed for LiX-based greases only.
• [P 6,6,6,14 ][DCA] additive reduces sliding friction for both base grease types at room temperature but also shows increased wear.
• [P 6,6,6,14 ][BMPP] additive causes increased wear, but does not reduce friction in sliding contacts at room temperature.

Fig. 6 (caption): Oxidative stability measurements for LiX grease. Measurements were performed for the neat base grease as well as grease blends with 10 wt% [P 6,6,6,14 ][BMB] and [P 6,6,6,14 ][BOB]. The dots mark the oxidation onset temperature, which increases for grease blends containing IL additives. The difference in intensity of the thermograms is due to a slightly different amount of grease sample used.

We would like to thank the I-LEAP research team for helpful discussions and valuable comments. Also, we would like to thank Sagar P. Mahabaleshwar for his help with initial screening tests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Reactive oxygen species measured in the unprocessed semen samples of 715 infertile patients
Abstract. Purpose: To determine whether reactive oxygen species (ROS) in semen samples could be measured with the Monolight™ 3010 Luminometer. Methods: Using the Monolight™ 3010 Luminometer, ROS were measured in the unprocessed semen samples of infertile male patients, and the luminescence of 190 semen samples was measured. The samples were classified as "luminescence-detectable" (n = 89) and "luminescence-undetectable" (n = 101). Thereafter, the luminescence of the semen samples that had been obtained from the 715 infertile patients was measured and compared with Sperm Motility Analyzing System measurements. Moreover, in order to investigate the consistency of the ROS measurements, the chemiluminescence values of 84 samples were measured concurrently by using the Monolight™ 3010 Luminometer and the 1251 Luminometer™. Results: The semen volume, sperm motility, and progressive motility of the samples were significantly higher in the luminescence-undetectable samples. The sperm motility, straight-line velocity, curvilinear velocity, mean amplitude of lateral head displacement, beat cross frequency, and progressive motility showed an inverse correlation with the logarithmic-transformed luminescence level in the luminescence-detectable samples. The integrated chemiluminescence levels in the 84 samples were correlated. Conclusion: The substance that was measured in the unprocessed semen with the Monolight™ 3010 Luminometer and that stimulated chemiluminescence is ROS.
decline in sperm motility. 5,6 Furthermore, ROS infiltrate into sperm and break down the sperm DNA. 7 These adverse effects of ROS result in a decrease in the natural pregnancy rate 8 and the fertilization rate of assisted reproductive technology. 9 As a result of the negative correlation of ROS levels with sperm motility and fertilization, the ROS levels in semen could serve as an independent marker of male factor infertility, particularly in cases of idiopathic infertility. 10,11 Numerous studies concerning ROS in semen have been reported. [6][7][8][9][10][11][12] Chemiluminescence assays are widely adopted in ROS measurement. 6,8,[12][13][14][15][16][17] Constant chemiluminescence after the addition of 5-amino-2,3-dihydro-1,4-phthalazine-dione (luminol) to unprocessed or washed semen is measured with a luminometer. The authors had previously measured ROS in semen samples by using the 1251 Luminometer™ (LKB Wallac, Turku, Finland); however, cuvettes were unavailable for ROS measurement with the 1251 Luminometer™. Nevertheless, it was possible to obtain the Monolight™ 3010 Luminometer (BD Biosciences Pharmingen, Ltd., San Diego, CA, USA) and the necessary cuvettes.
To the authors' knowledge, no previous study has measured ROS in semen by using the Monolight™ 3010 Luminometer. Therefore, ROS were measured in the unprocessed semen samples of infertile male patients by using this device. This study aimed to determine whether ROS in whole semen samples could be measured with the Monolight™ 3010 Luminometer. The semen analyses were performed two or three times before treatment with the Sperm Motility Analyzing System (SMAS™; DITECT, Ltd., Tokyo, Japan). After measurement of the semen volume with a 10 mL serological pipet (FALCON®; Corning, Tewksbury, MA, USA), the following parameters were measured by using the SMAS™: sperm concentration (×10^6/mL); sperm motility (%); straight-line velocity (VSL) (μm/s) (measured as the straight-line distance from beginning to end of a sperm track divided by the time taken); curvilinear velocity (VCL) (μm/s) (measured as the total distance traveled by a given sperm divided by the time elapsed); linearity index (LIN), or the ratio of VSL to VCL; mean amplitude of lateral head displacement (ALH) (μm) (measured as the mean width of sperm head oscillation); beat cross frequency (BCF) (Hz), defined as the frequency of the sperm head crossing the average sperm path; and progressive motility (%), or the fraction of spermatozoa that progress at a rate >25 μm/s in liquefied, unprocessed semen.
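The kinematic definitions above translate into simple computations. The sketch below is illustrative only (it is not the SMAS™ algorithm) and assumes a tracked head-position sequence in micrometres sampled at a fixed frame rate; ALH and BCF additionally require the smoothed average path and are omitted here.

```python
import numpy as np

def sperm_kinematics(track_um, fps):
    """Illustrative computation of basic kinematic parameters from a
    tracked sperm-head path, following the definitions in the text.

    track_um : (N, 2) array of head coordinates in micrometres, one row per frame.
    fps      : frames per second of the recording.
    Returns VSL and VCL in um/s and the linearity index LIN = VSL/VCL.
    """
    track = np.asarray(track_um, dtype=float)
    duration = (len(track) - 1) / fps                    # elapsed time in seconds

    # VSL: straight-line distance from the first to the last point, divided by time
    vsl = np.linalg.norm(track[-1] - track[0]) / duration

    # VCL: total point-to-point path length, divided by time
    step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1)
    vcl = step_lengths.sum() / duration

    lin = vsl / vcl if vcl > 0 else 0.0
    return vsl, vcl, lin

# Example with a synthetic 1-second track sampled at 60 Hz
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0.5, 0.3, size=(61, 2)), axis=0)
print(sperm_kinematics(track, fps=60))
```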
Measurement of the chemiluminescence of the semen samples with the Monolight™ 3010 Luminometer
During each patient's first consultation, immediately after the semen analysis, the chemiluminescence of the sufficiently liquefied semen samples was recorded with the Monolight™ 3010 Luminometer, a double-tube luminometer, at room temperature (~25°C) in a slightly darkened laboratory room, after the addition of 40 μL of 100 mmol L−1 luminol to 500 μL of whole semen. 16 First, the integrated chemiluminescence between 0 and 200 seconds, without the addition of luminol, was measured. Subsequently, the integrated chemiluminescence over a similar period of time after the addition of luminol to the samples was measured. The integrated chemiluminescence value was calculated as the difference between the values after and before the addition of luminol to the semen samples. When the integrated luminescence value was negative, the value was defined as zero. When comparing the sperm motile parameters and the lumines-
Comparison of the data measured with the 1251 Luminometer™ and the Monolight™ 3010 Luminometer
The luminometer was changed from the 1251 Luminometer™ to the Monolight™ 3010 Luminometer because cuvettes for ROS measurement with the former were unavailable. The chemiluminescence of the semen samples from 84 patients was measured by using both the 1251 Luminometer™ and the Monolight™ 3010 Luminometer in order to investigate the compatibility of chemiluminescence between the two devices. Moreover, only 84 cuvettes for the 1251 Luminometer™ were available; hence, 84 samples were examined.
The 1251 Luminometer™ was used to measure the whole-semen ROS level according to a previously reported method. 16 When the luminescence was ≥0.1 mV/s at peak value, ROS production in this sample was considered to be detectable (Figure 1). Additionally, the integrated ROS values were used to clarify the differences between the ROS-detectable and the ROS-undetectable cases. The integrated ROS levels between 0 and 30 min after the addition of luminol were expressed as mV/30 minutes and considered as a new ROS level of the sample.
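The two scoring rules described above (the 0-200 s integral difference for the Monolight™ 3010 and the 0.1 mV/s peak threshold plus baseline-subtracted 0-30 min integral for the 1251 Luminometer™) amount to simple arithmetic. The sketch below is illustrative only; the function names, sampling assumptions and default arguments are hypothetical and do not come from the instrument software.

```python
import numpy as np

def monolight_integrated_value(pre_counts, post_counts, dt=1.0):
    """Monolight 3010 scoring as described in the text: integrated
    luminescence over the 0-200 s window after luminol minus the
    integral over the same window before luminol, with negative
    differences set to zero. A rectangular-rule integral is used;
    `dt` is the sampling interval in seconds."""
    diff = (np.sum(post_counts) - np.sum(pre_counts)) * dt
    return max(float(diff), 0.0)

def luminometer_1251_ros(readings_mv, dt_s, baseline_mv=0.0):
    """1251 Luminometer scoring as described in the text: ROS production
    counts as 'detectable' when the peak reaches 0.1 mV/s, and the
    integrated level is the area above the baseline over the first
    30 minutes after luminol addition, expressed as mV/30 min."""
    readings = np.asarray(readings_mv, dtype=float)
    detectable = readings.max() >= 0.1
    window = readings[: int(30 * 60 / dt_s) + 1]         # first 30 minutes
    integral = float(np.sum(window - baseline_mv) * dt_s)
    return detectable, max(integral, 0.0)
```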
The integral ROS level, measured with both luminometers, was plotted on a graph and the correlation between the ROS levels, as measured by the two different luminometers, was investigated.
Statistical analysis
The statistical values are presented as the mean ± standard deviation. A chi-square test was used to examine the association between disease and the time-course-curve pattern, as well as between disease and cases showing emission values above the threshold. A Mann-Whitney U-test was used to compare the luminescence values and semen parameters in the luminescence-detectable and -undetectable groups.
Correlations between the log (luminescence value) and the semen parameters measured with the SMAS™, and those between the luminescence values that had been measured with the two luminometers, were investigated by using Spearman's correlation coefficient. Differences were considered to be statistically significant when the P-value was ≤.05. All the calculations were performed with the IBM SPSS Statistics for Macintosh software (v. 22.0; IBM Corporation, Armonk, NY, USA).
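The analysis plan above maps onto standard routines. The sketch below uses scipy rather than the SPSS software the authors used, and runs on hypothetical stand-in arrays; it only illustrates which test corresponds to which comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the study variables
motility_detectable   = rng.normal(45, 15, 89)    # luminescence-detectable group
motility_undetectable = rng.normal(55, 15, 101)   # luminescence-undetectable group
log_luminescence      = rng.normal(3, 1, 89)

# Group comparison of a semen parameter (Mann-Whitney U-test)
u_stat, p_u = stats.mannwhitneyu(motility_detectable, motility_undetectable,
                                 alternative="two-sided")

# Correlation between log(luminescence) and a motility parameter (Spearman)
rho, p_rho = stats.spearmanr(log_luminescence, motility_detectable)

# Chi-square test on a diagnosis-by-detectability contingency table
table = np.array([[30, 25], [40, 40], [10, 9], [9, 27]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(p_u, p_rho, p_chi)
```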
Time course of chemiluminescence
When the chemiluminescence was measured with the Monolight™ 3010 Luminometer, the luminescence level increased for several seconds and then decreased rapidly before the addition of luminol. When multiple measurements were performed on the same specimen, the chemiluminescence values were almost the same (data not shown).
Therefore, the data that were measured by using this study's method were reproducible, and hence chemiluminescence measurement was performed once after the semen analysis. Moreover, the relationship between the time-course-curve pattern and the chemiluminescence value was investigated by using data from 190 samples, and the time course was recorded from the start of the study. The patients were diagnosed as having idiopathic infertility (n = 55), untreated varicocele (n = 80), spermatogenic failure due to cancer chemotherapy (n = 19), treatment of undescended testis (n = 2), or other causes (n = 6), and 29 had an infertile female partner. These 190 samples were divided into three groups, according to the time-course pattern after the addition of luminol. The luminescence level increased rapidly and then decreased slowly for 100-150 s in Group A (n = 62). The integral luminescence value of the samples in this group was 51.62 ± 166.89-fold higher than that observed before the addition of luminol. The luminescence level of the second sample group (Group B, n = 27) increased for several seconds and did not decrease during the measurement. The integral luminescence value of the samples in this group was 24.61 ± 31.84-fold higher than that measured before the addition of luminol. In Group C (n = 101), although the peak value increased, the pattern of the time course was similar to that before the addition of luminol (Figure 2).
The integral luminescence value was 1.68 ± 0.57-fold higher than that measured before the addition of luminol (Table 1). Table 2
Determinants of the threshold level of chemiluminescence
When the measured luminescence values were arranged in order from the lowest, all the samples exhibiting a luminescence value of (Figure 7)

Figure 1 (caption): Reactive oxygen species (ROS), as measured by the chemiluminescence method with a 1251 Luminometer™. 16 When the peak level was ≥0.1 mV/s, the ROS formation was considered to be positive. The integral level of ROS production was calculated by subtracting the area under the baseline from the total chemiluminescence values, between 0 and 30 minutes after the addition of 40 μL of 100 mmol of luminol to 500 μL of unprocessed semen, and expressed as mV/30 min per 10^6 spermatozoa.

Figure 2 (caption): Time course of luminescence in the unprocessed semen samples. Before the addition of 40 μL of 100 mmol of luminol, the luminescence value increased for several seconds, before decreasing rapidly. After the addition of luminol, a rapid increase and then a slow decline in the luminescence level occurred in Group A, a rapid increase and then the maintenance of luminescence during the measuring time occurred in Group B, and a course similar to that observed before the addition of luminol, but with a slight increase in the integral luminescence level, occurred in Group C.
Correlation of the luminescence levels measured by the 1251 Luminometer™ and the Monolight™ 3010 Luminometer
The chemiluminescence values of the 84 samples were measured concurrently with the two luminometers and plotted to determine whether a correlation between the devices existed. The integrated luminescence level in the 84 semen samples, as measured by the Monolight™ 3010 Luminometer, was strongly correlated with that measured by the 1251 Luminometer™ (conversion formula: y = 31.974x + 1769.6; P < .001, R = 0.824) (Figure 8).
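The reported relationship between the two devices can be captured in a small helper. The conversion coefficients are taken from the text; the paired readings and the fitting code below are an illustrative reconstruction, not the authors' actual data or analysis.

```python
import numpy as np
from scipy import stats

def monolight_from_1251(x_1251):
    """Apply the reported conversion formula y = 31.974x + 1769.6 to
    integrated values measured with the 1251 Luminometer."""
    return 31.974 * np.asarray(x_1251, dtype=float) + 1769.6

# Synthetic paired readings standing in for the 84 samples
rng = np.random.default_rng(2)
x = np.linspace(0, 500, 84)
y = monolight_from_1251(x) + rng.normal(0, 800, 84)

slope, intercept, r, p, stderr = stats.linregress(x, y)   # linear fit
rho, p_rho = stats.spearmanr(x, y)                        # rank correlation
print(slope, intercept, r, rho)
```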
DISCUSSION
Oxidative stress is one of the major factors that could result in male infertility. 2,3 A study reported that several male patients with infertility of unknown cause had OS. 17 Moreover, testicular damage due to OS is induced by male infertility diseases, such as varicocele and cryptorchism, 18 and by exposure to chemicals, such as anticancer drugs, 18 heavy metals, 19 and phthalates. 18 Numerous studies on the relationship between ROS in semen and sperm motility, as well as fertility, have been published.
The methods of measuring OS in semen include the following: (i) a direct assay that measures the amount of ROS, including chemiluminescence, electron spin resonance, the nitroblue tetrazolium test, and the thiobarbituric acid assay; and (ii) an indirect assay that measures the effect of OS, including the measurement of antioxidants, lipid peroxidation, and DNA damage. 11 Of these methods, the chemiluminescence assay is used widely to measure ROS in semen. [6][7][8][13][14][15]18,20 To (Table 2; P = .065). This finding could be because the dose of luminol (100 mmol L−1, 40 μL) in the current study was higher than that in other reports. 6,8,[13][14][15]21 The authors assumed that increasing the dose of luminol would make ROS detection clearer, and the authors adopted the luminol dose that was reported by a specific study. 16 However, in this study's results, the percentages of the semen samples in Groups A and B were 31.9 and 14.1%, respectively. Among all the patients, the percentage of samples with values greater than the threshold level was 44.1%. These percentages were almost similar to those in other reports. Moreover, Figure 3 shows that the ranges of logarith-

Figure 5 (caption): Number of patients with a chemiluminescence value greater or less than the threshold level. The patients are categorized according to the cause of their infertility. P < .001, according to the chi-square test.

Figure 6 (caption): Chemiluminescence (mean ± standard deviation) value above the threshold level of the samples in each group of patients.
Conflict of interest:
The authors declare no conflict of interest. Human and Animal Rights: All the procedures that were followed were in accordance with the ethical standards of the responsible committees on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and its later amendments. This article does not contain any study with animal participants that has been performed by any of the authors.
"Sustainable development" and globalisation processes
How do we relate globalisation to other types of mondialisation, such as communications and economics? The answer should be: any globalisation should be motivated by the general interest of humanity and should strive towards that aim. In practice, this means that the international protection of human rights and environmental rights needs not only jurisdictional (legal–political) but also, above all, ethical standards. Without them, a conflict between different types of globalisation could become damaging, almost dangerous. The very idea of the global village that has so well explained the phenomenon of mondialisation can assist in solving the problems that need to be addressed. One of the features of each village is the intense connection among the inhabitants. That phenomenon is now present globally, which is the essence of globalisation. That implies a global responsibility that must be implemented on the one hand by communities and on the other by individuals, especially those who serve in the service of the community: politicians. The crucial question arises, "How to define the responsibility of the one and the others?" It is evident that at the top of the pyramid there are major planetary problems whose solutions require the cooperation of all nations and countries. The straightforward phrase "Think globally, act locally" expresses the rule of the fundamental game of the global world and its diversity: a possible ethic of sustainable development.
"SUSTAINABLE DEVELOPMENT" AND GLOBALISATION PROCESSES
Ivan Koprek
https://doi.org/10.32701/dp.21.1.2

The term "sustainable development" was coined in the 1970s as a concept that encompasses all further human development needed, together with mandatory environmental protection. To date, according to the available data, there have been several hundred attempts to define this term. Generally speaking, the best-known and most widely accepted definition dates back to 1987. In that year, in a report titled Our Common Future, better known as the Brundtland Report, a definition of sustainable development was published that has remained the leading one until today: "Sustainable development is a development that meets the needs of the present without compromising the ability of future generations to meet their own needs." 1 In the last decades of the 20th century, numerous conferences and meetings were organised addressing the development and implementation of the concept of sustainable development in all elements and at all levels of global societies. Thus, one of the most famous conferences was held in Rio de Janeiro in 1992. The result of the Rio de Janeiro meeting is a document called Agenda 21.
Agenda 21 is a program for sustainable development at the global level, encompassing the social and economic dimension, the protection and management of resources for development, strengthening the role of key groups, and the implementation of funds. The slogan "Think globally, act locally!", which was promoted there, is considered today to be the main principle and essential guideline for thinking and acting in accordance with sustainable development. 2

1. What is sustainable development?
On the whole, sustainable development is explained today as a process of change in which the utilisation of resources, the direction of investment, the orientation of technological development and institutional reforms (political, educational, legal, financial and other systems' reforms) are in harmony with each other and make it possible to meet the needs and expectations of present and future generations. It can be said that sustainable development represents a general direction, an aspiration to create a better, more ethical world, a world that balances social and economic factors with the protection of the environment. There are three domains of sustainable development: economic, social and environmental (ecological). The process of globalisation is undoubtedly linked to them. 3 Globalisation, that is, the creation of a world without borders, is, in the opinion of many, the result of a) the worldwide expansion of communications and b) the wave of liberalisation of the exchange of goods that followed the fall of the Berlin Wall. Both statements are true, however, only partially. The globalisation of communication connections is the outcome of technological development, while globalisation in the field of economics is the result of ideology and strategies.
The "mondialisation of civilisation", the birth of what M. McLuhan called the "global village" as its product, is the result of a combination of the two previous factors: economy and communication. Yet, if we look at the state of things over the last fifty years, we will realise that there is actually one form of globalisation that precedes them, one which results from the recognition of a simple thing: the environment.
The global approach to environmental issues stems more from existing states of affairs and efforts to understand facts than from technological or political data or thoughtfully and carefully planned strategies. Pope Francis states in the encyclical Laudato si that a global approach presupposes the recognition and acknowledgement of human activities. It is enough to mention only one aspect: threats to biodiversity due to the massive and accelerated disappearance of species (whales, birds, tropical plants…), of which the primary cause is human activity. As soon as this phenomenon was accepted and understood, sometime in the 1960s, its global character became apparent. Thus, in many environments, public opinion became aware of the danger threatening our planet due to the increasing number of disorderly human activities, compounded by the influence of not always appropriate technologies. The symptoms of environmental destruction, such as water pollution, black tides on shores, poisoned fogs and the depletion of natural resources, became undeniable and testified to the risk that people caused by their activities. The biosphere itself is also endangered. Humanity simply had to react.
The fact that the term "environment" itself is a new term in many languages illustrates the swiftness of this evolution. Nevertheless, the relations of humans with nature and its elements have existed since ancient times. Cosmological representations occur in ancient civilisations or in those that stem from their roots. In African, American-Indian or Asian civilisations, the Earth is the "goddess", the "mother" of humanity, the human matrix and at the same time the provider of food. Therefore, she is sacred, and animals, like plants, are worthy of respect. The three great monotheistic religions (Judaism, Christianity, Islam) preach that the universe is God's work and belongs to Him.
This conception, as mentioned before, changed with the progressive emergence of rigid rationalism and mercantilism, which is interested only in the market value of goods. The deep sense of respect that humans had for nature was progressively replaced by the desire (greed!) for profit. With the advent of the industrial age and lightning-fast technological advancements, a new belief emerged: the human being, as the sole lord of the world, may allow himself anything and dominate nature.
By an odd, but by no means illogical, coincidence, the materialism that has conquered Western societies and progressively spread to the rest of the world has shamelessly led to the exploitation of the biosphere, which was largely considered a communal natural resource in the service of humanity. Even the first signs of concern for the preservation of specific environmental components had anthropocentric features: some aspects of nature, such as fish, birds or seals (used for fur production!), were protected only if they were of use to humans.
Some reasons for discussing sustainable development
The confusion described here raises some questions: "Why is the environment protected? What is the reason for its protection? Why are we talking about, or why should we be talking about, sustainable development?" 4 There are at least four possible answers to these questions: the first scientific, the second economic, the third humanistic and, finally, the fourth ethical.
For scientists, a series of findings concerning increasing short-term and long-term pollution, the depletion of the stratospheric ozone layer, the loss of biodiversity and climate change are warning signs that call for more ethical action.
The economic explanation, which dominated the 1970s, takes environmental elements to be the natural resources needed for the life and development of economic systems. While traditionally the list of these resources contained only arable land, forests, minerals and wild flora and fauna, with the decline of unpolluted seas, drinking water and clean air, these too have now gained economic value. The use of natural resources has had its price, which has led to more rational use, that is, to their management.
The approach we call humanistic holds that humanity should be at the heart of the biosphere and that natural goods should be shared equitably and passed on to the generations to come. One form of this understanding is long-term development, a concept that has dominated public discourse since the 1992 Rio de Janeiro conference: goods must meet the needs of present and future generations. For a few years now, there has been talk of natural law within the present generation and of law among future generations.
An ethical explanation encompasses several versions. Religious people understand the world as taught by the Bible; Christians, but also Jews and Muslims, are reminded of the fact that the human being is not the master of the Earth but merely its manager, and thus responsible for it. Humanity is part of the biosphere, and by destroying it, man ruins himself, too. Some conclude from this that humanity should be considered one of the millions of species that form an integral part of the global ecosystem. Hence, we can talk about natural law among species.
We have concluded that all of these explanations have globalisation in mind, that they hold the problem of environmental protection to be a planetary problem. Moreover, it is acknowledged that environmental issues are above the current political and economic structures. In the late 1960s, international instruments reminded us that the environment knew no boundaries, while in the 1980s, it was officially recognised that the greatest threats to our biosphere were an issue of worldwide importance. These include stratospheric ozone depletion, climate change, desertification, deforestation, accelerated loss of biodiversity and depletion or pollution of natural resources, whether we talk about fish or drinking water supplies.
No country, no continent, no matter how developed, is capable of leading the battle alone. Mondialisation is imposed because of the necessary solidarity in facing these threats. Thus, one can speak of the emergence of a new element in the general interest of humankind, of global sustainable development.
How to protect the environment?
Every human society gathers around several values and emphasised principles that legal language calls constitutive. In most countries of the world, these include respect for the person, for religious or other beliefs, freedom, private and family life, property, etc. Protecting these fundamental values and social cohesion becomes a matter of general interest. The purpose and aim of the laws and institutions of each country are to strengthen that general interest. It is reasonable to ask whether this scheme can be transferred onto an international, that is, a global plane, since globalisation inevitably leads to it.
The logical answer is that it should be so: the two fundamental sectors of human society are now open to the whole world: a) communication, and thus a large part of civilisation, and b) the economy, with all the consequences for the social and political structures of different countries. At the same time, there is no country or society in the world that could impose on the "planetary village" mechanisms ensuring respect for fundamental values. There may be a place for the legal regulation of this problem, but above all there is a place for pointing out the necessity and meaning of ethics.
The classical theory of international law regards international treaties as, in a way, a limitation on states. We can take as an example four world conventions, which were to be accepted by almost all countries of the world: the Vienna Convention of 1985 and the Montreal Protocol of 1987 on the protection of the stratospheric ozone layer; the Basel Convention of 1989, which placed the export and import of hazardous waste under strict control; and the Rio de Janeiro Convention on biological diversity together with its counterpart, the Convention on Climate Change. The last one took place in Paris in 2015. 5 How do we relate this form of globalisation to other types of mondialisation, such as communications and economics? The answer should be: any globalisation should be motivated by the general interest of humanity and should strive towards that aim. 6 In practice, this means that the international protection of human rights and environmental rights needs not only jurisdictional (legal–political) but also, above all, ethical standards. Without them, a conflict between different types of globalisation could become damaging, almost dangerous. The very idea of the global village that has so well explained the phenomenon of mondialisation can assist in solving the problems that need to be addressed. One of the features of each village is the intense connection among its inhabitants. That phenomenon is now present globally, which is the essence of globalisation. That implies a global responsibility that must be implemented on the one hand by communities and on the other by individuals, especially those who serve in the service of the community: politicians. The crucial question arises, "How to define the responsibility of the one and the others?" It is evident that at the top of the pyramid there are major planetary problems whose solutions require the cooperation of all nations and countries. The straightforward phrase "Think globally, act locally" expresses the rule of the fundamental game of the global world and its diversity: a possible ethic of sustainable development.
Simultaneous alcohol and cannabis expectancies predict simultaneous use
Background: Simultaneous use of alcohol and cannabis predicts increased negative consequences for users beyond individual or even concurrent use of the two drugs. Given the widespread use of the drugs and common simultaneous consumption, problems unique to simultaneous use may bear important implications for many substance users. Cognitive expectancies offer a template for future drug use behavior based on previous drug experiences, accurately predicting future use and problems. Studies reveal similar mechanisms underlying both alcohol and cannabis expectancies, but little research examines simultaneous expectancies for alcohol and cannabis use. Whereas research has demonstrated unique outcomes associated with simultaneous alcohol and cannabis use, this study hypothesized that unique cognitive expectancies may underlie simultaneous alcohol and cannabis use. Results: This study examined a sample of 2600 (66% male; 34% female) Internet survey respondents solicited through advertisements with online cannabis-related organizations. The study employed known measures of drug use and expectancies, as well as a new measure of simultaneous drug use expectancies. Expectancies for simultaneous use of alcohol and cannabis predicted simultaneous use over and above expectancies for each drug individually. Discussion: Simultaneous expectancies may provide meaningful information not available with individual drug expectancies. These findings bear potential implications on the assessment and treatment of substance abuse problems, as well as researcher conceptualizations of drug expectancies. Policies directing the treatment of substance abuse and its funding ought to give unique consideration to simultaneous drug use and its cognitive underlying factors.
Background
Expectancies represent intervening cognitive variables that connect memory and behavior, and reflect knowledge of a relationship between events and objects [1]. Bolles [2] identified expectancies as environmental stimulus-outcome contingencies that directly affect behavior, and regarded expectancies as synonymous with the concept of association. Expectancies represent individual learning associations made between stimuli, individual responses and resulting outcomes. Heavy drug consumers may be more likely to activate positive expectancies for drug effects [e.g. [3]].
While researchers have investigated the cognitive mechanisms underlying single substance use [e.g. [4]], a relative dearth of research exists on the cognitive mechanisms informing simultaneous polydrug use or their role in drug abuse. Research suggests that similar mechanisms may underlie expectancies for both alcohol and cannabis use. Stacy [5] demonstrated that similar memory association mechanisms underlie both alcohol and cannabis use. Boys and Marsden [6] found that polysubstance users' expectations regarding relief of negative mood states increased the likelihood to use drugs such as cannabis and alcohol simultaneously. Yet no studies assess simultaneous drug use expectancies, despite findings that simultaneous use of cannabis and alcohol may yield different outcomes than use of cannabis alone [e.g. [6]].
Researchers identify cannabis and alcohol as the two substances most frequently used simultaneously [7,8]. The simultaneous use of alcohol and cannabis may introduce greater problems than the use of both drugs independently or even concurrently. Staines et al [9] reported a positive relationship between problems with alcohol and simultaneous use of illicit drugs such as cannabis. Other studies link simultaneous cannabis and alcohol use with increased negative consequences such as psychological distress, psychopathology [10] and substance dependence [9].
Earleywine and Newcomb [7] distinguished between concurrent drug use (e.g. use on separate occasions) and simultaneous use of multiple drugs, and found the two types of use form two distinct constructs. Simultaneous polydrug users may experience greater psychological distress and other negative consequences associated with drug use compared to other substance users [11,12]. Smucker Barnwell et al [13] found that interactions between measures of cannabis use and alcohol consumption significantly predicted cannabis dependence among frequent cannabis users. Stenbacka [14] found that simultaneous use of alcohol and cannabis among adolescents predicted later problems with either drug. Alcohol use and abuse serve as primary predictors of cannabis dependence [15][16][17], and cannabis dependence covaries more with alcohol dependence than with most other psychiatric diagnoses [18]. Similarly, several studies find that cannabis use frequently occurs among individuals with alcohol dependence diagnoses [18]. Heavy alcohol consumption among cannabis users may result in more problematic cannabis use, less successful cessation and more resulting negative life consequences [19].
The present research also sought to understand the etiology underlying the increased risks associated with simultaneous polydrug use. The interaction of alcohol and the chemical component known to cause the majority of cannabis' intoxicating effects, delta-9 tetrahydrocannabinol (THC), may offer a pharmacological explanation. Lukas and Orozco [20] found that smoking cannabis while also consuming ethanol lead to increased THC levels in the blood and more intensely positive reported subjective effects. Individuals who consume both drugs simultaneously may experience higher absorption rates of THC, increased positive effects of the drug, and, perhaps, greater cannabis or even alcohol dependence symptoms. Outside of pharmacological explanations, researchers posit that expectancies play a major role in the prediction of substance use and abuse [e.g. [5]].
Whereas different behavioral outcomes are predicted for simultaneous use of cannabis and alcohol, it seems plausible that unique cognitive expectancies underlie these outcomes. Individuals using several drugs simultaneously seem likely to possess different expectancies than an individual using a single substance. Perhaps these individuals possess unique cognitive templates that inform simultaneous polydrug use. Beyond individual drug expectancies, this study sought to identify unique cognitive constructs motivating simultaneous drug use. Perhaps simultaneous alcohol and cannabis use possesses a distinct and unique set of expectancies, predicting simultaneous use beyond individual drug expectancies. This study examined whether simultaneous alcohol and cannabis use expectancies more accurately predicted simultaneous cannabis and alcohol use than single substance expectancies alone. Since little research exists on the identification and measurement of simultaneous drug use expectancies, this project offers a unique contribution to the existing literature on substance use and its motivators. Researchers and policy makers alike stand to benefit from greater understanding of usual patterns of drug use, their precipitating factors and, by extension, pathways to intervention and even prevention.
Members of the Marijuana Policy Project (MPP) and National Organization for the Reform of Marijuana Laws (NORML) listserves received an email requesting survey participation. The study focused on individuals consuming cannabis and alcohol at least once a month to ensure sufficient information regarding target behaviors (N = 2637). Two thirds of respondents were male (66%); one third was female (34%). Respondents ranged in age from 13 to 86. Their mean age was 34.0 years (SD = 13.3). The average respondent first tried cannabis at the age of 16.0 years (SD = 3.9). The group was primarily of European descent, with other ethnicities ranking far behind. The majority of respondents possessed a Bachelor's degree, Associate's degree, or some college credit completion. The average participant earned an annual income of less than $40,000 (See Table 1). All participants completed an online consent form in accordance with university ethical procedures. To ensure participant confidentiality, respondents were not required to provide any identifying information. Those participants who wished to enter into a raffle for one of several $100 gift certificates could provide an email address. Email addresses were immediately disconnected from all participant data to ensure respondent anonymity.
Alcohol consumption measures
The average respondent consumed 44.2 drinks per month (SD = 55.4) when consuming any alcohol. This score was derived from a measure inquiring about average incidents of drinking per month and usual numbers of drinks per incident. When drinking alcohol and consuming cannabis together within three hours of each other (e.g. only alcohol and cannabis), the average participant consumed fewer (M = 28.7; SD = 49.4) drinks per month. In contrast, when drinking only alcohol (e.g. with no other substances), respondents reported drinking still fewer (M = 20.5; SD = 42.3) drinks per month (See Table 2). Respondents reported different amounts of alcohol when drinking only alcohol, alcohol with cannabis and alcohol with any other substance (See Table 3).
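The quantity-frequency score behind these figures is simple arithmetic; the toy helper below (hypothetical names, not the survey instrument itself) shows the computation.

```python
def drinks_per_month(drinking_occasions_per_month, usual_drinks_per_occasion):
    """Quantity-frequency score as described in the text: average
    drinking occasions per month times usual drinks per occasion."""
    return drinking_occasions_per_month * usual_drinks_per_occasion

# e.g. drinking on 10 occasions a month, about 4 drinks each time
print(drinks_per_month(10, 4))   # 40, of the same order as the sample mean of 44.2
```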
Simultaneous consumption measures
Participants then completed the Simultaneous Polydrug Use Questionnaire [DUQ; 21]. The DUQ assessed frequency of use of alcohol/drug and drug/drug combinations among seven classes of drugs: alcohol, cannabis, cocaine, opiates, sedatives, stimulants and hallucinogens. The use of this measure permitted the researcher to examine patterns of simultaneous drug use outside of cannabis and alcohol combinations. Simultaneous use was defined as use of two or more substances occurring within three hours of each other. Approximately one quarter of respondents reported consuming drug combinations other than alcohol and cannabis. 1 Removal of participants using drug combinations other than alcohol and cannabis did not alter findings. The majority of these respondents reported consuming marijuana with hallucinogens at least once a month in the past four months. Other drug combinations were significantly less common (See Table 4).
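The three-hour criterion for simultaneous use can be made concrete with a short sketch. The event-log data structure below is hypothetical (the DUQ collects self-reported frequencies rather than timestamps); the code only illustrates the definition.

```python
from datetime import datetime, timedelta

def simultaneous_pairs(use_events, window_hours=3):
    """Return the drug pairs used 'simultaneously' under the definition
    in the text: two different substances consumed within three hours
    of each other. `use_events` is a list of (timestamp, drug) tuples."""
    window = timedelta(hours=window_hours)
    pairs = set()
    for i, (t1, d1) in enumerate(use_events):
        for t2, d2 in use_events[i + 1:]:
            if d1 != d2 and abs(t1 - t2) <= window:
                pairs.add(tuple(sorted((d1, d2))))
    return pairs

events = [(datetime(2006, 5, 1, 20, 0), "alcohol"),
          (datetime(2006, 5, 1, 21, 30), "cannabis"),
          (datetime(2006, 5, 2, 9, 0), "alcohol")]
print(simultaneous_pairs(events))   # {('alcohol', 'cannabis')}
```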
Explicit measures of alcohol expectancies
Participants completed subscales of the Alcohol Expectancy Questionnaire [AEQ; 22], a widely used 120 item self-report questionnaire for measuring alcohol expectancies. The scale demonstrates predictive and concurrent validity and is the most commonly used measure for alcohol expectancies. The AEQ consists of six subscales: global positive changes, sexual enhancement, social and physical pleasure, social assertiveness, relaxation, and arousal/ aggression. To reduce participant burden, the study employed only global positive changes and relaxation subscales. Consistent with the measure's intent, we altered the measure instructions to indicate that items referred to the effects of alcohol use only. The measure demonstrated high internal consistency as measured by Cronbach's alpha (α = .89). Respondents endorsed 13.7 (SD = 7.4) alcohol expectancy items out of a possible score of 37, indicating moderate expectancies regarding the positive effects of alcohol.
Explicit measures of cannabis expectancies
The Marijuana Effect Expectancy Questionnaire [MEEQ; 23] measured explicit expectancies regarding cannabis consumption. The complete measure consists of 78 true/false items. …, and summing the scores. Items not endorsed in the second administrations of the AEQ and MEEQ were not given the follow-up question (e.g. "How does alcohol/cannabis alter this effect?") and were thus not included in their respective summary scores for simultaneous expectancies. We believe that this scoring method is approximately consistent with the original expectancy measure scoring, which excludes items not endorsed from total expectancy scores. Lower scores indicated a lesser belief that simultaneous use intensified the experience, while higher numbers indicated a greater belief that simultaneous use intensified drug use experiences.
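The scoring rule that can be reconstructed from this description (only items endorsed on the second administration receive the follow-up rating of how the other drug alters the effect, and those ratings are summed) is sketched below. The item names and rating values are hypothetical.

```python
def simultaneous_expectancy_score(endorsed, alteration_ratings):
    """Sum the 'how does the other drug alter this effect' ratings for
    items endorsed on the second administration; non-endorsed items are
    excluded, mirroring the scoring described in the text. Higher totals
    indicate a stronger belief that simultaneous use intensifies effects."""
    return sum(rating for item, rating in alteration_ratings.items()
               if endorsed.get(item, False))

endorsed = {"relaxation": True, "aggression": False, "sociability": True}
ratings  = {"relaxation": 3, "aggression": 2, "sociability": 2}
print(simultaneous_expectancy_score(endorsed, ratings))   # 5
```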
After several phases of revisions, pilot group participants ultimately indicated that the measure was clear and comprehensible. The first administration of the AEQ corre-
Correlations
Large, significant correlations emerged between the various measures of alcohol consumption. Similarly, significant correlations emerged among measures of cannabis consumption. Cannabis and alcohol expectancy scores demonstrated small but significant correlations with alcohol and cannabis consumption measures (See Table 5).
Regressions
The first linear regression equation examined alcoholic drinks consumed per month when also using cannabis as the dependent variable. Lognormal transformations improved the skew of the dependent variable [24]. The skewness and kurtosis statistics of the measure were 4.3 and 27.4, respectively, prior to lognormal transformations and -.03 and -.91 afterwards. Age and gender acted as covariates. Alcohol expectancies (e.g. AEQ), cannabis expectancies (e.g. MEEQ) and simultaneous alcohol and cannabis expectancies each acted as predictors. Age and alcohol expectancy scores each demonstrated a main effect. In addition, simultaneous expectancies for alcohol and cannabis demonstrated a main effect. As predicted in study hypotheses, simultaneous expectancies predicted simultaneous alcohol and cannabis use beyond individual drug expectancies (See Table 6).
In the second linear regression equation, metric weight of cannabis consumed per month when also using alcohol served as the dependent variable. Again, lognormal transformations improved the skew of the dependent variable [24], and age and gender acted as covariates. The skewness and kurtosis statistics were 1.3 and .36, respectively, before lognormal transformations and .98 and -.52 afterwards. Cannabis expectancies, alcohol expectancies and simultaneous expectancies again acted as independent variables. Only age and simultaneous expectancy scores demonstrated main effects (See Table 7). In this equation, the unique variance accounted for by simultaneous expectancies was notably larger than that of individual drug expectancies.
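As a sketch of the kind of model described in these two paragraphs, the code below fits an ordinary least squares regression on a log-transformed outcome with age and gender as covariates and the three expectancy scores as predictors. It uses statsmodels rather than SPSS, and the dataframe, column names and simulated values are hypothetical stand-ins for the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 60, n),
    "male": rng.integers(0, 2, n),
    "aeq": rng.normal(14, 7, n),        # alcohol expectancies
    "meeq": rng.normal(20, 8, n),       # cannabis expectancies
    "sim_exp": rng.normal(30, 10, n),   # simultaneous expectancies
    "drinks_with_cannabis": rng.lognormal(2.5, 1.0, n),
})

# Lognormal transformation of the skewed outcome, then OLS with covariates
# (age, gender) and the three expectancy predictors.
df["log_drinks"] = np.log(df["drinks_with_cannabis"] + 1)
model = smf.ols("log_drinks ~ age + male + aeq + meeq + sim_exp", data=df).fit()
print(model.summary())
```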
Discussion
In this study, simultaneous expectancies predicted simultaneous drug use quantity and frequency. Simultaneous expectancies demonstrated the strongest main effect in the prediction of number of days consuming alcohol and cannabis simultaneously, as well as the number of alcoholic drinks when also consuming cannabis. Age, alcohol expectancies and simultaneous expectancies predicted amount of cannabis consumed when drinking alcohol.
Effect size and power
Given the vast sample size, a potential limitation was the clinical significance of effect size. Perhaps the large sample size provided sufficient statistical power to reveal relatively small findings with marginal clinical significance. It is plausible that the extremely large online data sample size may have resulted in the magnification of findings that bear little real-world relevance. Yet effects are frequently difficult to encounter in drug use data [e.g. [13]]. Although the effects associated with simultaneous expectancies are small and do not consistently account for the largest measures of unique variance in the regression equations, the existence of a statistically significant effect merits further examination of the potential clinical significance of these findings.
Homogeneity of socio-economic status and ethnicity
Participants were largely Caucasian, educated and of moderate socioeconomic status. Low socioeconomic status [25] and low educational attainment [26] both correlate
Sample selection biases
Participants belonged to the National Organization for the Reform of Marijuana Laws or the Marijuana Policy Project. Individuals in this group were more likely to be aligned with the organization's mission of cannabis activism. Individuals belonging to these organizations may have been more likely to report positive associations with cannabis, accounting for higher cannabis expectancies in the online data sample. Also, participants belonging to these organizations may have been less likely to report problem-inducing cognitions associated with cannabis. We know of no research that would indicate that minimizing of cannabis problems would impact reports of expectancies. Respondents continued to report considerable use. Previous studies have successfully identified alcohol and cannabis problems in similar online populations [see [13]]. Thus this potential limitation does not represent an insurmountable barrier to data interpretation. These findings may offer greatest importance for those individuals who are frequent and/or heavy users of cannabis and alcohol. As previously mentioned, a more ethnically diverse sample would have offered more generalizable findings. Still, whereas the present sample provided a cross-section of individuals who consume the two drugs and demonstrate associated problems, we contend that these data offer some generalizability for individuals who consume cannabis and alcohol simultaneously.
Measurement
Whereas this study established a new measure of simultaneous alcohol and cannabis expectancies, it is possible that this measure introduced problems to the study. High disagreement between the first and second administration of items used in the MEEQ could indicate a problem with the measure. Reordering of items (e.g. MEEQ before AEQ) could significantly alter measure scores. Inherently, the simultaneous expectancy measure was lengthy and repetitious. It is possible that some participants encountered the measure as burdensome. During pilot testing, respondents indicated that although the measure was long and repetitious, these features did not preclude accurate completion of the survey. However, it is possible that the actual study respondents encountered the measure differently than pilot test participants.
Furthermore, the current administration of the simultaneous expectancies measure is likely the first in an iterative development process. The creation of a new measure typically requires numerous administrations and intensive research into the measure validity and reliability. Further research geared toward developing a more comprehensive measure of simultaneous alcohol and cannabis expectancies could provide greater evidence regarding unique expectancies for increased drug effects.
Also, issues of self-report may have obstructed findings in these data. Although data collection was anonymous, some participants may have encountered self-presentational issues that prevented them from reporting substance use. Although past research suggested that online measures elicited greater reports of drug consumption compared to laboratory-based measures [27], respondents may have been hesitant to report drug use and especially illicit drug use. It is unclear whether those who suffer the worst negative consequences of drugs are willing to disclose all symptoms and the full extent of their drug use in an Internet context. Still, the aforementioned limitations are common to many studies on these topics [see [28]], and may not present an insurmountable barrier to interpretation of the data.
Experimental manipulation
As with any moderator that has not been manipulated experimentally, findings in this study could have arisen from drug use correlates rather than substance use itself. Personality traits, family history, genetics, or a plethora of other factors that correlate with alcohol and cannabis use may have actually served as the impetus for simultaneous alcohol and cannabis use. For example, sensation seekers may be more likely to score higher on drug expectancy measures and engage in more polydrug use. Perhaps sensation seeking, a known correlate of cannabis use [29], or its genetic correlates were the actual factors underlying simultaneous alcohol and cannabis use. Nevertheless, the idea that unique expectancies regarding simultaneous use lead to increased simultaneous polydrug use seems tenable.
Implications
Whereas the concept of simultaneous drug expectancies is a relatively new adaptation of the existing literature on expectancies, additional study could assist researchers in the development of this potentially useful research construct and measurement tool. The second administration of MEEQ items came toward the end of the study measures. Perhaps moderate correlations between first and second administrations of MEEQ items suggest participant fatigue toward the end of the study. Future studies may lessen participant burden by abbreviating the scale or dividing test administration into two sessions. Examination of other relevant subscale items from the AEQ and MEEQ may merit further exploration. The recommended procedure of reverse scoring negative items in the MEEQ may not yield items comparable to positive items on the AEQ. That is, the reverse score of a negative statement may not be the ideological analog to a positive statement. Future research may wish to explore rewriting items from the MEEQ's global scale under the supervision of the scale's authors.
Finally, although a combined expectancy score provided superior predictive powers than its component scales (e.g. alcohol alters experiences with marijuana; marijuana alters experiences with alcohol) in this study, future researchers may wish to examine these components separately. Perhaps further study will reveal different outcomes associated with the belief that one substance impacts the other to a greater extent.
Conclusion
If simultaneous expectancies offer a sound predictor of simultaneous drug use and, ultimately, problems, researchers may wish to integrate these findings into drug treatment and prevention efforts, including intervention attempts to decrease positive drug expectancies [e.g. [30,31]]. Research examining specific expectancies (e.g. simultaneous alcohol and cannabis expectancies for aggression) could explore simultaneous expectancies' capacity to predict particular simultaneous drug use consequences (e.g. drug-related violence). | v2 |
Microglia and amyloid precursor protein coordinate control of transient Candida cerebritis with memory deficits
Bloodborne infections with Candida albicans are an increasingly recognized complication of modern medicine. Here, we present a mouse model of low-grade candidemia to determine the effect of disseminated infection on cerebral function and relevant immune determinants. We show that intravenous injection of 25,000 C. albicans cells causes a highly localized cerebritis marked by the accumulation of activated microglial and astroglial cells around yeast aggregates, forming fungal-induced glial granulomas. Amyloid precursor protein accumulates within the periphery of these granulomas, while cleaved amyloid beta (Aβ) peptides accumulate around the yeast cells. CNS-localized C. albicans further activate the transcription factor NF-κB and induce production of interleukin-1β (IL-1β), IL-6, and tumor necrosis factor (TNF), and Aβ peptides enhance both phagocytic and antifungal activity from BV-2 cells. Mice infected with C. albicans display mild memory impairment that resolves with fungal clearance. Our results warrant additional studies to understand the effect of chronic cerebritis on cognitive and immune function.
Diverse environmental fungi are increasingly recognized as causal or contributory to the majority of common chronic, cutaneous inflammatory conditions such as atopic dermatitis (eczema), onychomycosis, and common mucosal inflammatory conditions such as pharyngitis/laryngitis, esophagitis, asthma, chronic rhinosinusitis, vaginosis, and colitis 1 . Cutaneous candidal disease in the form of mucocutaneous candidiasis assumes a much more invasive and destructive character in the context of immunodeficiencies 1,2 . Fungi are further implicated in diseases as diverse as rheumatoid arthritis 3 and Alzheimer's disease (AD) [4][5][6][7][8] .
In addition to their frequent involvement in mucosal and cutaneous diseases, the fungi are further emerging as major causes of invasive human diseases such as sepsis, especially in intensive care units in the context of critical illness. Candidemia and fully invasive candidiasis, mainly caused by Candida albicans and related species 9,10 , is an especially serious concern in the nosocomial setting where it has emerged as one of the leading bloodstream infections in developed countries, producing high mortality and costing >1 billion dollars annually in the United States alone 11 . Diagnosis of candidemia can be difficult, as clinical signs and symptoms are often protean and non-specific, often presenting late in the course of infection when therapy is much less likely to be effective 12 . Moreover, blood fungal cultures and fungal-based serodiagnostic approaches lack sensitivity. Thus, a better understanding of fungal, especially candidal, disease pathogenesis, diagnosis, and therapy is emerging as an essential medical challenge of the 21st century.
Unique inflammatory responses have evolved to combat fungi growing along epithelial surfaces. Careful dissection of mucosal allergic inflammatory responses has revealed that characteristic granulocytes (eosinophils), cytokines (interleukin (IL)-5 and IL- 13), and T effector cells (T helper type 2 (Th2) cells; Th17 cells) are potently fungicidal or at least are required for optimal fungal clearance at mucosal sites in vivo 13,14 .
The rising prevalence of candidemia, often nosocomially aided through intravascular instrumentation, but also occurring as a consequence of mucosal colonization 9 , raises fundamental questions regarding the physiological effect of fungal sepsis and the immune responses that are activated during disseminated disease. Fungal sepsis/hematogenous dissemination specifically does not elicit allergic responses, which instead appear to be reserved to prevent fungal dissemination from mucosal sites, and rapidly attenuate in favor of type 1 and type 17 immunity when dissemination occurs, at least in the context of hyphal fungal disease due to Aspergillus spp. [13][14][15] . In part, such fungal-immune system cross-talk involves two-way interactions with innate immune cells that specifically attenuate fungal, especially Candida, virulence, and regulate adaptive immunity 16,17 .
Under resting conditions, the brain receives a relatively large fraction of the cardiac output (14%) and hence is susceptible to invasion due to blood-borne pathogens such as Candida spp. Candida brain infections have long been recognized as the most common cause of mycotic cerebral abscess seen at autopsy, and often present as delirium in the context of chronic illness 18,19 . Delirium is commonly seen in ICU patients who are highly susceptible to candidal sepsis, but aside from the tentative association seen between central nervous system (CNS) infection with Candida spp. and AD [4][5][6][7][8] , the clinical presentation of metastatic CNS infection complicating Candida sepsis is poorly understood.
Experimentally, high-grade candidemia is lethal to mice and produces a profound cerebritis marked by dissemination of the organism throughout the cerebral cortex and induction of type one immunity with neutrophilia that is devoid of allergic character 20 . However, in many human contexts, candidemia resulting from a variety of pathologies is likely to be low-grade, involving periodic showering of the CNS and other organs with relatively few organisms that may gain vascular entry from mucosal sites 16 .
In this study, we sought to model the effect of low-grade, transient candidemia and Candida cerebritis on cerebral function and further define the major immune mechanisms involved in resolving these potentially common CNS infections. We show that hematogenously acquired C. albicans are readily able to penetrate the mouse blood brain barrier (BBB) and establish a transient cerebritis that causes short-term memory impairment. We further show that the cerebritis is characterized by a unique pathologic structure, the fungal-induced glial granuloma, that is marked by focal gliosis surrounding fungal cells and the deposition of both amyloid precursor protein (APP) and amyloid beta peptides, the latter of which promote anti-fungal immunity. These granulomas are further accompanied by increased production of the innate cytokines IL-1β, IL-6, and tumor necrosis factor (TNF), and enhanced phagocytic capacity of microglial cells. Thus, even low-grade candidemia can produce a physiologically significant brain infection.
Results
Acute model of intracranial fungal infection. To begin to understand the CNS effects of low-grade, transient fungemia, we developed a mouse model of fungal cerebritis based on the model developed by Lionakis et al. 20 . We developed this model using C. albicans as this organism is one of the most common fungi isolated from human blood 9 and is a significant cause of human CNS infection 21 . Intravenous injection of large numbers of C. albicans (e.g., 10^5-10^6 organisms) induces considerable mortality in mice 20 . To avoid this and mimic more accurately the transient and silent fungemias that are likely to occur in humans, we modified this model to include fewer organisms (2500-50,000 yeast cells). We discovered that a single injection of 25,000 viable cells of C. albicans into wild-type C57BL/6 mice produced a transient cerebral infection that was detectable 4 days post challenge, but largely cleared by day 10 (Fig. 1a, b). This degree of infection produced no fever or hypothermia (Supplementary Figure 1A), no obvious abnormal behavior, and no mortality. We found no evidence pathologically or behaviorally that C. albicans induced meningitis as a result of our intravenous challenge protocol.
To determine where in the brain the fungi dispersed after hematogenous administration, we performed immunofluorescence staining on coronal whole brain sections from infected mice 4 days post i.v. injection. We discovered multiple (~5-10/brain), discrete, roughly spherical lesions,~50-200 μm in diameter occurring in both cerebral hemispheres, but sparing the cerebellum, that consisted of the central accumulation of cells that avidly stained for periodic acid-Schiff (PAS), a general marker of polysaccharides, containing small nuclei as assessed by DAPI (Fig. 1c-e). These lesions further consisted of the focal aggregation of astrocytes, assessed as GFAP-expressing cells, and microglia, assessed as IBA1-expressing cells, surrounding the PAS-positive central cells (Fig. 1c-h). The focal astrocytosis and microgliosis consisted of a rim of aggregated cells that did not enter the central areas ( Fig. 1f-h). Additional staining by calcofluor white, which binds specifically to the fungal cell wall polysaccharide chitin 22 , confirmed that at the center of these lesions were numerous yeast cells (Fig. 1i).
In contrast to the more diffuse, disseminated CNS lesions that result with high-grade C. albicans challenges 20 , these subclinical fungal infections induced significant recruitment of cerebral monocytes, but not neutrophils as demonstrated by flow cytometry 20 (Supplementary Fig. 1B). We further did not observe the conversion of C. albicans yeasts into hyphal forms, a marker of more aggressive infection 23 , as was seen in fatal invasive Candida cerebritis, nor did we observe fungal forms or lymphocytes outside the areas of gliosis ( Fig. 1d and data not shown) 20 . Together, these observations describe a new type of CNS lesion arising in the context of subclinical fungal infections in which the organism is tightly contained in areas of gliosis. We term this novel type of focal inflammatory process due to CNS fungi a fungal-induced glial granuloma (FIGG).
FIGGs are linked to activated microglia and innate cytokines. We conducted additional studies to understand the cellular and biochemical inflammatory accompaniments of FIGGs. We first compared IBA-1 stained coronal sections of brains from mice injected with fungi or sham. Within FIGGs, we observed hypertrophic microglia that stained brightly for IBA-1 (IBA-1 high ), indicative of microglial activation and proliferation 24,25 , as compared to brain from sham-infected mice ( Supplementary Fig. 2). IBA-1 high cells were not found in any brain sections of control mice (data not shown). Enumeration of all and IBA-1 high microglia from coronal sections of different brain regions revealed increased numbers of both total and especially activated microglia, consistent with prior findings of microgliosis in association with brain inflammation (Supplementary Fig. 2) 24 .
The transcription factor nuclear factor kappa B (NF-κB), which comprises a small family of functionally distinct transcription factors, is commonly activated in immune contexts, including during fungal infections where it is required to activate effective anti-fungal immunity 26 . We assessed induction of both NF-κB messenger RNA (mRNA) and protein (p65 subunit) in total mRNA and protein extracted from brain between 4 and 14 days following i.v. challenge with 25,000 C. albicans cells (Fig. 2a-c). Relative to naive brains, we found that NF-κB p65 expression was significantly elevated at both RNA and protein levels at all timepoints examined, with the highest levels seen at day 14, several days past the point at which infection was no longer detectable (Fig. 1b). These findings were confirmed by demonstrating the downregulation of the inhibitory NF-κB subunit IκBα under the same conditions ( Supplementary Fig. 3).
Among many genes induced by NF-κB are the proinflammatory innate immune cytokines IL-1β, IL-6, and TNF 27 . These cytokines were profoundly induced in the brains of mice at most or all times examined after fungal challenge through day 14, well after the point at which fungi could no longer be cultured. Microglia are known to produce these cytokines in the context of other CNS inflammatory diseases such as AD 28,29 . We hypothesized that microglia inducibly secrete IL-1β, IL-6, and TNF in response to C. albicans.
To test this, we utilized the immortalized murine microglial cell line BV-2. Co-culturing BV-2 cells (1 × 10^5/ml) with 200 viable cells of C. albicans led to significantly enhanced secretion of IL-1β, IL-6, and TNF as compared to control-treated cells (Fig. 2g). Of note, neither whole C. albicans lysate antigen equal to 200 viable cells nor irradiated C. albicans induced any cytokine secretion, whereas purified secreted aspartic proteinases (SAPs) stimulated the production of these pro-inflammatory cytokines as previously described in a manner that required intact proteinase activity 30,31 (Fig. 2h). Thus, microglial cells upregulate the production of the type 1 cytokines IL-1β, IL-6, and TNF in the presence of viable fungal cells, possibly via SAPs.
APP is upregulated in FIGGs during C. albicans cerebritis. One of the hallmarks of chronic brain inflammation and degeneration (e.g., AD) is the presence of parenchymal plaques composed of amyloid β (Aβ) peptides that are cleaved from APP 32 . Studies have suggested that β-amyloid peptides possess anti-microbial activity 33,34 , especially against C. albicans. We sought to determine if in the context of C. albicans cerebritis brain cells increase production of APP. To address this, we first performed quantitative PCR on total RNA extracted from brains of mice at different days post infection with C. albicans (Fig. 3a). Relative to control brains, we found significantly higher expression of APP mRNA at all timepoints examined after infection out to 10 days. We also measured APP by western blot from the same brains and confirmed a progressive, more than threefold increase in APP production by day 14 following infection with C. albicans (Fig. 3b, c). To determine where in the brain enhanced production of APP was occurring, we performed fluorescent immunohistochemistry for APP on sections of brain from wild-type mice 4 days after infection with C. albicans using an antibody that only recognizes APP and not Aβ. Although APP is widely expressed in brains, we found that enhanced accumulation of APP following C. albicans challenge occurred almost exclusively around FIGGs, and primarily within the areas of gliosis (Fig. 3d), observations that were confirmed through additional side-by-side staining of comparable FIGGs from app −/− mice, in which no APP signal was detected (Fig. 3e). These results demonstrate that APP is upregulated in the brain and accumulates around FIGGs in areas of gliosis, but sparing the central areas that contain the fungi, during acute C. albicans cerebritis. Of note, the general appearance of FIGGs did not differ between wild-type and app −/− mice.
Aβ physically associates with C. albicans within FIGGs. We next addressed whether cleaved amyloid β peptides localize similarly to APP using a peptide-specific antibody. Surprisingly, we found that in contrast to the distribution of APP, amyloid β peptides were concentrated in the center of the FIGGs, presumably in direct contact with the fungal cells (Fig. 4a). To again validate the specificity of the antibody against amyloid β peptides, we also carried out side-by-side control staining for amyloid β peptides in brain sections from infected app −/− mice and again no signal was observed (Fig. 4b). We further determined by ELISA that soluble amyloid β peptides are persistently elevated in mouse brains well past fungal challenge and clearance (Fig. 4c, d).
Of note, insoluble amyloid β aggregation was not observed via thioflavin S staining in this acute infectious model (data not shown).
APP is cleaved endogenously to yield amyloid β peptides by the peptidases β secretase (BACE-1) and presenilin 1 (PS1) 35,36 . To further support the molecular link between amyloid β peptides and C. albicans cerebritis, we measured β and γ secretase levels from mouse brain homogenates and observed a significant increase in the protein levels of BACE-1 and PS1, which is a subunit of γ secretase ( Supplementary Fig. 4). Thus, in contrast to APP, which localized only to areas of gliosis, amyloid β peptides localized to fungal cells exclusively within the center of FIGGs through a C. albicans-dependent mechanism that involves the induction of β and γ secretases.
Aβ promotes anti-fungal immunity by stimulating BV-2 cells.
Previous studies have shown that Aβ peptides interact physically with C. albicans in vitro and may be directly fungistatic 33,34,37 . We first attempted to verify that Aβ peptides possess anti-fungal activity in vivo. 5xFAD mice that overexpress human APP in the cerebrum and app −/− mice were challenged i.v. with 25,000 viable cells of C. albicans and brains were removed at different days post challenge for fungal recovery. We found that clearance of C. albicans at days 4 and 7 from mouse brain was strongly impaired by the lack of APP, but conversely was markedly enhanced in 5xFAD mice (Fig. 5a). Nonetheless, all mice in this experiment achieved brain sterility by day 10 (Fig. 5a). app −/− mice further developed significant hypothermia, a sign of overwhelming infection, while demonstrating significantly impaired secretion of pro-inflammatory brain cytokines at days 4 and 7 ( Supplementary Fig. 5B). These results indicate that APP or Aβ peptides promote anti-fungal immunity at early timepoints after infection.
We next utilized an in vitro fungistasis assay 38 to determine if Aβ peptides possess anti-fungal (either fungicidal or fungistatic) activity (Fig. 5b). This assay involves the microscopic enumeration of growing C. albicans colonies in response to Aβ peptides, with or without addition of BV-2 cells in comparison to control conditions. In keeping with prior studies 34 , we initially incubated C. albicans with 50 μg/ml of mouse amyloid β peptides or scrambled control peptide to determine if Aβ peptides were directly fungistatic. In contrast to prior observations 33,34 , we found no inhibition of C. albicans growth under these assay conditions (Fig. 5c).
We next modified the fungistasis assay by adding BV-2 cells that had been pre-stimulated with Aβ peptides or control to determine if Aβ peptides can indirectly induce fungistasis through bystander cells. We found that at concentrations of 1 μM (4 μg/ml), both Aβ 1-40 and 1-42, but not scrambled peptide, aggregated tightly around yeast cells as previously described 34,37 . More importantly, both Aβ 1-40 and 1-42, but not scrambled peptide, significantly stimulated fungistasis when pre-incubated with BV-2 cells prior to the addition of yeast cells to the assay (Fig. 5d). Of note, human Aβ peptides induced similar fungistatic activity in BV-2 cells (Supplementary Fig. 6A). [Fig. 2 legend: C. albicans induces pro-inflammatory cytokines in whole brain and BV-2 cells. C57BL/6 mice were challenged i.v. with 25,000 CFU of C. albicans, and whole brains were harvested at the indicated days for analysis by RT-qPCR, western blot, or ELISA. Panels a-c, NF-κB expression by RT-qPCR and western blot (p65 subunit) with densitometry; d-f, IL-1β, IL-6, and TNF levels in whole-brain homogenates by ELISA; g-h, IL-1β, IL-6, and TNF secretion by BV-2 cells incubated with viable C. albicans, C. albicans lysate, irradiated C. albicans, or secreted aspartic proteinases with or without proteinase inhibition (n = 4, mean ± S.E.M.; two-tailed Student's t-test or one-way ANOVA with Dunnett's test). Data are representative of two independent experiments.]
To further define the mechanisms by which APP-derived peptides induce fungistasis, we first performed a supernatant transfer experiment in which BV-2 supernatants were collected after pre-stimulation with Aβ peptides and transferred to monocultures of C. albicans (Fig. 5e). We discovered that the Aβ peptide-stimulated, BV-2 cell-free supernatant was sufficient to induce significant fungistasis as compared to scrambled peptide in a manner that differed marginally from cultures having BV-2 cells present (Fig. 5d, e). These data demonstrate that whereas Aβ peptides fail to exhibit direct anti-fungal activity, they do stimulate BV-2 cells to secrete one or more soluble antifungal factor.
We confirmed this stimulatory effect of Aβ peptides on microglia by pre-incubating BV-2 cells with Aβ peptides and then washing them to remove unbound peptides before addition of C. albicans. We observed a similar fungistatic effect from these primed BV-2 cells, further supporting the ability of Aβ peptides to activate microglia to an enhanced anti-fungal state.
Aβ peptides enhance phagocytic activity of BV-2 cells. Another potential mechanism by which BV-2 cells might inhibit fungi is through phagocytosis and intracellular killing 39 . To test this possibility, we co-cultured BV-2 cells pre-stimulated with Aβ or scrambled peptide with fluorescent (mNeonGreen-expressing) C. albicans cells and determined the number of mNeonGreen + /CD11b + cells as a means of determining cell-yeast interactions. We found that cell-yeast interactions occurred in 21.9 ± 3.0% of control peptide-stimulated BV-2 cells, but that Aβ peptides stimulated significantly more such interactions (37.2 ± 2.8% and 41.0 ± 3.2%, Aβ40 versus Aβ42, respectively, p < 0.05, Fig. 6a, b).
To further characterize these events, we customized a gating strategy to orthogonally compare yeast-BV-2 cell interactions as a means of distinguishing true yeast uptake by phagocytosis (producing overlapping images) from mere cell surface association (producing non-overlapping images) (Fig. 6c). This analysis confirmed that both Aβ peptides significantly enhanced phagocytosis of C. albicans as compared to control-treated cells (scrambled peptide: 14.0 ± 2.0% overlap; Aβ 40 peptide: 25.0 ± 2.1% overlap; Aβ 42 peptide: 25.7 ± 2.4% overlap, p < 0.01; Fig. 6d). Finally, we confirmed that phagocytically active (CD68 + ) 40 microglial cells cluster densely at the center of FIGGs, in the immediate vicinity of the yeast clusters (Fig. 6e). Together, these findings indicate that in addition to enhancing secretion of soluble anti-fungal factors, Aβ peptides also promote the direct phagocytic uptake and intracellular killing of yeast cells by microglia.
Dectin-1 promotes microglial phagocytosis of C. albicans. Disease-associated microglia (DAM) are recently described, highly activated microglial cells that surround inflammatory lesions in AD and other neurodegenerative disorders 41,42 . Among many inflammatory markers, DAMs express Dectin-1/Clec7A, a pattern recognition receptor expressed by phagocytic cells that recognizes fungal β-glucan 43 . We confirmed that Dectin-1 is highly expressed on DAMs of FIGGs (Fig. 7a). To determine whether Dectin-1 is required for the Aβ peptide-enhanced phagocytosis by microglia, we assessed by flow cytometry the uptake of fluorescent C. albicans with and without addition of a blocking anti-Dectin-1 antibody. We found that Dectin-1 blockade inhibited by up to 50% the phagocytic uptake of mNeonGreen + yeast cells both at rest and after stimulation by Aβ 1-42 peptide (Fig. 7b-d). Thus, Dectin-1 expression is enhanced on DAMs associated with FIGGs and enhances the phagocytic uptake of C. albicans by microglial cells. Low-grade fungal cerebritis transiently impairs memory. We carried out well-established tests of rodent behavior using naive and C. albicans-infected wild-type mice to determine if substandard performance in any of these assays could be correlated with fungal cerebritis. We first conducted open-field tests to quantify the degree of locomotor activity and anxiety potentially related to fungal infection that could spuriously influence subsequent behavior tests (Fig. 8a-e). No significant differences were found in any of the five indices, suggesting that mice were not experiencing severe stress following infection with C. albicans, consistent with our empiric observation that mice were grossly normal following fungal infection. We next performed T-maze spontaneous alternation tests in sham and C. albicans-infected mice. Compared to sham, C. albicans-infected mice made significantly fewer alternations, and recovered as the infection cleared by day 10 (Fig. 8f). This result demonstrated that intracranial infection induces impaired working spatial memory. However, no difference in contextual fear conditioning, a form of associative learning and memory, was observed (Fig. 8g). Thus, acute, low-grade C. albicans cerebritis induces a transient, mild working memory deficit that is reversible with clearance of the infection.
Discussion
Although recognized as an increasingly common medical problem, the long-term health effects of transient candidemia are almost completely unknown. High-grade candidemia is rapidly lethal to both humans and mice 10,20,44 . Here, however, we sought to understand how low-grade candidal sepsis affects a critical target organ, the brain. We discovered that the intravenous injection of 25,000 viable C. albicans cells is surprisingly well tolerated in young, healthy mice, producing no gross abnormalities either acutely or chronically. However, a brain parenchymal infection is clearly established, albeit transiently, involving the cerebral cortices exclusive of the cerebella and meninges. Such infection further induces a robust innate immune response characterized by focal gliosis and monocytosis devoid of neutrophils or lymphocytes and marked by the production of the innate cytokines IL-1β, TNF, and IL-6, and the enhanced expression of multiple microglial activation markers that enhance phagocytic function. Such innate inflammation is ultimately successful in resolving the cerebritis, but we detected a transient decline in cerebral function that was likely due to the direct effects of the fungi and the sterilizing inflammation directed against them. These findings expand our knowledge of the CNS effects of transient candidemia and have important implications for understanding the potentially broader role of fungi in CNS disease.
Our findings demonstrate that the CNS and systemic immune and physiologic responses to low-grade candidemia (25,000 yeast cells) differ substantially from high-grade exposures (>250,000 yeast cells). Whereas high-grade candidemia was rapidly and completely fatal and accompanied by massive cytokine release in the context of CNS neutrophilia and monocytosis, low-grade disease yielded no mortality or neutrophilia. Perhaps most strikingly, rather than diffuse CNS spread of the organism through the brain cortices as was seen in high-grade disease, lowgrade candidemia yielded a strikingly focal CNS infection in which numerous organisms were collected in a unique neuropathologic structure that we term FIGGs. The size and spherical structure of FIGGs containing geographically distinct layers of inflammatory cells with the pathogen centrally located is highly reminiscent of more typical granulomas containing histiocytes and lymphocytes that have long been known to form around fungi and other organisms in non-neuronal tissues 45 . The CNS is uniquely protected from toxic and microbial challenges by the BBB, hence it is surprising that hematogenously acquired C. albicans could readily pass the BBB to proliferate in the brain parenchyma of our mice 46 . The BBB is capable of halting the spread of some bacteria and many viruses to the CNS, but the unique ability of yeast cells to penetrate endothelia in vivo and in vitro 47 , perhaps through elaboration of unique virulence factors such as proteinases and lytic peptides such as candidalysin 31,48 , suggests that the main defense of the CNS against blood-borne fungal, especially candidal, invasion is immunologic, not physical. Additional studies are required to understand the microbial receptors and fungal virulence factors that coordinate such effective protection.
In addition to efficiently sterilizing the brain following lowgrade Candida infection, the innate immune response elicited appears to also attenuate the pathogenicity of the organism. Unrestrained Candida yeast forms acquire tissue-invasive potential as marked by extension of pseudohyphae and full hyphal transformation as was seen in overwhelming CNS disease 16,20 . As we observed only yeast forms with no evidence of hyphae in the brains of our mice, we presume that the immune response to the Candida, likely including phagocytic immune cells such as microglia, precluded or reversed such pathologic transformation 49 .
As histiocytic granulomas are required for optimal eradication of pathogens in the periphery 45 , we presume that FIGGs are similarly essential to the rapid clearance of C. albicans from the CNS in our model. This is supported by our demonstration for the first time that microglia are inducibly activated into a fungicidal/fungistatic state as demonstrated by two processes, secretion of anti-fungal substances and enhanced phagocytosis. Another unique aspect of FIGGs is the geographic separation of the parent protein APP, confined to the FIGG periphery, and Aβ peptides, located centrally and physically associating with the fungi. Our data support the possibility that APP is cleaved through the activity of β and γ secretases, but also raise the intriguing possibility that APP could also be cleaved by secreted fungal proteinases. Although we could not confirm that Aβ peptides are directly candidacidal 33,34,37 , we have demonstrated that these peptides enhance microglial activity generally and antifungal activity specifically. Future studies are required to understand the microglial receptors for Aβ peptides that mediate these responses and whether Aβ peptides induce similar anti-microbial responses from other cell types. [Fig. 7 legend: Dectin-1 expression is associated with FIGGs. a Immunofluorescence staining for DAPI, IBA1, and Dectin-1 on brain sections containing FIGGs from wild-type mice challenged with C. albicans as in Fig. 2. b Representative ImageStream X flow plot of BV-2 cells treated with Aβ or Dectin-1 blocking antibody and co-cultured with mNeonGreen + C. albicans. c Percentage of mNeonGreen + cells among all CD11b + cells. d Percentage of overlap cells among all CD11b + cells using the same gating strategy as in Fig. 6; n = 4, mean ± S.E.M., one-way ANOVA with Dunnett's test for multiple comparison; data representative of two independent experiments.]
In addition to expressing Aβ peptides centrally and being comprised in part of highly activated DAMs that express CD68 and Dectin-1, FIGGs are further characterized by the presence of chitin centrally, demonstrating the presence of yeast cell aggregates. Remarkably, these are all features shared in common with senile plaques of AD 50,51 . Chitotriosidase, a mammalian enzyme that degrades chitin, which is not made by mammals, was also found to be upregulated in brains of AD patients 52 and may be a useful biomarker for AD 53 .
Moreover, fungi and fungal components have also been detected in the peripheral blood 4,8 and cerebrospinal fluid 5,53 of AD patients. More extensive analysis specifically revealed the presence of C. albicans and other fungal species in the brains of AD patients, but not in healthy control brains 4,6,7 . Allergic asthma, which we have linked to airway mycosis, a form of superficial fungal infection of the airway mucosa 54 , is epidemiologically linked to later onset dementia 55 . Our mice further developed memory deficits, another hallmark of AD, albeit a transient form that resolved with fungal clearance. AD is also associated with neuroinflammation marked by expression of NF-κB, IL-1β, IL-6, and TNF precisely as we observed 29 .
Regardless of any possible link to AD, we have shown that transient fungemia in healthy mice has important physiological consequences, including alterations in working memory. Although transient after a single episode, it is conceivable that intermittent fungal showering of the vascular space and attendant low-grade fungal cerebritis that occurs over timeframes of years could eventually lead to permanent brain damage and lasting cognitive defects. More importantly, our findings suggest that resolution of low-grade CNS fungal infections through the use of antifungals and other means might preclude or even reverse attendant cognitive decline.
For long-term fungal bloodstream showering to occur, however, a peripheral site of chronic infection is also required. C. albicans and other Candida species comprise the normal microbial flora and such commensal organisms are unlikely to disseminate hematogenously 56 . However, multiple common medical practices conspire to alter the normal fungal microbiome or the immune system, leading to pathological enlargement of the human fungal biomass, often at mucosal sites. Such practices include the overuse of antibiotics, corticosteroids, hygiene products that disrupt potentially protective mucosal biofilms, and proton pump inhibitors that neutralize candidacidal stomach acid 16,57 . Consequently, the esophagus and more distal gastrointestinal tract may become highly colonized with Candida spp., a pathological condition that leads to systemic spread of the fungi 16,58,59 that in some cases leads to symptomatic fungal infections of the lung, liver, spleen, and kidneys 58,59 . Our studies have shown that the CNS is also readily infected during low-grade candidemia, with important acute histologic and physiologic consequences.
The physiologic impact of chronic systemic fungal dissemination is not known, but unresolved infections and particulate exposures (e.g., nanoparticulate carbon black derived from cigarette smoke 60 ) more generally cause chronic inflammation and organ destruction that can be fatal if not checked. Thus, although a single low-grade challenge with C. albicans is quickly resolved and results in only transient physiologic derangement as shown here, the broader concern with chronic fungemia is diffuse end-organ injury, which in the CNS could include substantial neuronal loss and long-term, progressive cognitive impairment. Our findings thus support the creation of in vivo models that permit dissecting the impact of chronic candidemia on CNS integrity and function. Such models are likely to improve our understanding of chronic neurodegenerative conditions such as AD.
Methods
Mice. Eight-week-old C57BL/6J male and female mice (wild-type, 5xFAD and homozygous App −/− mice) were purchased from Jackson Laboratories. All mice were bred and housed at the American Association for Accreditation of Laboratory Animal Care-accredited vivarium at Baylor College of Medicine under specificpathogen-free conditions. All experimental protocols were approved by the Institutional Animal Care and Use Committee of Baylor College of Medicine, and followed federal guidelines.
Fungal isolation and maintenance. Wild-type C. albicans was isolated from airway secretions of an asthma patient as described 61 and propagated on Sabouraud's agar plates. Mucoid colonies of C. albicans were harvested after growing to 10 mm diameter, and populated in Sabouraud's broth at 37°C for 4 days. Cells were collected and dispersed in pyrogen-free phosphate buffered saline (PBS; Corning cellgro, Mediatech, Manassas, VA) passed through 40 μm nylon mesh, and washed twice with PBS by centrifugation (10,000 × g, 5 min, 4°C). Fungal cells were then suspended in PBS and aliquots frozen in liquid nitrogen at 5 × 10^7/ml. Viability after freezing (>95%) was confirmed by comparing hemacytometer-derived cell counts to CFU as determined by plating serial dilutions on Sabouraud's agar. Fungal identity was determined by standard morphology (Microcheck, Northfield, VT) as described previously 14 . Thawed, >95% viable cells were washed once, counted, and suspended in normal saline at indicated concentrations for intravenous injection.
Construction of fluorescent Candida albicans. The pENO1-NEON-NAT R plasmid contains a codon-optimized version of the NEON gene under the control of the constitutive ENO1 promoter and with a nourseothricin resistance marker (NAT R ). Codon-optimized NEON (GenScrip Piscataway 62 , NJ) was cloned into pENO1-dTomato 63 using NcoI and PacI, replacing the dTomato gene. The pENO1-NEON-NAT R plasmid was NotI-linearized within the ENO1 segment before transfection into C. albicans strain SC5314.
Intravenous injection. Viable cells of C. albicans in 100 μl normal saline were injected through the tail vein using a tuberculin syringe and 27-gauge needle. Mice were then returned to clean cages, and monitored carefully until resuming normal grooming.
Brain dissemination assay. Mice were euthanized with pentobarbital (Beuthanasia, Intervet Inc., Madison, NJ) and exsanguinated by transecting the descending aorta followed by perfusing the brain with normal saline. Brains were removed by sterilely removing the calvarium, weighed, and were put into 1 ml of sterile PBS. Brains were then homogenized, and spread directly onto Sabouraud's agar (one sample per plate). Plates were sealed with Parafilm (Pechiney Plastic Packaging, Chicago, IL) and incubated at 37°C for a maximum of 10 days. CFU were enumerated and species confirmed as described above.
Calcofluor white stain. Thirty-micrometer brain sections on slides were treated with one drop of calcofluor white stain (18909, Sigma-Aldrich) and one drop of 10% potassium hydroxide for 1 min. The slides were then examined under ultraviolet light.
Imaging. Fluorescent immunostained brain sections were first imaged using the EVOS FL Auto system to locate sites of infection, and then were imaged using a Leica laser confocal microscope.
Enumeration of brain microglia. Immunostained coronal brain sections (30 μm) were fixed on slides and stained for IBA-1. Using a low-image threshold setting, total IBA-1-positive cells were first counted, after which IBA-1 high cells were counted after raising the image acquisition threshold beyond which IBA-1 low cells were no longer visible (ImageJ, ver. 1.51J8).
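The two-threshold counting procedure above can also be expressed compactly in code. The following is a minimal Python/scikit-image sketch of the same idea, not the authors' ImageJ workflow; the threshold and minimum-object-size values are illustrative placeholders that would need tuning to the actual images.

```python
# Minimal sketch (assumed parameters): count all IBA-1-positive objects with a low
# intensity threshold, then count only brightly stained (IBA-1-high) objects with a
# higher threshold, mirroring the two-pass ImageJ counting described above.
import numpy as np
from skimage import io, measure, morphology

def count_microglia(image_path, low_thresh=0.15, high_thresh=0.45, min_area=30):
    img = io.imread(image_path, as_gray=True).astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # normalize to 0-1

    def count(mask):
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        return int(measure.label(mask).max())                  # number of connected objects

    total_iba1 = count(img > low_thresh)    # all IBA-1-positive cells
    iba1_high = count(img > high_thresh)    # only brightly stained (activated) cells
    return total_iba1, iba1_high
```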
Leukocyte isolation and single-cell suspension from brain. Control and infected mice were euthanized 4 days post infection. Mice were anesthetized using pentobarbital (Beuthanasia, Intervet Inc.), perfused with normal saline and brains were collected in 3 ml of HBSS (Gendepot, Barker, TX) containing 20% FBS, and homogenized using the plunger portion of a 5 ml syringe in six-well flat bottom plates. The homogenate was then transferred to a 15 ml tube and added 1.25 ml of 90% Percoll (GE Healthcare) in PBS. Then, the suspension was underlaid with 3 ml of 70% Percoll and centrifuged at 2450 r.p.m. (1200 × g) for 20 min at 4°C. The leukocytes at the interphase were collected, washed with HBSS, and passed through a 40 μm filter 20 .
BV-2 fungus co-culture for ELISA. BV-2 cells were originally obtained from ATCC and maintained frozen in liquid nitrogen. Cells were thawed, expanded in DMEM medium, and expression of standard microglial surface markers (TREM2, CD68, and MCM5 by real-time quantitative PCR) was confirmed. Cells were then seeded in 1 ml of DMEM (serum-free, Gendepot) in 24-well plates at 100,000 cells per well for 6 h. Live or irradiated C. albicans (200 cells/ml), C. albicans lysate (equivalent to 200 cells/ml), SAPs (1 μM), or protease activity-inhibited SAPs were then added to each well and incubated for 16 h at 37°C. Supernatants were then collected for ELISA as described above.
Preparation of C. albicans lysates. C. albicans was cultured in suspension in liquid Sabouraud's broth (100 cc), enumerated by hemacytometer and collected by centrifugation. A planetary ball mill grinder (PM100, Retsch, Haan, Germany) was used to lyse the cells 54 . Briefly, pelleted fungal cells were resuspended in 30 ml of cold PBS, decanted into cold, sterile canisters with an equal volume of zirconium grinding beads, and milled (550 r.p.m., 5 min, 3 cycles). The canisters were cooled on ice between each cycle for 5 min, the samples were removed, and centrifuged at 4000 r.p.m. (3700 × g), 30 min, 4°C in a separate centrifuge. The supernatants were transferred to a new tube and centrifuged again (8500 r.p.m./6800 × g, 30 min, 4°C), from which the supernatants were passed through 0.22 μm sterilizing filters, adjusted to a protein concentration of 6 mg/ml and distributed in 0.5 ml aliquots in sterile, 1 ml tubes for storage at −80°C.
Preparation of irradiated C. albicans. C. albicans was cultured in suspension in liquid Sabouraud's broth (100 cc), enumerated by hemacytometer and collected by centrifugation. Cells were then irradiated using a 137 Cs irradiator (3000 rad, Gamma Cell 40, MSD Nordion, Ottawa, Ontario, Canada). Inactivation of C. albicans was confirmed by absent growth on Sabouraud's agar plate.
Isolation of secreted aspartic proteinases. SAPs were isolated as previously described 65,66 . Briefly, C. albicans was grown in YPD broth (BD, Sparks, 21152) for 24 h at 26°C. Cells were removed by centrifugation (8500 r.p.m./6800 × g, 5 min, 4°C) and the supernatants containing SAPs were concentrated 25 times in a Pierce Protein Concentrator (10 kDA MWCO, #88535, Thermo Fisher Scientific, Waltham, MA). Concentrated SAPs were then purified by passage through a Pierce Strong Anion Exchange Spin Column (#90011, Thermo Fisher Scientific, Waltham, MA). 20 mM Tris/HCl (pH 6.0) was used for column binding and 2 M Tris/HCl (pH 6.0) was used for elution. SAPs were then concentrated again as described above. SAP concentration was determined using a BCA protein assay kit (Thermo Fisher Scientific).
Inhibition of secreted aspartic proteinase. SAPs were incubated with halt protease and phosphatase inhibitor cocktail (#78442, Thermo Fisher Scientific) overnight. SAPs were then washed using 25 mM Tris/HCl (pH6.0) and concentrated as stated above three times to remove excessive inhibitors before applying to BV-2 cells. Absence of proteinase activity was confirmed by Coomassie Blue proteinase assay 65,66 .
In vitro fungistasis assay. BV-2 cells were cultured in 24-well flat tissue culture plates at 12,000 cells per well for 6 h with stimulation by 1 μM Aβ peptides or scrambled peptide control (mouse Aβ: A-1007-1, A-1008-1, A-1004-1; human Aβ: A-1156-1, A-1166-1. rPeptide, Wadkinsville, GA 33 ). C. albicans were added to each well at 200 viable cells per well in a 37°C/5% CO2 incubator. FGEs were counted after 16 h. Percent of fungal growth inhibition was calculated as [(# FGE in wells containing no cells − # FGE in wells containing cells)/# FGE in wells containing no cells] × 100%. Cell-free supernatants from wells under these conditions were also transferred into new 24-well plates and 200 viable cells of C. albicans were added to each well and incubated for another 16 h. FGEs were counted and % FGE inhibition was calculated as above. In some experiments, BV-2 cells were first primed with 1 μM Aβ peptides or scrambled peptide control overnight in 24-well flat tissue culture plates at 12,000 cells per well. Cells were washed with fresh medium three times to remove Aβ peptides before adding C. albicans.
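For clarity, the percent-inhibition arithmetic described above reduces to a one-line function; the example counts below are made-up numbers, not data from the study.

```python
# Minimal sketch of the percent-inhibition calculation used in the fungistasis assay.
def percent_inhibition(fge_no_cells: int, fge_with_cells: int) -> float:
    """% inhibition = (FGE without cells - FGE with cells) / FGE without cells * 100."""
    return (fge_no_cells - fge_with_cells) / fge_no_cells * 100.0

print(percent_inhibition(fge_no_cells=180, fge_with_cells=90))  # -> 50.0
```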
Phagocytosis assay using BV-2 cells. BV-2 cells were plated at 1 × 10 6 /ml and stimulated with 1 μM Aβ peptides or scrambled peptide control for 6 h after which C. albicans (mNeonGreen) were added at 1 × 10 6 /ml. BV-2 cells were co-cultured with C. albicans for 2.5 h in a 37°C/5% CO 2 incubator, and washed with fresh medium three times. BV-2 cells were then collected using a cell scraper and stained with APC-Cy7-conjugated anti-mouse CD11b (M1/70, Biolegend). After three washes with PBS, cells were analyzed using ImageStreamX MKII (Millipore Sigma). CD11b/mNeonGreen double-positive cells were masked and imaged orthogonally to characterize fungal-BV-2 cell interactions as representing surface binding alone or true phagocytosis. Data were analyzed using FlowJo software (version 10.0.7; Treestar, Ashland, OR).
Behavior tests. Twenty-five thousand CFU or equivalent numbers of heat-inactivated cells of C. albicans were injected intravenously after which behavior tests were conducted 3 and 10 days later using different groups of mice. Open-field test, T-maze spontaneous alternation, and contextual fear conditioning were carried out in that order to minimize the effect of stress, as previously described 67 .
Open-field test. Mice were placed in an open arena (40 × 40 × 20 cm) and allowed to explore freely for 10 min while their position was continually monitored using tracking software (AnyMaze). Tracking allowed for measurement of distance traveled, speed, and position in the arena throughout the task. Time spent in the center of the arena, defined as the interior 20 × 20 cm, was then recorded 67 .
T-maze spontaneous alternation task. The apparatus was a black wooden T-maze with walls 25 cm high and each arm was 30 cm long and 9 cm wide. A removable central partition was used during the sample phase but not the test phase of each trial. Vertical sliding doors were positioned at the entrance to each goal arm. At the beginning of the sample phase, both doors were raised, and the mouse was placed at the end of the start arm facing away from the goal arms. Each mouse was allowed to make a free choice between the two goal arms; after its tail had cleared the door of the chosen arm, the door was closed, and the mouse was allowed to explore the arm for 30 s. The mouse was then returned to the end of the start arm, with the central partition removed and both guillotine doors raised, signaling the beginning of the test phase. Again, the mouse was allowed to make a free choice between the two goal arms. This sequence (trial) was repeated 10 times per day for 2 days. The percentage of alternation was averaged over the 2 days. Trials that were not completed within 90 s were terminated and disregarded during analysis 67 .
Contextual fear conditioning test. Mice were first handled for 5 min for 3 days. On the training day, after 2 min in the conditioning chamber, mice received two pairings of a tone (2,800 Hz, 85 dB, 30 s) with a co-terminating foot shock (0.7 mA, 2 s), after which they remained in the chamber for an additional minute and were then returned to their cages. At 24 h after training, mice were tested for freezing (immobility except for respiration) in response to the training context (training chamber). Freezing behavior was hand-scored at 5-s intervals by an observer blind to the genotype. The percentage of time spent freezing was taken as an index of learning and memory 67 .
Statistical analyses. Data are presented as means ± S.E. of the means. Significant differences relative to PBS-challenged mice or appropriate controls are expressed by p values of <0.05, as measured by two-tailed Student's t-test or one-way analysis of variance followed by Dunnett's test or Tukey's test for multiple comparison. Data normality was confirmed using the Shapiro-Wilk test.
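As a rough illustration of this analysis pipeline, the sketch below uses SciPy (scipy.stats.dunnett requires SciPy >= 1.11); the group values are simulated and purely illustrative, not data from the study.

```python
# Minimal sketch: Shapiro-Wilk normality check, two-tailed Student's t-test for two
# groups, and one-way ANOVA followed by Dunnett's test against a control group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10, 2, 8)                          # e.g., PBS-challenged mice
day4, day7, day14 = (rng.normal(m, 2, 8) for m in (13, 15, 12))

# Normality check on each group
for name, g in [("control", control), ("day4", day4), ("day7", day7), ("day14", day14)]:
    print(name, stats.shapiro(g).pvalue)

# Two-group comparison: two-tailed Student's t-test
print(stats.ttest_ind(control, day4))

# Multi-group comparison: one-way ANOVA, then Dunnett's test vs. control
print(stats.f_oneway(control, day4, day7, day14))
print(stats.dunnett(day4, day7, day14, control=control))
```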
Data availability
The data that support the findings of this study are available from the corresponding author upon request.
Review of International Clinical Guidelines Related to Prenatal Screening during Monochorionic Pregnancies
We conducted a search for international clinical guidelines related to prenatal screening during monochorionic pregnancies. We found 25 resources from 13 countries/regions and extracted information related to general screening as well as screening related to specific monochorionic complications, including twin-twin transfusion syndrome (TTTS), selective fetal growth restriction (SFGR), and twin anemia-polycythemia sequence (TAPS). Findings reveal universal recommendation for the early establishment of chorionicity. Near-universal recommendation was found for bi-weekly ultrasounds beginning around gestational week 16; routine TTTS and SFGR surveillance comprised of regularly assessing fetal growth, amniotic fluids, and bladder visibility; and fetal anatomical scanning between gestational weeks 18–22. Conflicting recommendation was found for nuchal translucency screening; second-trimester scanning for cervical length; routine TAPS screening; and routine umbilical artery, umbilical vein, and ductus venosus assessment. We conclude that across international agencies and organizations, clinical guidelines related to monochorionic prenatal screening vary considerably. This discord raises concerns related to equitable access to evidence-based monochorionic prenatal care; the ability to create reliable international datasets to help improve the quality of monochorionic research; and the promotion of patient safety and best monochorionic outcomes. Patients globally may benefit from the coming together of international bodies to develop inclusive universal monochorionic prenatal screening standards.
Introduction
Clinical guidelines serve to optimize the care of patients by assisting clinicians and other healthcare professionals [1]. Based on the latest and best available scientific knowledge and, where evidence is scarce, consensus opinion of the experts from the respective field, evidence-based clinical guidelines represent an important step toward the dissemination and implementation of evidence-based treatments into clinical practice [2] and can directly influence the quality of patient care [3]. A recent review examined eight international guidelines related to management of twin pregnancies and found consensus among the guidelines in the areas of (1) first trimester screening including assessment of gestational age as well as identification of chorionicity and amnionicity, (2) nuchal translucency and anatomy screenings, and (3) biweekly ultrasounds for monochorionic and every 4th week for dichorionic pregnancies [4]. Areas of disagreement among the guidelines included utility of cervical length scans to screen for preterm birth, fetal growth discordance screening, and routine performance of MCA-PSV and UA doppler at every ultrasound scan [4]. However, given the high-risk nature of monochorionic twin pregnancies and the possibility of complications [5], further attention is required to understand how national and international guidelines compare with regard to prenatal screening for monochorionic twin pregnancies. Within the topic of monochorionic pregnancy, there exist several internationally dispersed clinical guidelines related to prenatal screening.
Prenatal screening, particularly the use of ultrasonography, is imperative in monochorionic pregnancies, which have long been fraught with a mortality rate that exceeds dichorionic by over seven times [2]. In addition, the incidence of congenital anomalies in monochorionic twin pregnancy is increased by >2-fold over dichorionic twins [6] and 6 to 10-fold over singletons [7]. In monochorionic pregnancies, serial fetal ultrasound examinations are necessary to monitor for development of TTTS and TAPS, as well as SFGR, because these disorders collectively affect 15 to 20 percent of monochorionic gestations, have high morbidity and mortality, and are amenable to interventions that can reduce morbidity and mortality [8]. In monochorionic pregnancy, ultrasound is not only what determines a diagnosis (or diagnoses), but, in the case of TTTS, frequency of ultrasounds is also associated with disease severity at the time of diagnosis. For instance, research shows that women who receive less frequent than bi-weekly ultrasounds are more likely to have advanced stages of TTTS upon diagnosis [9].
In 2010, Doctors Moise and Johnson authored a groundbreaking paper entitled, "There is NO diagnosis of twins" [5]. Within their paper they argue that, from a prenatal screening perspective, monochorionic twins are fundamentally different from dichorionic twins. At that time the authors were urging the American Congress of Obstetricians and Gynecologists to establish a prenatal screening standard wherein all monochorionic twins receive bi-weekly ultrasounds beginning in gestational week 16 to provide timely detection of monochorionic complications and better intervention options [5]. Moise would go on to author another paper in 2014 where he more distinctly specified the host of prenatal screenings (umbilical artery, ductus venosus, MCA-PSV) most advantageous to monochorionic pregnancies [10]. Many of these recommendations can be found in the more recent clinical guidelines discussed here.
The research surrounding monochorionic pregnancy, its associated disorders, and their treatments frequently updates and changes, therefore influencing prenatal screening recommendations. The purpose of this study was to review current international clinical guidelines related to prenatal screening during monochorionic pregnancies. Specifically, to evaluate where they converge and diverge.
Search Strategy
Between June through October 2020, we conducted a search for international clinical guidelines related to prenatal screening during monochorionic pregnancies (see Table 1). We performed searches of databases focused on international guidelines as well as published literature. In order to keep the search broad, we used only general keywords in the searches, including combinations of "twin pregnancy" "or" "monochorionic pregnancy" "or" "multiple pregnancy" with the keyword "guideline." We also reviewed the websites of agencies responsible for guideline creation as well as professional societies related to the management of monochorionic pregnancies.
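As an illustration of how the keyword combinations described above could be assembled, the short sketch below builds the query strings programmatically; the exact syntax accepted by each database differed, so this is a simplified stand-in rather than the actual search strings used.

```python
# Minimal sketch: generate the broad keyword combinations described in the search
# strategy ("twin pregnancy" OR "monochorionic pregnancy" OR "multiple pregnancy"
# combined with "guideline").
from itertools import product

population_terms = ['"twin pregnancy"', '"monochorionic pregnancy"', '"multiple pregnancy"']
topic_terms = ['"guideline"']

for p, t in product(population_terms, topic_terms):
    print(f"{p} AND {t}")

# Broad form combining the population terms with OR:
print("(" + " OR ".join(population_terms) + ") AND " + topic_terms[0])
```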
Criteria for Inclusion
To be included in this review, content must provide clinical guidance related to prenatal screening or surveillance during monochorionic pregnancies. Guidelines could be related to screening (1) during monochorionic pregnancies in general, or (2) for specific complications that typically only occur during monochorionic pregnancies (e.g., TTTS, TAPS, SFGR). Given the quickly advancing nature of research and practice in this field, we limited the search to content published within the last 10 years. Additionally, because we are interested in comparing international guidelines, we included guidelines regardless of language and used online translation software to review titles and abstracts/summaries. We also used online translation software to review full-text materials, and this translation was verified with human translators if the guideline was included in this review.
Criteria for Exclusion
Material was excluded from the review if content was: primary research study, a summary of existing guidelines, letter to editor or commentary, systematic review without resulting guidelines, a repeated publication of guideline in alternate journal or language, or prior version of an updated guideline.
Selection
Two reviewers were responsible for examining the abstracts or summaries of all the references gathered from the database reviews. Summaries published in other languages were translated to English using an online translator. Full-text versions were obtained for sources that passed the first round of review, with an online translator once again used for languages other than English. Two reviewers examined the full-text to ensure fit with inclusion criteria.
Data Extraction
We extracted data from the guidelines based on (1) information related to the guideline (e.g., country, most recent update, type of source), (2) recommendations related to screening during uncomplicated monochorionic pregnancies (e.g., first-trimester screening, umbilical cord insertions, nuchal translucency), (3) recommendations related to screening for signs of specific complications including SFGR, TTTS, and TAPS. We determined topics for extraction based on a review of the literature as well as review and approval from subject matter experts. One reviewer extracted data from each guideline, and a second reviewer assessed and updated, as needed, the work of the first reviewer.
Description of Sources
Our search resulted in a total of 621 titles from all databases and sources, condensed to 554 when duplicates were removed. A total of 78 materials remained after conducting the title review, which reduced to 55 after the abstract review. Finally, after completion of the full-text review, 25 guidelines from 13 different regions/countries were retained (see Tables 2 and 3) [8,. Two organizations (NAFTNET, UpToDate) published multiple guidelines on different aspects of monochorionic screening, and we presented their information together. The United States had the most guidelines produced by six unique organizations/authors, followed by the United Kingdom and international associations/authors (three each). The median year of guideline publication/update was 2016 ranging from 2011 to 2020. Most publications were the product of organizations or societies (88%); however, three were produced by independent authors.
Universal (or Near) Screening Recommendations
Only the establishment of chorionicity and amnionicity, often with the caveat of identification as early as possible, was universally mentioned in all sources. Nearly all resources recommended biweekly ultrasound scans starting around gestational week 16. This included checking for SFGR and TTTS complications by regularly assessing growth, amniotic fluids, and bladder visibility. Most resources also reported the importance of conducting the fetal anatomical scan between gestational weeks 18 and 22, given the higher incidence of congenital anomalies among monochorionic twins.
Conflicting Recommendations
While the majority of resources also mentioned nuchal translucency (NT) screenings (Table 2), some noted that NT discordance might be interpreted as an early sign of SFGR and/or TTTS, which could complicate the typical interpretation of NT screening results [12,17,23,32,33]. However, others commented that early NT discordance is not considered predictive of TTTS and should not be treated as such [26,27]. Again, noting the increased frequency of congenital anomalies among monochorionic twins, some guidelines reported that NT discordance may indicate a chromosomal anomaly.
Another identified conflicting recommendation was related to second-trimester screening for cervical length. Some references reported universal screening should be conducted [15,17,20,25,28,34] around week 20 [25]. One guideline stated that evidence is inconclusive regarding this screening but still endorsed performing this scan [13] while another reported that it may be informative in the presence of preexisting risk factors [31]. Others recommended against universal screening [14,19,26,27,29] indicating that there is no evidence of effective intervention to prevent preterm birth in twins. Some guidelines further suggested that this should not be conducted in cases complicated by TTTS [32] or in either asymptomatic or symptomatic women [33].
Approximately half (n = 13, 52%) of the resources recommended routine screening for the complication of TAPS, via MCA-PSV Doppler, as part of the biweekly ultrasound regimen [8,11,12,[15][16][17]20,23,25,[28][29][30]33]. MCA-PSV multiples of the normal median (MoM) values of <1.0 and >1.5 [17,20,25,27], indicating risk of TAPS, were generally reported, although some reported looking for the larger MoM range of <0.8 and >1.5 [8,11,20,29,31]. Recommendations for when to start screening for TAPS differed as well, ranging from 16 weeks to 28 weeks [8,12,26,29,30]. Several sources reported that screening for TAPS should only occur post-TTTS laser surgery [31] or if other complications are occurring [26,27] and should not be part of routine screening. Some resources reported that screening was controversial [29] and could not make a statement for or against TAPS screening [22] or the use of calculation of MCA-PSV MoM [23]. However, still other resources indicated that screening for TAPS at any time using MCA-PSV Doppler has not been shown to improve outcomes and therefore could not be recommended [32].
Table notes: empty cells indicate the screening topic was not mentioned; the remaining symbols indicate routine screening recommended, screening performed if a complication is suspected (!), screening not recommended, and mixed evidence (?). * Guideline is topic or complication specific; the topic may be out of scope of the guideline. ** Multiple guidelines provided by the same organization are presented together.
We noted that the recency of a publication or update appears related to its recommendations (Table 4). Specifically, more recent publications promoted screening for certain topics, while older publications either did not mention screening, recommended it only when complications were present, or recommended against it.
Discussion
The main purpose of this study was to review international clinical guidelines related to prenatal screening during monochorionic pregnancies. Clinical guidelines can directly influence the quality of patient care [3], and so we sought to determine where these monochorionic-specific guidelines converge and diverge.
A comprehensive search for international clinical guidelines related to prenatal screening during monochorionic pregnancies was performed. Guidelines were restricted to publication within the last 10 years and could be related to screening (1) during monochorionic pregnancies in general, or (2) for specific complications that typically only occur during monochorionic pregnancies. Guidelines were included regardless of language, and online translation software was used to review titles and abstracts/summaries; human translators were used for guidelines included in the final review. Two independent reviewers analyzed and extracted relevant content from each included guideline.
Our findings were similar to previous research in terms of general areas of guideline agreement and disagreement [4]. Findings reveal a universal recommendation for the early establishment of chorionicity. Near-universal recommendation was found for bi-weekly ultrasounds beginning around gestational week 16; routine TTTS and SFGR surveillance comprised of regularly assessing fetal growth, amniotic fluids, and bladder visibility; and fetal anatomical scanning between gestational weeks 18-22. Conflicting recommendation was found for nuchal translucency (NT) screening; second trimester scanning for cervical length; routine TAPS screening; and routine umbilical artery, umbilical vein and ductus venosus assessment.
Areas of divergence amongst the guidelines are not entirely surprising given that this is a topic with many quickly developing advancements. For instance, TAPS only became a recognized condition in 2007 [35], and only in the past 30 years, with the advent of fetoscopic laser ablation surgery and other treatments, has TTTS no longer been associated with an 80-100% mortality rate [36]. While the median year of guideline publication was 2016, some had not been updated since 2011. How recently a clinical guideline had been published or updated became particularly influential when reviewing the conflicting recommendations. A clear trend was observed, with more recent clinical guidelines recommending a given prenatal screening. Guidelines need to be updated quickly to remain consistent with the latest available research [37], and even the most recently updated publications failed to mention current opportunities for improved screening. For example, recent research reveals other markers for TAPS that have been recorded on ultrasound, including a starry-sky liver in recipients [38], an echogenic placenta in donors [39,40], and cardiomegaly in donors [41]. However, despite at least 86% of TAPS cases showing at least one of these markers [41], no guidelines yet mention these TAPS signs.
Aside from more obvious concerns related to patient care and prenatal outcomes, inconsistencies within monochorionic prenatal screening recommendations directly limit monochorionic research efforts. That is, when studying something as rare as monochorionicity, and the even-rarer associated disorders (TTTS, TAPS, SFGR), the ability to use data from outside a given geographic location becomes vital to the compilation of large, reliable datasets. The ability to collaborate internationally to improve our understanding of monochorionic disorders and the efficacy of their potential treatments should be considered an emergent priority. Previous authors have also made note of this valuable opportunity, stating that international multicenter cooperation can improve knowledge and serve as a base for future trials in MC twins with rare conditions [42]. Consistency in clinical guidelines will help shape clinical practice, and consistency in clinical practice related to prenatal screening will help produce more reliable data that can be used to better understand monochorionicity, ultimately improving outcomes.
Limitations and Future Research
This review is not without limitations. While we noted approximately half of the resources used GRADE or a similar system to evaluate the quality of evidence associated with their recommendations, we did not compare across the guidelines in terms of their level of evidence for each recommendation. Future research may explore a limited number of topics by level of evidence, particularly those which are newer and/or have mixed evidence. Additionally, most of the resources were limited to Western and English language guidelines and resources. We were unable to locate results for guidelines in several large regions including Africa, most of Asia, Latin America, and the Middle East. Since we used English-powered academic search engines (e.g., PubMed), our inability to find these guidelines likely represents a limitation of the search tools we used as well as our search methods, rather than a lack of guidelines. Future searches should incorporate collaborators across additional regions who can expand this search and explore differences in surveillance for these pregnancies.
The level of inconsistency amongst the clinical guidelines included for review is notable, especially given that most of the countries represented are high-income countries with sufficiently similar capacity to provide comprehensive, evidence-based prenatal screening. We suspect that if we had been able to include guidelines from low- and middle-income countries, we would observe even more inconsistency, albeit for more varied reasons.
Not all guidelines are created equally, and we recognize that there are limitations created by economic circumstances, access to educational resources, and the geographic/population distribution of some countries. It is essential to understand that these factors can have a direct impact on prenatal screening recommendations; however, overseeing bodies should take steps to ensure the highest possible standard of care is recommended.
Our results strongly suggest that the first step to doing this is simply frequent review and keeping clinical guidelines current with emerging evidence.
In some countries, such as the United States, insurance issues subvert the provision of correct screening protocols. In this case, the role of clinical guidelines to establish a base level of expected prenatal care becomes even more important.
We recognize that guidelines are not an absolute standard of care, but rather the minimum standard of care as established by overseeing bodies using the available evidence and resources. In addition to establishing clinical guidelines, overseeing bodies should also be ensuring that their members provide this minimum.
Further, we did not include in our review surveillance for other complications of monochorionic twin pregnancies such as higher-order multiples, monochorionic-monoamniotic pregnancies, TRAP, or conjoined fetuses. Future research should explore these other, less common complications associated with monochorionic twin pregnancies. Finally, this review only examined surveillance and did not include treatment, particularly for complicated monochorionic pregnancies and future research should explore these topics.
Conclusions
We conclude that across international agencies and organizations, clinical guidelines related to monochorionic prenatal screening vary considerably. In every instance wherein conflicting screening recommendations were observed, the median year of publication was higher for clinical guidelines that included the given prenatal screening and lower for those that did not, highlighting the important role emerging evidence plays in the development of clinical guidelines. The observed inconsistencies raise concerns related to equitable access to evidence-based monochorionic prenatal care; ability to create reliable international datasets to help improve the quality of monochorionic research; and the promotion of patient safety and best monochorionic outcomes. Patients globally may benefit from the coming together of international bodies to develop inclusive universal monochorionic prenatal screening standards. | v2 |
2022-11-20T02:17:27.024Z | 2022-11-01T00:00:00.000Z | 253672799 | s2orc/train | Impact of COVID-19 Vaccination on Seroprevalence of SARS-CoV-2 among the Health Care Workers in a Tertiary Care Centre, South India
Global vaccine development efforts have been accelerated in response to the devastating COVID-19 pandemic. The study aims to determine the seroprevalence of SARS-CoV-2 IgG antibodies among vaccine-naïve healthcare workers and to describe the impact of the vaccination roll-out on COVID-19 antibody prevalence among the health care workers of a tertiary care centre in South India. Serum samples collected from vaccinated and unvaccinated health care workers between January 2021 and April 2021 were subjected to COVID-19 IgG ELISA, and adverse effects after the first and second doses of the Covishield vaccine were recorded. The vaccinated group was followed for a COVID-19 breakthrough infection for a period of 6 months. Among the recruited HCW, 156 and 157 participants were from the vaccinated and unvaccinated group, respectively. The seroprevalence (COVID-19 IgG ELISA) among the vaccinated and unvaccinated Health Care Workers (HCW) was 91.7% and 38.2%, respectively, which is statistically significant. Systemic and local side-effects after Covishield vaccination occurred at lower frequencies than reported in phase 3 trials. Since the COVID-19 vaccine rollout commenced in our tertiary care hospital, seropositivity for COVID-19 IgG has risen dramatically and clearly shows trends in vaccine-induced antibodies among the health care workers.
Introduction
COVID-19, a novel viral disease caused by SARS-CoV-2, was first identified in Wuhan, China, during the investigation of a cluster of pneumonia cases of unknown origin in December 2019 [1,2]. SARS-CoV-2 rapidly spread worldwide and still poses a major challenge and threat to public health and healthcare systems [3]. Globally, during this study period, the WHO reported 364,191,494 confirmed cases of COVID-19, including 5,631,457 deaths [4,5]. Healthcare workers (HCW), including doctors, nurses and other paramedical staff, are the leading frontline personnel of a medical health care system. Due to their prolonged period of exposure, HCW are the most vulnerable cohort, at a higher risk of COVID-19 infection compared to the general population [6]. Infected HCW may also pose a risk to patients, family members and the community. Therefore, the safety of HCW is essential to safeguard continuous patient care. WHO reports have documented that, by 2020, at least 90,000 healthcare workers had been infected with COVID-19 [7].
Serological tests can provide additional information on SARS-CoV-2 infection, as an IgG antibody response forms following infection [8]. Antibody tests are helpful for detecting previous infection when measured two weeks after the onset of symptoms, but the duration of elevated antibody levels remains unknown. Studies on antibodies against SARS-CoV-2 are important, as these antibodies can decrease the number of virions able to infect ACE-2 receptor-expressing cells. The World Health Organization (WHO) has approved the performance of serosurveys in order to estimate the extent of COVID-19 infection in a population group and understand the disease dynamics of COVID-19 transmission [9].
India introduced a mass COVID-19 vaccination programme with two candidate vaccines (Covishield and Covaxin) from 16 January 2021, after Emergency Use Approval [10]. In many countries, Health Care Workers (HCW) were among the first to be vaccinated. All the HCW in our tertiary care hospital completed two doses of the Covishield vaccine by May 2021. The Phase III data for Covishield from randomized clinical trials (RCTs) show that the vaccine was safe and effective [11]. However, there is still a paucity of information as to the level of immune response this novel vaccine elicits in the community, both at a humoral and a cellular level.
We undertook this study to determine the seroprevalence of IgG antibodies against SARS-CoV-2 among vaccine-naïve health care workers in our tertiary care hospital and to further describe the impact of the vaccination roll-out on COVID-19 antibody prevalence among HCW. The study was also designed to track the adverse effects of the Covishield vaccine after the first and second doses.
Material and Methods
This cross-sectional serosurvey was performed among the health care workers in a tertiary care hospital in Tamil Nadu, South India. The serosurvey was conducted at two different time periods: first, during January 2021, before the initiation of COVID-19 vaccination of HCW by the Government of Tamil Nadu, and second, during April 2021, when all the health care workers in our hospital had been given the second dose of the COVID-19 vaccine. Approval of the Institutional Human Ethics Committee of Panimalar Medical College Hospital & Research Institute (PMCHRI-IHEC) was obtained prior to the start of the study (Approval Number: PMCH&RI/IHEC/2020/029; dated: 30.12.2020). Following the approval and clearance from the Institutional Human Ethics Committee (PMCHRI-IHEC), eligible individuals were included in the study. The study was conducted according to the Declaration of Helsinki, as it involves human subjects.
Individuals who agreed to participate answered an interview-based structured questionnaire after providing written informed consent. The questionnaire comprised questions relating to socio-demographic variables, including age, gender and respiratory symptoms or fever in the 6 months prior to enrolment in the study, hospitalization for COVID-19 since March 2020 and usage of masks in the workplace. In addition, the vaccinated health care workers recruited during April 2021 were asked to report the adverse effects experienced within 48-72 hours and after 7 days of the first and second doses of the Covishield vaccination. Both systemic and local effects of the Covishield vaccination were taken into account. After informed consent, a 5 mL blood sample was drawn from each participant and transported to the laboratory immediately, where it was centrifuged. Serum samples were stored at −70 °C until IgG ELISA testing was carried out, as illustrated in the study flow diagram (Figure 1). SARS-CoV-2 serological testing was performed using the SARS-CoV-2 ELISA IgG assay (Euroimmun, Lübeck, Germany) in an automated analyzer targeting the S1 domain, including the receptor-binding domain, which detects the presence of IgG antibodies against SARS-CoV-2 S proteins in human serum. Results are expressed as a ratio, calculated by dividing the optical densities of the sample by those of an internal calibrator provided with the test kit. The cut-off for samples to be considered positive was ≥ 1.1. The sensitivity and specificity of the SARS-CoV-2 IgG ELISA kit were found to be 95% and 96.2%, respectively [12].
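To make the ratio-based classification concrete, the snippet below sketches the cut-off logic described above; the sample optical densities and the calibrator value are hypothetical placeholders, not data from this study.

```python
# Hedged sketch: classify ELISA results from the ratio of sample OD to calibrator OD.
# The cut-off of 1.1 follows the text above; the OD values below are made-up examples.
SAMPLE_ODS = [0.35, 1.42, 2.87]      # hypothetical optical densities of three sera
CALIBRATOR_OD = 1.18                 # hypothetical internal calibrator OD

def elisa_ratio(sample_od: float, calibrator_od: float) -> float:
    """Ratio of sample OD to calibrator OD, as defined by the assay."""
    return sample_od / calibrator_od

for od in SAMPLE_ODS:
    ratio = elisa_ratio(od, CALIBRATOR_OD)
    verdict = "positive" if ratio >= 1.1 else "negative"
    print(f"OD {od:.2f} -> ratio {ratio:.2f} -> {verdict}")
```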
Subsequently, the vaccinated group was followed for a COVID-19 breakthrough infection for a period of 6 months. They were grouped into asymptomatic; symptomatic but not RT-PCR proven; and symptomatic, RT-PCR proven. We defined a breakthrough infection as a COVID-19 infection that was contracted on or after the 14th day of vaccination.
Statistical Analysis
Data were analyzed using STATA 15.0 (Stata Corp, College Station, TX, USA). Values were expressed as a median, quartiles, frequency and percentages to understand the nature of the data. A chi-square test was used to assess the association between the vaccination status and the profile of the participants. A nonparametric Mann-Whitney U test was used to identify the significance of the ELISA values observed between the vaccinated and unvaccinated population, also used in other subgroup comparison analysis. An upset plot was used to present the occurrence of the symptoms after the 1st and 2nd dose of the COVID-19 vaccination. A smoothed density plot was presented to disseminate the observed ELISA value over the vaccinated and unvaccinated population.
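As an illustration of the analysis described above, the following sketch applies a chi-square test of association and a Mann-Whitney U test with SciPy rather than STATA; the contingency counts and ELISA ratios are hypothetical placeholders, not the study data.

```python
# Hedged sketch of the reported tests using SciPy instead of STATA 15.0.
# All numbers below are invented placeholders for illustration only.
import numpy as np
from scipy import stats

# Chi-square test: association between vaccination status and a participant category.
# Rows: vaccinated / unvaccinated; columns: e.g., medical / paramedical / non-medical.
contingency = np.array([[40, 70, 46],
                        [55, 48, 54]])
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")

# Mann-Whitney U test: compare ELISA ratios between vaccinated and unvaccinated groups.
rng = np.random.default_rng(seed=1)
elisa_vaccinated = rng.normal(loc=5.9, scale=2.0, size=156).clip(min=0.05)
elisa_unvaccinated = rng.normal(loc=2.7, scale=2.0, size=157).clip(min=0.05)
u_stat, p_mwu = stats.mannwhitneyu(elisa_vaccinated, elisa_unvaccinated,
                                   alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.4g}")
```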
Results
A total number of approximately 520 health care workers were working in our tertiary care hospital during the study period (January to April 2021), of whom 313 HCW were willing to participate in the study. Among the HCW, 157 participants were initially unwilling to take the COVID-19 vaccination due to vaccine hesitancy. The first set of samples was collected from the unvaccinated HCW (n = 157), and the remaining samples (n = 156) were taken after the second dose of the COVID-19 vaccination (IQR 12-14 days). The sample population comprised 107 males, of whom 48.6% and 51.4% were vaccinated and unvaccinated, respectively. Among the 206 female HCW, 50.5% and 49.5% were vaccinated and unvaccinated, respectively. In both vaccinated and unvaccinated individuals, the majority were in the age group between 20-35 years. In the sub-population analysis, paramedical staff were represented in higher numbers among the vaccinated group; in contrast, non-medical staff formed a larger proportion of the unvaccinated HCW, which is statistically significant (P < 0.0001). During the time of recruitment of HCW into the study, only ten percent of HCW had contracted an RT-PCR-proven COVID-19 infection in the past 6 months. Surgical masks (58%) were the most common type of mask used for protection by the HCW, followed by the N95 mask (20%) and cloth mask (21%). The seroprevalence (COVID-19 IgG ELISA) among the vaccinated and unvaccinated HCW was 91.7% and 38.2%, respectively. As shown in Table 1, seropositivity for COVID-19 IgG ELISA was higher (70.4%) among vaccinated HCW than among the vaccine-naïve HCW (29.6%), which is statistically significant (P < 0.0001). The majority (87.0%) of the seropositive individuals among the unvaccinated group did not report any symptoms related to COVID-19 infection at the time of the study nor in the past. Age, gender and the history of at least one self-reported symptom suggestive of COVID-19 in the last three months before the study were not associated with positive status (P > 0.05). We compared the COVID-19 IgG ELISA ratio between the vaccinated and unvaccinated HCW with the demographic and clinical characteristics of the study population. There is a significant rise in COVID-19 IgG seropositivity in the vaccinated group in the age category of 20-35 years in comparison to the age group above 35 years. There is no gender difference in COVID-19 IgG positivity in either the vaccinated or the unvaccinated group. In the sub-population analysis, seropositivity was significantly higher in the non-medical category compared to the paramedical and medical categories. The unvaccinated HCW who had a prior history of COVID-19 infection in the past 6 months showed a baseline COVID-19 IgG ratio of 3.26 (1.69-3.70). This was doubled in the vaccinated group, with a COVID-19 IgG ratio of 6.69 (5.12-8.80). Among those with no positive history of COVID-19 infection, the seropositivity ratio of 5.27 (3.28-8.10) in the vaccinated HCW was significantly higher than the 0.36 (0.13-1.50) observed in the unvaccinated group, and this difference was found to be statistically significant. There is a stronger association of seropositivity with the usage of cloth and surgical masks than with the N95 mask among the vaccinated group, as shown in Table 2. Overall, side effects were reported by 68% of vaccine recipients after the first dose, while 32% had side effects after the second dose of the Covishield vaccine.
The most common symptoms after the first dose of the COVID-19 vaccine were pain at the injection site (82%), body pain (33%) and low-grade fever (30%), as shown in Figures 2 and 3. The frequent combinations of symptoms encountered were body pain with low-grade fever (12%) and these in combination with pain at the injection site (12%). Two percent (2%) of the recipients did not report any adverse effects after the first dose of the COVID-19 vaccine, whereas after the second dose of the COVID-19 vaccine, twenty percent (20%) had no adverse effects. Body pain and low-grade fever were predominantly seen after the second dose of the COVID-19 vaccine. Certain side-effects, such as vomiting, syncope and allergy, were not observed in the vaccine recipients after either the first or the second dose of the COVID-19 vaccine.
There is a statistically significant (P < 0.001) increase in the COVID-19 IgG ELISA ratio between the vaccinated (5.87) and unvaccinated (2.71) HCW, as seen in Figure 4. Eight percent remained seronegative even after the two doses of the COVID-19 vaccine.
Among the subjects vaccinated with both doses of the vaccine, 2% were RT-PCR positive within 60 days after the second dose. They were admitted to the ward with mild symptoms and did not require oxygenation or critical care support. Although there were no RT-PCR positive cases during the further follow-up of 180 days, 2-3% had at least one COVID-19 symptom, which was not confirmed by molecular testing. The difference in COVID-19 IgG antibody levels among the asymptomatic, symptomatic RT-PCR-proven and unproven cases in vaccinated individuals during the 6-month follow-up is depicted in Figure 5.
Discussion
The seroprevalence of SARS-CoV-2 among the vaccine-naïve HCW in our centre was 38% in January 2021, almost one year after the index COVID-19 case was identified in India. The seroprevalence among HCW (38%) observed in this study was slightly higher than the 26% prevalence estimated in a large sero-surveillance study conducted among HCW during the same period, between December 2020 and January 2021, in India [13,14]. The confounding factors, such as the age and sex of the unvaccinated group, did not show much difference in COVID-19 IgG seropositivity. We identified variations in the seroprevalence of SARS-CoV-2 antibodies among the different groups of healthcare workers.
The highest seroprevalence was observed in the non-medical (20%) category with a lower seroprevalence among medical (9%) and para-medical (9%) HCW. This could be due to the differential risks of SARS-CoV-2 exposure that exist within the hospital environment and strict adherence to PPE by medical and paramedical workers. Furthermore, the study demonstrates that the magnitude of COVID-19 antibody responses were significantly greater in individuals with prior symptomatic illness compared with those who remained asymptomatic. This result was found to be consistent with a similar report from a hospital in northern India [15].
The rates of side-effects following the Covishield vaccine were lower than expected. The phase 2-3 trial of the ChAdOx1 nCoV-19 vaccine reported local and systemic adverse effects in 88% and 72% of participants who received the first and second dose, respectively [16,17]. In contrast, we found a lower rate of adverse effects of 68% after the first dose and 32% after the second dose of Covishield. None of the participants included in this report had any suspected unexpected serious adverse reactions, as observed in the phase 2-3 trials [18].
This cross-sectional study reported an overall 91.7% (143/156) seropositivity rate after two complete doses of the vaccine in all the study participants. A similarly high seropositivity rate (95%) was reported in the cross-sectional coronavirus vaccine-induced antibody titre (COVAT) study conducted in West Bengal [19]. Although there was no significant rise in the COVID-19 IgG ratio across age and gender variables, the non-medical category demonstrated an increased COVID-19 IgG value in comparison to the other categories. Various multicentric studies highlight the robust immune response produced by a natural COVID-19 infection and the additional protective effect that vaccination contributes to it [20,21]. Similarly, we observed a considerable increase in COVID-19 vaccine-induced antibody levels in those vaccinated HCW who had a history of COVID-19 in the past 6 months.
Eight percent (8%) of healthy HCW did not mount a detectable immune response to vaccination against SARS-CoV-2. It is unclear whether this may lead to reduced protection from COVID-19 infection and disease [22][23][24]. Further immunological studies are needed to confirm the long-term effectiveness of SARS-CoV-2 vaccines and to determine the duration of protection in order to assess the need for, and ideal schedule of, revaccination.
The difference in COVID-19 breakthrough infection has been more noticeable in the period after the Delta variant became dominant [25,26]. Overall, 2% of patients had a breakthrough infection during the follow up after completing the course of vaccination. Although the emergence of the Delta variant in India was devastating with high mortality, breakthrough cases tended to be substantially less severe compared with prevaccination COVID-19 cases, regardless of a person's immune status. The data confirmed that SARS-CoV-2 vaccinations are highly successful and the importance of full vaccination for preventing breakthrough infection is emphasized [27].
To the best of our knowledge, this cross-sectional sero-surveillance study is the first of its kind that has involved HCW from Southern India reporting anti-spike antibody kinetics among the unvaccinated and vaccinated populations. However, we also acknowledge several limitations in the present study; first, we screened the two different cohorts for COVID-19 IgG sero-surveillance because of the logistic issue. Ideally, a baseline COVID-19 IgG titre along with two values of anti-spike antibody after the first and second dose of the Covishield vaccine would have added more value in inferring the immune response to COVID-19 vaccine. Second, we measured only anti-spike binding antibody and could not assess NAb and cell-mediated immune responses [28][29][30], although a recent study has demonstrated a high correlation between spike protein-based ELISA and different antibody classes, including NAb in COVID-19 patients.
Conclusions
Systemic and local side-effects after Covishield vaccination occurred at lower frequencies than reported in phase 3 trials. Since the COVID-19 vaccine rollout commenced in our tertiary care hospital, seropositivity for COVID-19 IgG has risen dramatically and clearly shows trends in vaccine-induced antibodies among the health care workers. This adds to the evidence for the impact of the COVID-19 vaccine on the seroprevalence of SARS-CoV-2. Future studies to identify the protective thresholds of antibody responses may help in triaging the HCW who are at greatest risk for breakthrough infections. | v2 |
2022-03-25T15:37:52.711Z | 2022-03-01T00:00:00.000Z | 247636939 | s2orc/train | Acute Cytomegalovirus Infection Associated With Splenic Infarction: A Case Report and Review of the Literature
Splenic infarction associated with acute cytomegalovirus infection (CMV) in immunocompetent patients was initially described as a very rare occurrence but has been reported in recent years with increasing frequency. Many cases undergo multiple investigations only to leave acute CMV as the likely cause. There is a risk of splenic rupture and, although this complication is rare, fatalities have occurred. Although the exact mechanism of CMV as a vascular pathogen is unclear, there are now multiple reports describing venous thrombosis and arterial infarction in the presence of this acute viral infection. Our case prompted a review of the literature, and we suggest splenic infarction should be recognised as a possible complication of acute CMV.
Introduction
The first known case of cytomegalovirus infection (CMV)-related splenic infarction was reported by Jordan et al. in 1973 in a 26-year old, previously healthy American woman [1]. Over 30 years later, in 2008, Atzmony et al. described two large splenic infarctions in a 36-year-old Caucasian woman, which the authors reported to be only the third known case worldwide of CMV-related splenic infarction in an immunocompetent patient [2].
Since these early days, cases have been reported with increasing frequency in the literature: Kassem et al. [3] in France, Shimizu et al. [4] in Japan, and Rawla et al. [5] in the USA all reported, in 2017, splenic infarction with acute CMV in immunocompetent patients aged 32, 37, and 62 years, respectively. In 2019, another two cases were reported: Schattner et al. [6] in Israel, describing a healthy 34-year-old woman, and Redondo et al. [7] in Spain, reporting on a 63-year-old HIV-negative woman. A further case was reported by Pakkiyaretnam et al. in 2020 in England, describing another previously healthy 23-year-old female in whom meningitis was the initial suspected diagnosis but acute CMV with splenic infarction was subsequently confirmed [8].
Case Presentation
Our case was a 28-year-old male chef who presented with a four-day history of fevers, sweats, cough, headaches, myalgia, and left upper quadrant pain. He had previously been in good health and, on physical examination, he had tenderness in his left upper abdominal quadrant with a temperature of 38.1 °C, but his vital signs were otherwise normal. Initial blood tests showed a white blood cell count of 12. He continued to report abdominal pain and underwent ultrasound, which showed splenomegaly with three hypoechoic focal abnormalities, and then underwent an abdominal CT scan, which demonstrated multiple wedge-shaped hypodensities. The CT did not identify thrombus in the splenic artery, but the scan was reported as consistent with splenic infarcts (Figure 1). A surgical opinion was obtained regarding the risk of rupture and the need for splenectomy, but he was treated conservatively. The patient proceeded to have more investigations, including a trans-thoracic echocardiogram, which was normal, and flow cytometry, showing no evidence of leukaemia or lymphoma. D-dimer was mildly elevated at 1.08 mg/L (ref <0.5 mg/L), a thrombophilia and autoimmune screen were unremarkable, HIV serology was negative, and no other pathogens were identified. The patient was not given anti-viral therapy but improved after two weeks and made a full recovery. Follow-up imaging did not occur, and repeat serology one month after presentation revealed that the CMV IgG had increased to 99.2 AU/ml.
Discussion
Splenic infarction can occur in association with a variety of conditions, including haematological malignancies, hypercoaguable states, thromboembolic disorders, and trauma. It has also been reported with parasitic infections such as malaria and babesiosis [9], and also with acute EBV and CMV. Whilst these viral infections are both very common, EBV largely occurs in teenagers and young adults, whereas CMV infection increases with age, with serological evidence of exposure rising from 36% in 6-11-year olds to 91% of the population aged 80 years or older [10].
CMV has been reported in association with thrombosis. In 2010, Atzmony et al. studied 140 hospitalised patients with acute CMV matched to 140 consecutive controls and reported nine patients with thrombosis (6.4%) in the CMV group with no episodes of thrombosis in the control group. Five of these patients had arterial thrombosis (four splenic and one renal infarct), four had venous thrombosis, and the authors concluded that acute CMV is associated with thrombosis independently of other risk factors [11].
In 2010, Justo et al. conducted a meta-analysis on 97 cases with thrombosis associated with acute CMV, of which 64 were immunocompetent and 33 were immunocompromised. Although deep vein thrombosis/pulmonary embolism was the most common vascular complication with 52% of the total affected, splenic infarction occurred in 12 patients (12.4%), with 10 patients in the immunocompetent group and two patients who were immunocompromised. The authors concluded that there is a true need for a prospective study on hospitalised and ambulatory patients with thrombosis to be tested for recent CMV infection [12].
The pathophysiology of CMV-associated vascular complications is not fully understood. Westphal et al. reported in 2006 that CMV-DNA in smooth muscle cells induces local growth factor expression as well as endothelial activation and suggested that CMV plays a crucial role in mediating the progression of atherosclerosis [13]. In 2014, Protopapa et al. reported another case of CMV and splenic infarction and described several mechanisms for vasculopathy, including platelet and leucocyte adhesion to infected endothelial cells [14].
Irrespective of the mechanism, acute CMV infection appears to be a vascular pathogen. In our case, there was an initial reluctance to attribute the splenic lesions to CMV, which led to additional investigations for other causes. The issues in management are related to the role of antivirals for immunocompetent patients, which is not clear, and the possibility of splenic rupture. This complication appears to be rare in acute CMV, but it was reported in 2014 by Vidarsdottir et al. in a 53-year old woman who concluded that primary CMV infection can cause splenic rupture without a history of trauma in immunocompetent adults [15].
More commonly, splenic rupture has been reported in infectious mononucleosis (IM), though many of the early studies did not differentiate between EBV and CMV, both of which are known to cause IM [16]. Although a rare event, fatalities have occurred with acute splenic haemorrhage the most common cause of death in IM [17]. The risk of death from splenic rupture specifically associated with CMV is unknown, but a systematic review by Bartlett et al. of 85 cases of splenic rupture with IM reported a 9% mortality [18]. This 2016 review examined published cases between 1984 and 2014, and although this review did not identify the cause of mononucleosis, it is considered that CMV is more likely to be the cause of splenic rupture associated with IM rather than its more benign EBV relation.
Conclusions
Previously rarely reported, splenic infarction associated with acute CMV has been described with increasing frequency in recent years. This unexpected complication causes diagnostic and management difficulties, with cases tending to undergo multiple investigations only to leave CMV as the likely pathogen and cause. There is a risk of splenic rupture, though it appears rare, and most cases can be managed conservatively. Whilst CMV can often present as a relatively mild infection, it appears to be pro-thrombogenic and questions remain over the role of screening for thrombosis and prophylactic anticoagulation. There are now multiple reports describing a similar clinical picture to our case, and we suggest that splenic infarction should be recognised as a possible complication in acute CMV infection.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | v2 |
2020-06-04T09:04:47.490Z | 2020-05-29T00:00:00.000Z | 219758879 | s2orc/train | Simulation of Subcritical Vibrations of a Large Flexible Rotor with Varying Spherical Roller Bearing Clearance and Roundness Profiles
: In large rotor-bearing systems, the rolling element bearings act as a considerable source of subcritical vibration excitation. Simulation of such rotor bearing systems contains major sources of uncertainty contributing to the excitation, namely the roundness profile of the bearing inner ring and the clearance of the bearing. In the present study, a simulation approach was prepared to investigate carefully the effect of varying roundness profile and clearance on the subcritical vibration excitation. The FEM-based rotor-bearing system simulation model included a detailed description of the bearings and asymmetricity of the rotor. The simulation results were compared to measured responses for validation. The results suggest that the simulation model was able to capture the response of the rotor within a reasonable accuracy compared to the measured responses. The bearing clearance was observed to have a major effect on the subcritical resonance response amplitudes. In addition, the simulation model confirmed that the resonances of the 3rd and 4th harmonic vibration components in addition to the well-known 2nd harmonic resonance (half-critical resonance) can be significantly high and should thus be taken into account already in the design phase of large subcritical rotors.
Introduction
Large rotor-bearing systems are commonly used in industry as a part of, e.g., electric motors and generators, turbines in renewable or fossil energy production, and paper, steel and non-ferrous metal manufacturing machinery. The rolling element bearings act as a considerable source of excitation, which is seldom modeled accurately. The simulation of such rotor-bearing systems contains major sources of uncertainty contributing to the excitation, namely the roundness profile of the bearing inner ring and the clearance of the bearing. The subcritical vibration excitation originating from the bearings causes a response in the rotor system, occasionally leading to a subcritical resonance when the excitation frequency and the natural frequency coincide. Elevated responses cause increased wear, leading to an increased need for maintenance. In addition, the increased vibration responses negatively affect the end product in industries that use large rotors to manipulate the end product with the rotor surface.
Over the years, many studies have extensively used the Finite Element Method (FEM) to model rotor bearing systems. Nelson [1] utilized the shear deformable Timoshenko beam elements to model the rotor shaft. Jei and Lee [2] extended the modeling process to design asymmetrical rotor bearing systems. Along with the asymmetricity, they also accounted for effects of inertia, gyroscopic moments, internal damping, and gravity. Using modal transformation to generate a reduced model, they generated responses, which were accurate compared to analytical responses. Kang et al. [3] studied the steady state response of asymmetric rotors by considering the deviatoric inertia and the change in stiffness due to asymmetry of a flexible rotor. Using the harmonic balance method, they numerically demonstrated that among other factors such as stiffness and damping of bearings, asymmetry of the shaft affected critical rotor speeds. Similarly, Ganesan [4] studied the effect of bearing stiffness and shaft asymmetry on the vibration response for cases where excitation frequencies are closer to the natural frequencies of the rotor. They concluded that a proper combination of bearing stiffness and asymmetricity promotes more stability in a rotor due to the mitigation of unbalance response. Overall, the existing literature describes the effects of asymmetry on the vibration response of rotors, verified by different methods based on numerical analysis.
Another key factor affecting the vibration response in rotors is the waviness profile in rolling elements, rotor and bearing inner and outer rings. In general terms, waviness refers to the measurable unevenness in the surface of real components, caused by manufacturing inaccuracies or surface wear. As one of the early researchers on this topic, Wardle [5,6] investigated how surface waviness affects vibration, using both numerical and experimental analyses. Meyer et al. [7] devised an analytical method to study vibration response due to different distributed defects. They verified the analytical models for waviness of the balls, the inner and the outer races of bearing with experimental results. Aktürk [8] investigated how surface waviness of bearing affects rotor vibration. His research focused on the inner and outer races, and the bearing balls of a rigid rotor. The study showed that vibration arises at the rotating speed of the inner ring, multiplied by the number of the harmonic waves (lobe number). Using a similar model, Arslan and Aktürk [9] studied the rolling element vibration in the radial directions in both time and frequency domains, with and without considering the defects. Furthermore, Harsha and Kankar [10] conducted a study on how the surface waviness effects the stability of a rigid rotor. The study showed that the number of balls and the order of waviness have a significant effect on the nonlinear vibrations of the system due to bearing waviness.
The past two decades have seen more research on the analysis of rotor vibration due to bearing waviness. Zhang et al. [11] studied the effect of multiple excitations, including bearing waviness on rotor stability by exciting one parameter at a time. The study showed that waviness amplitude has the highest effect in the instability regions, compared to initial phase of waviness, unbalance and bearing preloads. Sopanen and Mikkola [12,13] modeled a deep-groove ball bearing, which included the effect of surface waviness amongst other defects on the inner and outer races. The second part of the study consisted of a comparison between the model-based numerical results with those existing in the literature. They concluded that diametrical clearance inside the bearing considerably affects the vibration response of the system.
A few studies have also incorporated a multibody dynamic approach for modeling of a ball bearing. In a theoretical study, Liu et al. [14] developed a ball bearing using the multibody dynamic approach. Their research suggested that, for a bearing operating at high speed, the waviness in the outer race results in higher vibration in the system, compared to waviness in the inner race. Recently, Halminen et al. [15,16] used a multibody approach and studied the waviness of touchdown bearings in an active magnetic bearing supported system. In the event of contact, the highest effect of surface waviness was observed for case studies with inner race eccentricity and ellipticity. Sopanen et al. [17] combined an extensive rotor-bearing model with multibody approach to perform dynamic analysis of subcritical superharmonic vibrations. As a test case, they considered a paper machine roll with non-idealities such as variation in shell thickness and bearing waviness. Compared to experimental modal analysis, the optimized simulation model was able to accurately predict the half-critical resonance (2X, i.e., the excitation is occurring twice per revolution resulting in resonance peak at half the critical speed), which is significant for such industrial application.
Viitala et al. [18] studied the effect of the bearing inner ring by introducing different roundness profiles and detecting the subcritical vibrations in a paper machine roll. They introduced five different waviness profiles to the inner ring, which was mounted on the rotor shaft. The aim of the study was to minimize the subcritical harmonic resonance responses occurring at half (2X) to one-fourth (4X) times the natural frequency of the first bending mode. This was achieved by minimizing the roundness error of the inner ring. In addition, four other cases with varying roundness profiles were studied for the bearing inner ring. These include the original (as manufactured), oval, triangular, and quadrangular roundness profiles. The different profiles were achieved by insertion of slim metal strips between the bearing installation shaft and the conical adapter sleeve of the bearing.
Heikkinen et al. [19] used asymmetric 3D beam elements to model a paper machine roll to study the subcritical vibrations. The responses from the simulation model were compared with measured responses. The study concluded that the asymmetry has only a minor effect on the responses, unlike the bearing inner ring roundness profiles, which have a notable effect. The study focused mainly on capturing the half-critical resonance (2X). The 2X frequencies were captured with reasonable accuracy compared to the measurements. However, as stated by Heikkinen et al. [19], not all measured resonance response amplitudes were reproduced accurately by the simulation model. This reveals that other factors also affect the results, e.g., damping, and, as Sopanen and Mikkola [12] revealed, diametral clearance has a great effect on the system response. Harsha [20] studied the effect of radial bearing internal clearance on the dynamic response and categorized the responses into three levels: first, very small clearance yields a very linear and predictable system; second, with small clearances, the system response is very sensitive to changes in, e.g., rotation speed and clearance; third, chaotic behavior occurs with large clearance. Bearing clearance affects the system dynamics and the 2X response amplitude, as with a large clearance (loose bearing) the system excitation is greater than with a very small clearance. The clearance is usually very small, from tens to hundreds of micrometers, and each disassembly and reassembly of the bearing on the rotor changes the resulting clearance.
Current engineering practice in large rotor design is widely aware of the vibration problems caused by the half-critical resonance. However, Viitala et al. [18] showed experimentally that remarkable resonance amplitudes can also be observed when the third (3X) and fourth (4X) harmonics of the rotational frequency excite the first bending mode. This creates a need to develop efficient and accurate simulation tools that consider these phenomena already in the design phase. Current state-of-the-art commercial FEM software does not include sophisticated nonlinear bearing models that could be used to study, for example, rotor responses due to bearing waviness or defects in the transient domain.
This study presents a novel simulation approach to investigate the frequencies and amplitudes of the 2X, 3X, and 4X resonance responses of the first bending mode, i.e., the subcritical vibration of a large flexible rotor-bearing system, under varying bearing roundness profiles and varying radial bearing clearance. The investigation was limited to harmonic lateral vibration; axial bearing clearance, non-synchronous vibration, and torsional and axial vibrations were neglected. Higher-order harmonic resonance responses were also neglected, since their resonances occur at very low frequencies, which are considered to be outside the operating frequency range in typical applications. The vibration responses obtained by simulation were validated against the measurements conducted by Viitala et al. [18].
The results of this study and the resulting simulation method contribute to the design of large rotors and the optimization of their dynamic operation. Conceptually, the validated simulation model can be utilized, e.g., to generate training datasets for machine learning algorithms that classify certain root failure causes, as in Sobie et al. [21]. Parameters such as bearing clearance, bearing roundness profile, foundation stiffness, external load, material properties, and rotor design can be varied systematically in the proposed simulation model, enabling rapid, low-effort prototyping of large rotor designs in the design phase.
Simulation Model and Experimental Setup
This section presents the theoretical background and methods of the study. The bearing model and the bearing waviness measurements are presented in Section 2.1. Section 2.2 describes the investigated rotor system, its FEM-based model, and the measurement setup, which was used to capture the validating measurement data. Finally, the simulation procedure is introduced in Section 2.3.
Spherical Roller Bearing Model
A spherical roller bearing (SRB) model proposed by Ghalamchi et al. [22] is utilized in this study. In the following, the main aspects of the model as well as the modeling of the bearing clearance and inner ring waviness are described. Figure 1 illustrates the radii of curvature of the roller, outer race, and inner race of a spherical roller bearing. The total elastic deformation of the i-th rolling element in the j-th row, located at angle β^i_j, can be determined from the relative displacements between the inner and outer races. Based on the geometry, the distance (A_0) between the inner and outer raceway curvature centers (O_in and O_out) can be calculated as [22]

A_0 = r^out_By + r^in_By − d_r − c_d/2,    (1)
where r^out_By and r^in_By are the outer and inner raceway curvature radii, respectively, while d_r is the roller diameter and c_d is the diametral clearance. The displacements of roller i in row j in the axial direction (δ^i_zj) and in the radial direction (δ^i_rj) are calculated from the relative displacements e_x, e_y and e_z between the inner and outer race, with φ_0 denoting the initial contact angle. The contact angle is negative for the 1st row and positive for the 2nd row of the bearing. The resulting loaded distance, i.e., the difference between the loaded and non-loaded distance, for roller i in row j is expressed as a function of the attitude angle β^i_j of the roller, and the compression of a single roller along the common normal due to the relative motion of the inner and outer rings follows from this loaded distance.
As shown in Equations (1) to (4), the bearing clearance c_d contributes to the contact deformation of individual rollers and thus changes the load distribution between the rollers. In the case of a radially loaded zero-clearance bearing, the load is theoretically distributed to rollers over a 180° circumference, while in the case of a large clearance the load can be shared by only a few rollers. However, it should be noted that the effect of clearance is not so straightforward in the case of non-circular bearing rings with waviness.
The bearing inner ring waviness profile is included in the elastic deformation of roller i in row j through the harmonic series term d_w(β^i_j) of Equation (5), which describes the elastic deformation due to the bearing inner ring waviness; here β_ij is the angular position of the roller, θ is the rotation angle of the inner ring, k is the harmonic waviness order, c_k is the k-th order waviness amplitude, and φ_k is the phase angle of the k-th order waviness. The modeling approach assumes that the elastic compression occurs completely in the rollers, while the non-round inner race and the perfectly circular outer race are assumed to be rigid. The total elastic compression of an individual roller then combines the compression due to the relative motion of the rings with the waviness term.
In the loaded condition, the contact angle of each individual roller element is defined from the loaded geometry. Using Hertzian contact theory, the contact force of each roller having a positive contact deformation is calculated from the total elastic compression and the roller contact stiffness coefficient k^tot_c, which is obtained from the contact stiffness coefficients k^in_c and k^out_c of the inner and outer race contact areas, respectively. The total bearing forces in the different translational directions can be calculated as the sum of all individual roller forces.
Bearing clearance and the waviness of the inner ring consequently affect the total bearing force by changing the internal load distribution of the bearing. In practice, this leads to time-dependent dynamic excitation and a time-varying stiffness of the bearing. In this study, the value of the clearance c_d appearing in Equation (1) is varied from 10 µm to 120 µm. In addition, the waviness amplitudes c_k, the waviness orders k and the waviness phase angles φ_k of Equation (5) are set to the measured values, which are given in Section 2.1.2 in Table 1.
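To make the force evaluation concrete, the sketch below shows one way the roller-by-roller calculation described above could be coded. It follows the structure of the model in [22] but is not taken verbatim from it: the Hertzian load-deflection exponent of 3/2, the combination of the inner and outer race contact stiffnesses into a single coefficient, and the sign convention of the waviness series are assumptions, and all numerical parameter values are placeholders rather than the values of the studied SKF 23124 CCK/W33 bearing.

```python
# Hedged sketch of a spherical-roller-bearing force evaluation with clearance and
# inner-ring waviness. Assumed forms: Hertzian exponent 3/2, cosine waviness series.
# All parameter values are illustrative placeholders.
import numpy as np

def srb_force(e, theta, params):
    """Return the bearing force vector [Fx, Fy, Fz] for relative displacement
    e = (e_x, e_y, e_z) between the rings and inner-ring rotation angle theta."""
    p = params
    A0 = p["r_out"] + p["r_in"] - p["d_r"] - 0.5 * p["c_d"]    # Eq. (1), assumed form
    F = np.zeros(3)
    for sign in (-1.0, +1.0):                                  # two roller rows
        phi0 = sign * p["phi0"]                                # contact angle sign per row
        for i in range(p["z"]):
            beta = 2.0 * np.pi * i / p["z"]                    # roller attitude angle
            # Relative displacement of the inner race at the roller position
            d_rad = e[0] * np.cos(beta) + e[1] * np.sin(beta)
            d_ax = e[2]
            # Loaded distance between raceway curvature centres (standard form, assumed)
            az = A0 * np.sin(phi0) + d_ax
            ar = A0 * np.cos(phi0) + d_rad
            delta = np.hypot(az, ar) - A0                      # compression from ring motion
            # Inner-ring waviness seen by this roller (sign convention assumed)
            d_w = sum(c * np.cos(k * (beta - theta) + ph)
                      for k, c, ph in p["waviness"])
            delta_tot = delta + d_w
            if delta_tot <= 0.0:                               # roller not in contact
                continue
            phi = np.arctan2(az, ar)                           # loaded contact angle
            Q = p["k_tot"] * delta_tot ** 1.5                  # Hertzian point contact (assumed)
            F += Q * np.array([np.cos(phi) * np.cos(beta),
                               np.cos(phi) * np.sin(beta),
                               np.sin(phi)])
    return F

# Placeholder parameters (not the catalogue values of the studied bearing)
params = {
    "r_out": 0.108, "r_in": 0.1065, "d_r": 0.015, "c_d": 60e-6,   # m
    "phi0": np.deg2rad(9.0), "z": 24, "k_tot": 8.0e9,             # stiffness in N/m^1.5
    "waviness": [(2, 10e-6, 0.0), (3, 5e-6, 0.8)],                # (order, amplitude, phase)
}
print(srb_force(e=(40e-6, -20e-6, 0.0), theta=0.3, params=params))
```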
Bearing Waviness Measurement
The rotor was suspended by double-row spherical roller bearings (SKF 23124 CCK/W33) at both ends (Figure 2). Here, d_1 is the bore diameter, d_2 is the approximate inner race diameter at the bearing end, d_r is the roller diameter, D is the outer race diameter, D_1 is the approximate inner diameter of the outer race at the bearing end, B is the bearing width, and B_4 is the lock nut width. In the bearing model, z is the number of rollers, n_z is the number of roller rows, r_By^out is the outer race contour radius, r_By^in is the inner race contour radius, φ_0 is the free contact angle, E_roller and E_ring are the moduli of elasticity of the roller and the rings, and ν_roller and ν_ring are the corresponding Poisson's ratios. The bearing clearance of the experimental rotor system was identified to contain remarkable uncertainty due to the several assembly and disassembly procedures of the service-side bearing. The roundness profile of the installed bearing inner ring was modified by inserting thin steel shims between the rotor shaft and the conical bearing adapter sleeve, as depicted in Figure 3.
The bearing was disassembled to modify the bearing geometry. The bearing clearance was varied in the simulation model in a controlled manner to investigate its effect on the rotor response. The bearing inner ring roundness profile was measured while installed on the rotor shaft (Figure 4). This ensured that the acquired roundness profile was the actual roundness profile of the inner ring under operating conditions. The roundness profile was measured utilizing the four-point method [18,23,24], which is able to reliably separate the roundness profile from the error motion of the workpiece. Traditional roundness measurement machines could not be used due to the large size of the workpiece. Five different roundness profiles of the bearing inner ring were measured at the service end of the rotor: original, oval, triangular, quadrangular, and minimized roundness error. Figure 5 depicts the roundness profiles. The original geometry was measured as such, without any intended modification. The drive end was not modified, but its roundness profile was measured nevertheless. Table 1 lists the values of the roundness error, including the amplitude and phase of the waviness of different orders, which can be combined by means of a Fourier series. Figure 6 illustrates the studied rotor system. The rotor is supported by two spherical roller bearings located at both ends. The bearing housings are bolted to a 2.5 cm steel plate that is welded to two steel tubes with an outer diameter of 19.5 cm and a wall thickness of 2 cm. The other ends of the steel tubes are welded to a 5 cm thick steel plate that is bolted from its centre to the stiff steel foundation. The rotor is driven through a coupling between the rotor drive end and an electric motor controlled by a frequency converter.
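As an illustration of how per-order waviness amplitudes and phases (such as those reported in Table 1) can be combined by a Fourier series into a roundness profile, the following short Python sketch composes such a profile. The amplitudes and phases in the example are hypothetical, not the measured values.

```python
import numpy as np

def roundness_profile(orders, n_points=1024):
    """Compose a roundness profile from harmonic waviness components.

    orders : iterable of (order k, amplitude [um], phase [rad]) tuples,
             i.e., one entry per measured waviness order.
    """
    angle = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    profile = np.zeros_like(angle)
    for k, amplitude, phase in orders:
        profile += amplitude * np.cos(k * angle + phase)  # one Fourier series term
    return angle, profile

# Hypothetical oval-dominated inner ring: strong 2nd order, weak 3rd and 4th orders
angle, r = roundness_profile([(2, 12.0, 0.3), (3, 2.0, 1.1), (4, 1.5, -0.4)])
print(f"peak-to-peak roundness error: {r.max() - r.min():.1f} um")
```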
Simulation Model of the Rotor System
The asymmetric rotor is modeled using the Finite Element Method (FEM), including a description of the asymmetry. The rotor was modeled employing Timoshenko beam elements. The wall thickness variation of the tubular rotor was included in the model by defining the area moments of inertia individually for both cross-sectional principal axes, as described in [19]. The rotor is supported by double-row spherical roller bearings (SKF 23124 CCK/W33) that are implemented in the model using the modeling method discussed in Section 2.1.1. In the FE model, the rotor is discretized into 24 beam elements using 25 nodes along the rotor's length (Figure 7). The discretization facilitates the integration of other components into the model, using the nodes at each key location of the rotor. Each rotor node has two translational and two rotational degrees of freedom (DOFs). The geometry and physical parameters of the rotor serve as input to the model. For instance, the bearings are located at nodes 2 and 24. The support is connected at the bearing locations with additional nodes. The support structures (bearing, bearing housing, and the two-cylinder bed) contribute an additional mass of 190 kg each at their respective locations. Furthermore, the overhanging part at each tube end is modeled as an additional mass point of approximately 6 kg at node 6 and node 20, respectively, i.e., it does not provide additional stiffness to the system. Lastly, for the initial balancing of the tube roll, the balancing planes are located at nodes 6 and 20, where balancing masses of 2.6 kg and 2.8 kg are added, respectively.
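The asymmetry description mentioned above, i.e., separate area moments of inertia for the two cross-sectional principal axes, can be illustrated with the following thin-wall numerical approximation. The dimensions and the thickness-variation function are hypothetical, and the routine is only a sketch, not the procedure used for the actual rotor model.

```python
import numpy as np

def principal_area_moments(outer_radius, thickness, n=720):
    """Approximate the principal second moments of area of a thin-walled tube
    whose wall thickness varies around the circumference.

    thickness : callable t(phi) giving the wall thickness [m] at angle phi [rad]
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    t = thickness(phi)
    r_mid = outer_radius - t / 2.0           # mid-wall radius of each segment
    dA = t * r_mid * (2.0 * np.pi / n)       # area of each circumferential segment
    y, z = r_mid * np.sin(phi), r_mid * np.cos(phi)
    Iyy, Izz, Iyz = np.sum(z**2 * dA), np.sum(y**2 * dA), np.sum(y * z * dA)
    # Principal values of the 2-D area inertia tensor
    I_avg, radius = (Iyy + Izz) / 2.0, np.hypot((Iyy - Izz) / 2.0, Iyz)
    return I_avg + radius, I_avg - radius

# Hypothetical 16.5 mm nominal wall with a 1 mm second-order thickness variation
I_max, I_min = principal_area_moments(0.30, lambda p: 0.0165 + 0.001 * np.cos(2.0 * p))
print(f"I_max = {I_max:.4e} m^4, I_min = {I_min:.4e} m^4")
```

The difference between the two principal moments is what produces the bending-stiffness asymmetry and the associated twice-per-revolution excitation of the rotor.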
The foundation stiffness of the simulation model was defined by utilizing foundation design and the corresponding stiffness values obtained from static FE analysis and then matched with the measured peak response frequencies in the horizontal and vertical directions. The corresponding stiffness values in the horizontal and vertical directions are 18 MN/m and 195 MN/m, respectively. Damping of the support is fine-tuned according to the shape of the measured response curves and the damping ratios are evaluated as 1.3% in the vertical direction and 2.3% in the horizontal direction.
Measurement of the Rotor System
The wall thickness of the tubular rotor varies due to a commonly used manufacturing method in which the rotor is made from a thick sheet plate that is bent into a tubular form, welded, and turned on the outer surface. The thickness variation was measured to provide input data for the asymmetric rotor simulation model. Figure 8 depicts the thickness variation [25]. An ultrasound probe was used to measure the rotor shell thickness, with water as the ultrasound transmitting medium between the probe and the roll. The thickness measurement consisted of 20 measurements along the circumference of the rotor and 36 measurements along the rotor axial direction, leading altogether to a measurement grid of 720 points. The result, showing a thickness variation of up to 2 mm against a nominal thickness of 16.5 mm, is presented in Figure 9. The experimental rotor system and the measurement results are presented by Viitala et al. [18]. The dynamic response of the rotor was measured utilizing a measurement environment built on a commercial roll grinding machine, presented previously in Figure 6. The response was measured in the horizontal and vertical directions at the middle cross-section of the rotor over a rotational frequency range of 4-18 Hz with 0.2 Hz increments, using the four-point method to extract the roundness profile and the rotor center point movement from the measured signals of the four laser sensors. An encoder was used to collect 1024 samples per revolution, and one hundred revolutions of data were acquired at each rotational frequency step. The data sets were synchronously averaged to reduce noise and measurement uncertainty [26-28]. Finally, the center point movement data were analyzed using the FFT to obtain the response spectra of the rotor in the horizontal and vertical directions.
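A minimal Python sketch of the post-processing chain described above (synchronous averaging of encoder-triggered data followed by an FFT) is given below; the synthetic signal and all parameter values are illustrative only.

```python
import numpy as np

def averaged_spectrum(signal, samples_per_rev=1024, revolutions=100):
    """Synchronously average encoder-triggered data and return its order spectrum.

    signal : 1-D array of length samples_per_rev * revolutions, resampled so that
             each revolution contains exactly samples_per_rev samples.
    """
    frames = signal[: samples_per_rev * revolutions].reshape(revolutions, samples_per_rev)
    averaged = frames.mean(axis=0)                       # synchronous (time-domain) averaging
    spectrum = np.abs(np.fft.rfft(averaged)) / samples_per_rev * 2.0
    orders = np.arange(spectrum.size)                    # harmonic order (1 = once per rev)
    return orders, spectrum

# Synthetic example: 10 um second-order runout plus noise, 100 revolutions of data
n_rev, spr = 100, 1024
phase = np.linspace(0.0, 2.0 * np.pi * n_rev, n_rev * spr, endpoint=False)
x = 10e-6 * np.cos(2.0 * phase) + 2e-6 * np.random.randn(n_rev * spr)
orders, amp = averaged_spectrum(x, spr, n_rev)
print(f"2nd-order amplitude is approximately {amp[2] * 1e6:.2f} um")
```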
Simulation Procedure
The dynamic behavior of the tubular rotor model was verified against experimental modal analysis measurements of the loosely supported (free-free) rotor. In the measurement, the rotor was lifted from the pedestal and suspended by flexible ropes in order to obtain the free-free modes. The measured free-free frequencies are 70 Hz for the first bending mode and 168.5 Hz for the second bending mode. In the model, the first bending frequency is 70 Hz and the second bending frequency is 179 Hz. As is typical in beam element-based rotor models, the first bending mode can be tuned to match the measured frequency well, while the second bending mode is usually slightly higher than measured. Next, the bearing model and the description of the foundation stiffness were included in the rotor model. The critical speeds of the supported rotor were compared with the measurements to verify the dynamic behavior of the supported model. The simulation procedure for the created rotor model was conducted in seven steps for each of the 60 studied cases as follows:
1. Define the varying input parameters, including bearing clearance and rotational speed
2. Define the static equilibrium position of the system
3. Analyze the transient response over a nine-second time span at constant rotational speed
4. Convert the transient response into the frequency domain using the FFT
5. Repeat from Step 1 until all rotational speeds are calculated
6. Plot the response curves as functions of rotational speed, i.e., a three-dimensional waterfall graph
7. Capture the subcritical peak responses from the waterfall graphs
The simulation was performed in the time domain, i.e., the transient response of the rotor was observed. The transient response was studied at constant rotational speed, and the simulation started from the static equilibrium position of the rotor. The studied time span was nine seconds and the simulation time step was 0.0005 s. The studied rotational speeds were between 4 and 17 Hz with a 0.05 Hz rotational speed increment, resulting in 281 studied rotational speeds. A total of five different cases were studied according to the studied waviness configurations, in which the bearing clearance varied from 10 µm to 120 µm with a 10 µm increment. The bearing clearance was identified as one of the key sources of change in the response amplitudes.
The response data in the time domain were converted to the frequency domain by the Fast Fourier Transform (FFT), using 16,384 FFT points (2000 Hz/16,384 = 0.122 Hz increment) and Hanning windowing. The individual response curves in the frequency domain were then plotted as a function of rotational speed, generating waterfall graphs for each of the 60 studied cases. The maximum peaks at the 1/2, 1/3, and 1/4 rotation speeds were captured from the waterfall graphs for further analysis.
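The peak-capture step can be sketched as follows: the function tracks the 2X, 3X, and 4X response components across a waterfall of spectra and reports the largest value of each. The window width and the synthetic data are arbitrary choices made purely for illustration.

```python
import numpy as np

def capture_harmonic_peaks(speeds, freqs, waterfall, orders=(2, 3, 4), bw=0.2):
    """Track the nX response components over a waterfall and return their peak values.

    speeds    : rotational speeds of the studied cases [Hz]
    freqs     : FFT frequency axis shared by all spectra [Hz]
    waterfall : 2-D array, one magnitude spectrum per rotational speed
    """
    peaks = {}
    for n in orders:
        track = []
        for speed, spectrum in zip(speeds, waterfall):
            mask = np.abs(freqs - n * speed) <= bw      # window around the nX frequency
            track.append(spectrum[mask].max() if mask.any() else 0.0)
        track = np.asarray(track)
        peaks[f"{n}H"] = (speeds[track.argmax()], track.max())  # (speed at peak, amplitude)
    return peaks

# Hypothetical waterfall: 281 speeds from 4 to 18 Hz and 0.122 Hz frequency resolution
speeds = np.arange(4.0, 18.0 + 1e-9, 0.05)
freqs = np.arange(0.0, 200.0, 2000.0 / 16384.0)
waterfall = np.random.rand(speeds.size, freqs.size) * 1e-6
print(capture_harmonic_peaks(speeds, freqs, waterfall))
```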
Results
In the simulation model, the responses in the middle cross-section of the rotor were studied, similarly to the measurement by Viitala et al. [18]. The left side of Figure 10 shows a waterfall plot for Case 1 with a 120 µm bearing clearance. The resonance peaks of the second, third, and fourth (2H, 3H, and 4H) subharmonic response components were studied in both the horizontal and vertical directions. On the right side of the figure, the corresponding response peaks are shown in a two-dimensional graph, with the x-axis representing the bearing clearance. The response peak values with all bearing geometries (Cases 1 to 5) and varying bearing clearance (10 µm to 120 µm with a 10 µm increment) at the middle of the rotor are shown in the results. The nominal clearance for the bearing is 60 µm. The proposed simulation method enables investigation of the combined effect of the bearing clearance and geometry on the rotor response. Figure 10. Peak responses for 2H, 3H, and 4H captured from the waterfall graphs. The peak responses were collected into a single figure with all the different bearing clearances. In this particular waterfall plot, the clearance was 120 µm.
In Figures 11-15, the solid lines correspond to the simulated values and the dashed lines to the measured values. In the analysis, the drive end waviness remains fixed and the service end is variable. However, in the simulation, the bearing clearance was changed for the drive end and the service end simultaneously as both clearances were unknown. Table 2 presents the bearing inner ring roundness profile amplitudes and phases used as inputs for the simulation model for the service and drive end. The drive end roundness profile remains the same for cases 1 to 5 while the service end varies according to the studied case. The measured roundness profile was adapted to the simulation model. The highest roundness profile amplitudes were measured for the second harmonic component. Case 1 represents the 'original case', in which the bearing was installed on the rotor shaft according to the bearing supplier instructions without manual modification. Table 2. Average amplitudes and phases of the two roller element paths (see Figure 5) used in Case 1 for the service end and the drive end. For the phase, the circular average was utilized. Figure 11 presents the simulation results produced with the original bearing inner ring roundness profile in the service end of the rotor.
Consequently, the results show that a larger clearance between the roller elements and the bearing outer ring increased the response amplitudes at the subcritical resonance frequencies. Even though the velocity increment utilized in the simulation model was small (0.05 Hz), some variation can be observed, particularly in the horizontal 2nd harmonic component when the clearance is 0.05-0.06 mm, but, in general, the trend seems clear. A lower rate of change can be seen in the vertical direction; however, this relates to the foundation stiffness and is also observed in the measured responses. The results presented in Figure 11 suggest that a clearance value in the range from 0.05 to 0.08 mm provides the best agreement between the simulation and the measurement in the horizontal direction. However, some difference can be detected in the vertical 3rd harmonic component, which attains the measured amplitude level only at high clearance values. In the vertical direction, the 2nd harmonic component has a substantially higher response compared to the measured values. Table 3 presents the bearing inner ring roundness profile amplitudes and phases used as inputs for the simulation model. In the second case, the amplitude of the second harmonic roundness component was intentionally increased using thin steel shims between the shaft and the conical adapter sleeve. Table 3. Average amplitudes and phases of the two roller element paths (see Figure 5) used in Case 2 for the service end and the drive end. For the phase, the circular average was utilized. Figure 12 presents the simulation results produced with the oval bearing inner ring roundness profile in the service end of the rotor. The results are presented as a function of the bearing clearance. Again, the response amplitudes at the subcritical resonances (2H, 3H, and 4H) are shown. The simulation model clearly reacted to the increased 2nd harmonic roundness component of the bearing inner ring with elevated second harmonic response amplitudes. The 2nd and the 4th harmonic components in the horizontal direction behaved similarly to Case 1. However, the 3rd harmonic component increased only moderately with the increasing clearance and remained clearly lower than in the original case, even with the largest clearances. Accordingly, the highest increases were in the 2nd and 4th harmonic components of the bearing inner ring roundness profile, whereas the 3rd harmonic roundness component remained almost the same compared to the original bearing geometry.
The main effect on the vertical direction response can be seen in the increased 2nd harmonic component, especially at larger clearance values. The 3rd and 4th harmonic components changed only slightly; similar to the horizontal response, the 3rd harmonic remained even lower than in the original case, and the 4th harmonic increased notably more with increasing clearance. These remarks apply to both the measured and the simulated responses.
The results presented in Figure 12 show some divergence in the simulated clearance value when seeking agreement with the measured result (dashed line). The horizontal 2nd and 4th harmonic components, as well as the vertical 4th harmonic component, suggest a clearance of 0.06-0.07 mm. However, in the vertical direction, the 2nd harmonic attains the measured response at a 0.01-0.02 mm clearance and the 4th harmonic at a 0.01 mm clearance. The 3rd harmonic in the horizontal and vertical directions did not attain the measured amplitude at all. Table 4 presents the bearing inner ring roundness profile amplitudes and phases used as inputs for the simulation model. The roundness profile of the bearing inner ring in the service end was intentionally modified to a triangular geometry. As a consequence, the 3rd harmonic horizontal resonance had the largest amplitude. Table 4. Average amplitudes and phases of the two roller element paths (see Figure 5) used in Case 3 for the service end and the drive end. For the phase, the circular average was utilized. Figure 13 presents the simulation results produced with the triangular bearing inner ring roundness profile in the service end of the rotor.
In the horizontal direction, the simulation model reacts clearly to the increased 3rd harmonic roundness component. Hence, the 3rd harmonic resonance peak response attains the measured value with a clearance of circa 0.09 mm. The 2nd and 4th harmonic components increase as well with the increasing clearance but do not attain the measured response values. In contrast, the vertical direction result shows a disagreement between the simulation model and the measurement. The 2nd and the 3rd harmonic components increased significantly more with the increasing clearance than in the previous cases, but all the components were notably far from the measured values.
With clearance values from 0.01 mm to 0.03 mm, a decrease or a very low increase in the 3rd harmonic response in both directions can be observed. This may be explained by the 3rd harmonic roundness component of the bearing inner ring, which was circa 16 µm (0.016 mm), creating a 'three-point support' and thus preventing the bearing excitation three times per revolution at smaller bearing clearances. Table 5 presents the bearing inner ring roundness profile amplitudes and phases used as inputs for the simulation model. In the fourth case, the amplitude of the fourth harmonic roundness component was intentionally increased. As a consequence, the 4H component had the largest amplitude. Table 5. Average amplitudes and phases of the two roller element paths (see Figure 5) used in Case 4 for the service end and the drive end. For the phase, the circular average was utilized. The simulation model clearly reacts to the quadrangularity with an elevated 4th harmonic response of the rotor in both the horizontal and vertical directions. The 3rd harmonic roundness component has a very low value (in the order of 3 µm), which may partly explain the attenuated 3rd harmonic responses.
Case 4-Quadrangular Bearing Inner Ring Roundness Profile in the Service End
The vertical direction 4th harmonic response is limited compared to the measurement but nevertheless presents a clear increase with increasing bearing clearance. The measured 2nd harmonic response in the vertical direction was very low, and the simulated response clearly surpasses it.
The 2nd and 3rd harmonic components in the horizontal direction suggest a clearance in the range of 0.07-0.08 mm. However, the other components present significantly divergent results. Similar to Case 3, a very low increase can be observed in the horizontal 4th harmonic component with clearance values from 0.01 mm to 0.02 mm. The explanation may be the same, now with a 'four-point support' preventing the bearing-based excitation four times per revolution at smaller bearing clearances. Table 6 presents the bearing inner ring roundness profile amplitudes and phases used as inputs for the simulation model. In the fifth case, the roundness error, and thus also the amplitude of all the roundness components, was minimized with thin steel shims. As a consequence, all the roundness components had relatively small amplitudes. Table 6. Average amplitudes and phases of the two roller element paths (see Figure 5) used in Case 5 for the service end and the drive end. For the phase, the circular average was utilized. The measured results evidently showed the lowest total response of the rotor, and the simulation model also produced the lowest responses. In the horizontal direction, the measured and simulated 2nd harmonic responses coincide already at the smallest clearance. The 3rd and 4th harmonic component responses suggest a clearance of 0.04-0.06 mm. In the vertical direction, the simulated 2nd harmonic response exceeds the measured response at a 0.02 mm clearance, and the 3rd harmonic response does not attain the measured response even at the highest clearance. The 4th harmonic response in the vertical direction suggests a clearance of 0.05-0.06 mm. The results of Case 5 show the lowest growth of the response amplitudes with respect to the bearing clearance.
Case 5-Minimized Roundness Error of the Bearing Inner Ring in the Service End
3.6. Bearing Clearance Effect on the Critical Speeds
Figure 16 depicts the average critical speeds in Cases 1 to 5 with respect to the bearing clearance. The average critical speeds were obtained by averaging the critical speed estimates derived from the subcritical harmonic resonance frequencies. For example, multiplying the 2H resonance frequency by two, the 3H resonance frequency by three, and the 4H resonance frequency by four, and finally taking the average, yielded the average critical speed. The critical speeds presented in Figure 16 show a clearly decreasing trend with increasing bearing clearance, especially in the horizontal direction. In the vertical direction, the critical speed remains nearly the same in Cases 1 to 5. A comparison of the horizontal direction critical speed between the measurement (21.6 Hz) and the simulation (Figure 16, left) suggests that the bearing clearance was circa 0.07-0.09 mm.
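For reference, the averaging described above amounts to the following small calculation; the input frequencies are hypothetical values read from a waterfall plot, not results of the study.

```python
def average_critical_speed(f2h, f3h, f4h):
    """Estimate the critical speed from subcritical resonance frequencies (in Hz)."""
    return (2.0 * f2h + 3.0 * f3h + 4.0 * f4h) / 3.0

# Hypothetical subcritical resonance frequencies picked from a waterfall plot
print(f"{average_critical_speed(10.7, 7.1, 5.35):.1f} Hz")
```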
Discussion
The present study introduces a rotor-bearing system simulation model, which takes the bearing clearance into account and is able to emulate the bearing inner ring waviness. The results suggest that the simulation model output reacts clearly to the varying bearing inner ring roundness profile and the bearing clearance. The frequencies of the subcritical resonance peaks were captured accurately compared to a measurement case used for validating, suggesting that the simulation model is able to predict the critical speeds of the rotor system in horizontal and vertical directions. Separated critical speed suggests a clear difference in the foundation stiffnesses of the horizontal and vertical directions. The response amplitudes of the subcritical resonance peaks were investigated in five different cases with varying bearing clearance. The accuracy of the amplitude capture was found reasonable, despite the fact that variations in different cases were detected in comparison against the validating measurement case.
Results Compared to the Validating Measured Rotor System
Both the simulation model and the measurements suggest that the rotor system has a different stiffness in horizontal and vertical directions. This can be observed from the natural frequency estimates calculated from the subcritical resonance peak frequencies (see Figure 16) and generally lower amplitudes in the vertical direction. Consequently, the structure of the test bench provides a stiffer foundation in the vertical direction, and, since the simulation model mimicked the measurement test bench, the phenomenon was successfully seen in the simulation results.
In Case 1, the simulated and measured responses were very well aligned in horizontal and vertical directions. In addition, the measured and simulated results coincide close to the 50-70 µm bearing clearance (except vertical 2nd and 3rd harmonic responses). This suggests that 60 µm bearing clearance at both the drive end and service end bearings results in an almost similar behavior that was measured.
In Case 2, the increase of the 2nd harmonic amplitude was clearly captured in the simulation results in both horizontal and vertical directions. Compared to the measurements, the simulated 4th harmonic resonance response had higher responses than the measured values. In the vertical direction, the simulated 3rd harmonic responses were lower than the measured values.
In the horizontal direction of Case 3, the responses obtained with the simulation model were well aligned with the measured values. However, in the vertical direction, the measured values showed substantially higher responses compared to the other four cases, being even higher than the horizontal direction responses. Compared to the simulation, the responses were off by a factor of three. In general, the Case 3 vertical responses were significantly higher compared to the vertical direction amplitudes in the other four cases. The average waviness component amplitudes of the bearing roundness profile (2nd 6.74 µm, 3rd 16.19 µm and 4th 4.87 µm) were at a similar range compared to the other cases. The measurement result in this particular case might be erroneous or 3rd harmonic waviness consequently results in a dynamic phenomenon, which was not captured using the simulation model.
In Cases 3 and 4 ( Figures 13 and 14), very limited responses for 3rd and 4th harmonic components were obtained with small bearing clearances. The authors suggest that, because the waviness amplitudes are larger than the bearing clearance, there is no room for 3X and 4X excitations and thus the response is attenuated.
In Case 4, the 2nd and 4th harmonic components showed the highest responses, even though the 4th harmonic bearing roundness profile component had clearly the highest input amplitude. In addition, in most other cases, the 2nd harmonic vibration resonance remained dominant. There are many other sources for 2X excitation as well, such as bending stiffness variation of the tubular rotor, which partially provides an explanation for this phenomenon.
In Case 5, the simulated 2nd harmonic response in the horizontal and vertical directions reached the highest responses already at low clearance values. In the horizontal direction, 3rd and both horizontal and vertical direction 4th harmonics reach equal responses compared to the measured case approximately at a 60 µm clearance range.
Uncertainties in Capturing the Peak Responses
The accuracy of capturing the response peaks in the simulation and in the measured validation case relates to the sampling rate and the resolution of the rotational frequency steps of the rotor, as the resonance peak occurs at a certain frequency. A coarse resolution results in missing the highest values of the response peaks. In the measurements, a rotational speed increment of 0.2 Hz was used. This was identified as one of the sources of uncertainty in the results, consequently leading to lower resonance responses. In contrast, a 0.05 Hz rotational speed increment was used in the simulation, which resulted in a relatively low uncertainty in capturing the highest response peaks. Increasing the resolution in future studies is expected to reduce the uncertainty and to increase the quality of the resulting data sets.
Foundation stiffness has a significant impact on critical speeds, as suggested by the results of the present study as well. The horizontal direction foundation stiffness was approximately one-tenth of the vertical direction stiffness, resulting in critical speeds of 21.6 Hz and 30.0 Hz in horizontal and vertical directions, respectively.
Limitations and Further Research
In the simulation model, the bearing clearance was varied simultaneously for the drive end and the service end. However, in the actual system, the clearances were not identified or controlled, which could be considered in future studies. The bearing clearance was also changing due to different case studies since the bearing was disassembled and assembled multiple times during the measurement procedure. In future considerations, an accurate method to measure the bearing clearance should be investigated to reduce the uncertainty of the input parameters to the simulation model. In addition, the bearing model may have to be improved to include the flexibility of the outer race to provide an accurate bearing excitation mechanism and thus capture dynamic responses more accurately.
In the vertical direction, the responses obtained via simulation are generally lower than what was obtained with measurements in all cases. This suggests that the bearing outer race roundness error could affect the responses. In the simulation model, the assumption of a rigid bearing outer race is used, and thus the flexibility of the outer race is neglected. The flexibility and roundness profile of the outer race are topics of further research and could explain the lower responses in the vertical direction compared to the measured ones.
Practical Considerations
The rotor system simulation model, including a rotor modeled using FEM combined with a spherical roller element bearing model, resulted in behavior similar to that observed in the validating measurement case. Figure 17 depicts the harmonic response components in the horizontal and vertical directions and their amplitudes in Cases 1 to 5. On the left are the measured responses and on the right the simulated responses at the nominal bearing clearance (60 µm). It compares the 3rd and 4th harmonic responses of the rotor to the well-known 2nd harmonic component for all the cases and, as hypothesized in the research, shows that those components make a significant contribution to the rotor response. The bearing inner ring waviness was observed to have a notable effect on the rotor subcritical behavior, as was discussed by Viitala et al. [18], and this is now confirmed also by the present simulation study. As similar behavior could be replicated with the simulation model, it suggests that the bearing excitations and the 3rd and 4th harmonic vibration resonances, in addition to the industrially well-known 2nd harmonic (half-critical) resonance, could be considered already in the design phase of large flexible rotors. In addition, it was pointed out and confirmed by Sopanen and Mikkola [12] and Harsha [20] that the rolling element bearing clearance has a great effect on the observed responses. For example, in Case 2 in the horizontal direction, the second subharmonic response with a 10 µm bearing clearance was circa 120 µm and with a 120 µm bearing clearance over 540 µm, indicating a difference of a factor of 4.5. In practical engineering, this result underlines the importance of assembling the bearings with the correct clearance, or of selecting a bearing with a small clearance.
Another practical way of using the simulation model is generating teaching data for machine learning. The simulation model could be used for computing tens of thousands of different scenarios, e.g., by varying the bearing clearance, material properties, rotor design and foundation stiffness components. Providing a similar data set through measurements is hardly possible and would require an unavailable amount of time and resources. The resulting data-based model could be used as a proxy to the simulation model to predict rotor behavior in various states with low computational effort and time. In addition, various kinds of virtual sensors could be created using the simulation model to generate the virtual measurement data.
Conclusions
The present study introduces a novel rotor-bearing system simulation approach, which takes the bearing clearance and the bearing waviness carefully into account and investigates their effect on the subcritical rotor response. The simulation model was validated with measured responses of a large-scale flexible rotor system, where different roundness profiles of the supporting bearing inner ring were introduced. The measurement results confirm the simulation results, which show that the bearing inner ring roundness errors have a great impact on the dynamic behavior of the rotor. The bearing clearance was identified to have a great effect on the subcritical resonance response peaks, and thus it should be carefully considered when assembling bearings. The rotor response at the mid-span increased three-fold when the bearing clearance was increased within the defined realistic clearance range. This kind of behavior consequently increases the wear of the rotor-bearing system components and the need for maintenance, and decreases the quality of the end product in industries such as paper, steel, and non-ferrous metal manufacturing, in which the end product is manipulated, formed, and transferred by the surface of such rotors. In addition, the 3rd and 4th harmonic resonance components were found to have notable responses, which suggests considering them already in the design phase in addition to the industrially well-known half-critical (2nd harmonic) response. | v2
2022-05-15T15:19:11.689Z | 2022-05-13T00:00:00.000Z | 248785089 | s2orc/train | A Fast Maritime Target Identification Algorithm for Offshore Ship Detection
: The early warning monitoring capability of a ship detection algorithm is significant for jurisdictional territorial waters and plays a key role in safeguarding the national maritime strategic rights and interests. In this paper, a Fast Maritime Target Identification algorithm, FMTI, is proposed to identify maritime targets rapidly. The FMTI adopts a Single Feature Map Fusion architecture as its encoder, thereby improving its detection performance for varying scales of ship targets, from tiny-scale targets to large-scale targets. The FMTI algorithm has a decent detection accuracy and computing power, according to the mean average precision (mAP) and floating-point operations (FLOPs). The FMTI algorithm is 7% more accurate than YOLOF for the mAP measure, and FMTI’s FLOPs is equal to 98.016 G. The FMTI can serve the demands of marine vessel identification while also guiding the creation of supplemental judgments for maritime surveillance, offshore military defense, and active warning.
Introduction
The maritime environment and international environment are becoming increasingly complex and changing, resulting in a rise in the type of vessels. The diversification of maritime targets, including warships, fishing vessels, cargo ships, etc., presents a great challenge for effective territorial water management [1]. Automatic ship detection algorithms are essential for effective territorial water surveillance. In a variety of sea conditions, this algorithm is capable of reliably recognizing arriving and leaving vessels. Currently, China's marine surveillance monitoring methods are divided into two groups based on how the information is obtained-active and passive approaches. The active methods get information from radar, video surveillance, remote sensing, underwater sonar, etc. On the other hand, the passive approach is usually used for the ship's automated identifying system.
Recently, deep learning technology has made great progress in various fields. In particular, Convolutional Neural Networks (CNN) have made outstanding contributions in many areas [2], including image classification and recognition, video recognition, etc. The traditional approach to feature extraction required manual work, and because different types of targets depend on different features, manual extraction is ineffective. A CNN [3] can extract relevant features automatically, saving the time spent on hand-crafted features. Neural networks, along with improvements in big data technology, have led to a shift in maritime target identification from inefficient manual monitoring methods to automatic deep learning identification methods. Advances in artificial intelligence are pushing the field of computer vision forward and providing practical answers to the problem of recognizing objects at sea. Many CNN-based recognition models have emerged, e.g., SSD [4], Faster RCNN [5], and YOLO [6]. There are two types of target detection algorithms: one-stage and two-stage methods. The end-to-end concept is used in the one-stage method; after feature extraction, the network directly outputs the target class probability together with the predicted location coordinates, resulting in faster detection. SSD and YOLO are typical representatives. The two-stage procedure requires the identification of probable detection regions first and then the classification of the objects. Representative two-stage deep network models, e.g., the RCNN series [7,8] and SPPNet [9], have higher recognition accuracy and are less sensitive to factors such as perspective, light, and occlusion.
The main contributions of the article are as follows: (1) A novel detection algorithm, FMTI, is proposed for maritime target detection. (2) A multiscale feature fusion method is proposed to enrich the information of a single map. (3) FMTI can offer essential references for marine-related government functions to make their decisions.
The rest of this paper is organized as follows. Section 2 shows the related work. Section 3 demonstrates the methodology and the implementation details of our FMTI algorithm. The performance metrics and the experimental results are presented in Section 4. Finally, Section 5 concludes the work and proposes future works.
Related Work
Maritime rights and interests have once again become a focus of the world since the beginning of the 21st century, and the strategic position of coastal nations has been promoted like never before. Coastal countries constantly improve their ability to protect their unique marine interests and rights. They also aim to protect the coastal and marine ecosystems and further develop the marine economy.
Although technology has progressively become mature and the speed of recognition has gradually increased, the two-step algorithm fails to achieve real-time because the model is divided into multiple stages and calculates work redundantly. To resolve this problem, scholars have introduced the YOLO series [10,11], SSD, and other single-stage target detection methods; however, these approaches listed above improved the speed while reducing the accuracy.
Marine target detection work differs from land-based recognition in that maritime vehicles are constrained by waves, which impact vessel behavior [12], including six sorts of activities [13], such as surging, swaying, heaving, rolling, pitching, and yawing. The semantic data was markedly deficient. Thus, CNN-based target detection methods are available for maritime target recognition in natural scenes. CNN was constructed by several various layers, with the network being trained to understand the relationship between the data, and the model describes the mapping relationship between the input and the output data. Both traditional and deep learning target detection methods together constitute the current dominant target detection methods.
Many scholars have conducted research to obtain the best possible precision and speed in balance. Tello et al. [14] proposed a discrete wavelet transform-based method for ship detection that relied on statistical behavior differences between ships and surrounding sea regions to help in the interpretation of visual data, resulting in more reliable detection. Chang et al. [15] introduced the You Only Look Once version 2 (YOLO V2) approach to recognizing ships in SAR images with high accuracy to overcome the computationally costly accuracy problem. Chen et al. [16] used a combination of a modified Generative Adversarial Network (GAN) and a CNN-based detection method to achieve the accurate detection of small vessels. Contemporary technologies and models have limitations, such as the inability to recognize closed-range objects. Other scholars have performed outstanding work and contributed to the implementation of neural network algorithmic methods by migration to solve practical problems in different fields. Arcos-García et al. [17] analyze target detection algorithms (Faster R-CNN, R-FCN, SSD, and YOLO V2), combined with some extractors (Resnet V1 50, Inception V2, darknet-19, etc.) to improve and adapt the traffic sign detection problem through migration learning. It is worth noting that ResNet's network structure has been introduced, resulting in exceptional performance in fields including non-stationary GW signal detection [18], Magnetic Resonance Imaging identification [19], the CT image recognition [20], and agricultural image recognition [21]. Feature Enrichment Object Detection (FEOD) framework with weak segmentation Loss based on CNN is proposed by Zhang et al. [22], the Focal Loss function is introduced to improve the algorithm performance of the algorithm. Li et al. [23] proposed a new decentralized adaptive neural network control method, using RBF neural networks, to deal with unknown nonlinear functions to construct a tracking controller. A new adaptive neural network control method was proposed by Li et al. [24] for uncertain multiple-input multiple-output (MIMO) nonlinear time-lag systems. To address the problem of Slow Feature Discriminant Analysis (SFDA) which cannot fully use discriminatory power for classification, Gu et al. [25] proposed a feature extraction method called Adaptive Slow Feature Discriminant Analysis (ASFDA). A fast face detection method based on convolutional neural networks to extract Discriminative Complete Features (DCFs) was proposed by Guo et al. [26] it detaches from image pyramids for multiscale feature extraction and improves detection efficiency. Liu et al. [27] establish a multitask model based on the YOLO v3 model with Spatial Temporal Graph Convolutional Networks Long Short-Term Memory to design a framework for robot-human interaction for judgment of human intent. To resolve the real-time problem of recognition, Zheng et al. [28] introduce an attention mechanism and propose a new attention mechanism-based real-time detection method for traffic police, which is robust. Yu et al. [29] proposed a multiscale feature fusion method based on bidirectional feature fusion, named Adaptive Multiscale Feature (AMF), which improves the ability to express multiscale features in backbone networks.
Additionally, scholars have been working on meaningful improvements based on them and making assistance for further enhancement of maritime target recognition. The integrated classifier MLP-CNN was proposed by Zhang et al. [30] to exploit the complementary results of CNN based on deep spatial feature representation and MLP based on spectral recognition to compensate for the limitations of object boundary delineation and loss of details of fine spatial resolution of CNN due to the use of convolutional filters. Sharifzadeh et al. [31] proposed a neural network with a hybrid CNN and multilayer perceptron algorithm for image classification, which detected target pixels based on the statistical information of adjacent pixels, trained with real SAR images from Sentinel-1 and RADARSAT-2 satellites, and obtained good performance. For the pre-processed data, Wu et al. [32] employed a support vector machine (SVM) classifier to classify the ships by assessing the feature vectors by calculating the average of kernel density estimates, three structural features, and the average backward scattering coefficients. Tao et al. [33] proposed a segmentation-based constant false alarm rate (CFAR) detection algorithm for multi-looked intensity SAR images, which solves the problems related to the target detection accuracy of the non-uniform marine cluster environment, and the detection scheme obtains good robustness on real Radarsat-2 MLI SAR images. Meanwhile, a robust CFAR detector based on truncation statistics was proposed by Tao et al. [34] for single-and multi-intensity synthetic aperture radar data to improve the target detection performance in high-density cases. SRINIVAS et al. [35] applied a probabilistic graphical model to develop a two-stage target recognition framework that combines the advantages of different SAR image feature representations and differentially learned graphical models to improve recognition rates by experimenting on a reference moving and stationary target capture and recognition dataset.
In order to tackle the collision avoidance problem for USVs in complex scenarios, Ma et al. [36] suggested a negotiation process to accomplish successful collision avoidance for USVs in complicated conditions. Li et al. [1] suggested employing the EfficientDet model for maritime ship detection and defined simple or complex settings with a positive recognition rate in the above circumstances, which provides an important reference for maritime security defense. For USV systems with communication delays, external interference, and other issues, Ma et al. [37] suggested an event-triggered communication strategy. Additionally, an event-based switched USV control system is proposed, and the simulation results show that the proposed co-design process is effective.
Traditional target detection methods rely on the color characteristics of specific color space models and on manually designed features. Such methods are susceptible to viewing angle, illumination, etc., and involve a large volume of computation, low recognition efficiency, and slow speed, which cannot meet the requirements of detection efficiency, performance, and speed. Target detection based on deep learning brings a new trend for maritime target recognition.
The acquisition and transmission of maritime data are growing increasingly sophisticated and are becoming crucial in maritime supervision. However, at this stage of maritime regulation, active early warning technology is in urgent need of improvement. Proactive early warning requires quick and efficient detection of surrounding targets, but an unavoidable problem is that detection speed tends to drop as the algorithm accuracy rises. Therefore, to balance detection speed and accuracy, this paper adopts a deep learning technique to design the FMTI model for maritime vessel detection.
Methodology
Successful one-stage detectors adopt a Feature Pyramid Network (FPN); however, the benefit of the FPN in object detection comes mainly from its divide-and-conquer scheme for optimization rather than from multi-scale feature fusion. Based on this observation, Chen et al. [38] introduced You Only Look One-level Feature (YOLOF), in which, instead of complex feature pyramids, only a single-level feature is applied for detection, with extensive experiments on the COCO benchmark verifying the effectiveness of the model. In this paper, the YOLOF model is partially modified to fit the demands of offshore operations.
Process of FMTI
The FMTI algorithm is proposed in this paper for the detection of maritime targets, and its specific process is described below. When one or more targets are present in the image to be recognized, the network is required to make a judgment for each prediction box. The model divides the process into the following three steps.
1.
The input image is divided into a grid, and each grid cell predicts Bounding Boxes (Bbox). Each Bbox contains five features, (x, y, w, h, Score_confidence), where (x, y) is the offset of the Bbox center relative to the cell boundary, (w, h) denotes the ratio of the width and height to the whole image, and Score_confidence is the confidence score.
The confidence score is defined as Score_confidence = Pr(object) × IOU, where Pr(object) indicates whether a target exists in the cell: its value is 1 if a target exists and 0 otherwise.
The GIOU [39] is an optimization of the IOU (Figure 1A). The IOU measures the overlap of the Prediction and Ground Truth boxes, IOU = |Area(pred) ∩ Area(true)| / |Area(pred) ∪ Area(true)|, where Area(pred) denotes the area of the detection box and Area(true) denotes the area of the ground truth box.
To calculate the GIOU, it is necessary to find the smallest box that can fully cover both the Prediction box (Area(pred)) and the Ground Truth box (Area(true)), named Area(full); the GIOU is then obtained as GIOU = IOU − (Area(full) − Area(union))/Area(full), where Area(union) is the union of the two boxes. The schematic diagram is shown in Figure 1, and a small computational sketch is given after this list.
2. The second step is feature extraction and prediction. Target prediction is performed in the final fully connected layer. If a target exists, the cell gives Pr(class|object); the probability of each class over the whole network is then calculated, and the detection confidence is calculated comprehensively as Score_confidence = Pr(class|object) × Pr(object) × IOU = Pr(class) × IOU.
3. Setting the detection threshold for Score_confidence, adjusting and filtering out the boxes with scores lower than the default value. The remaining boxes are the correct detection boxes, and the final judgment results are output sequentially.
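As referenced above, a small computational sketch of the IOU and GIOU for axis-aligned boxes given as (x1, y1, x2, y2) is shown below; the example boxes are arbitrary.

```python
def iou_giou(box_a, box_b):
    """Return (IOU, GIOU) for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    # Union area
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box, i.e., Area(full) in the text above
    full = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    giou = iou - (full - union) / full
    return iou, giou

print(iou_giou((10, 10, 60, 60), (30, 30, 90, 80)))
```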
Multi-Scale Feature Fusion
Scholars have striven to find better feature fusion methods for greater robustness of the extracted information. Early target detectors used a single layer to obtain the overall semantic information of the object for making prediction judgments; for example, the last layer's output was adopted for subsequent processing in the R-CNN series.
A typical representative application of multiscale feature fusion is the FPN [40]. The multiscale information obtained from feature fusion improves the network performance for targets of different scales (including tiny targets).
YOLOF involves two key modules: a projector and residual blocks. In the projector, a 1 × 1 convolution is applied to reduce the number of parameters, and then a 3 × 3 convolution is applied to extract contextual semantic information (similar to the FPN). The residual blocks are four residual modules with different dilation rates, stacked to generate output features with multiple receptive fields, as shown in Figure 2. In the residual blocks, all convolution layers are followed by a BatchNorm layer [41] and a ReLU layer [42], whereas only convolution layers and BatchNorm layers are used in the projector. To accommodate varying target sizes, four consecutive residual units are employed to allow the integration of features with different receptive fields in a one-level feature.
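A minimal PyTorch-style sketch of the building blocks described above (a 1 × 1 projection followed by a 3 × 3 convolution, and a stack of residual units with increasing dilation rates) is given below. The channel counts, bottleneck ratio, and dilation rates are illustrative assumptions, not the exact YOLOF or FMTI configuration.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual unit whose 3x3 convolution uses a given dilation rate."""
    def __init__(self, channels, mid_channels, dilation):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1), nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, 1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # shortcut keeps the original single-level feature

class SingleLevelEncoder(nn.Module):
    """Projector (1x1 then 3x3) followed by stacked dilated residual units."""
    def __init__(self, in_channels=2048, channels=512, dilations=(2, 4, 6, 8)):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Conv2d(in_channels, channels, 1), nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.blocks = nn.Sequential(
            *[DilatedResidualBlock(channels, channels // 4, d) for d in dilations]
        )

    def forward(self, c5):
        return self.blocks(self.projector(c5))

# Example: a C5-like feature map from a ResNet-50 backbone (batch 1, stride 32)
feature = torch.randn(1, 2048, 20, 20)
print(SingleLevelEncoder()(feature).shape)   # torch.Size([1, 512, 20, 20])
```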
In Figure 3, L1-L5 is generated on the backbone paths with feature maps containing different scale information, path-1 integrates the results of L1-L4 and L5. The results of path-1 produce the final outcome P5. Remarkably, it ignores the preprocessing. In practice, the use of ReLU in a backbone network may result in the loss of information about the destination. This study tries to employ Meta-ACON [43] (refer to Section 3.3), which is employed in the backbone network to learn to activate or inactivate automatically. An encoder called Single Feature Map Fusion (SFMF) is presented here because it has been developed as the key component of the detector, distinguished from a feature pyramid based on multiple maps. It was obtained from the optimizations of YOLOF, to design the featured fusion components upon a single feature layer [38]. By the residual module, the YOLOF encoder obtains semantic information on multiple scales.
In Figure 3, L1-L5 are generated along the backbone path as feature maps containing information at different scales; path-1 integrates the results of L1-L4 with L5, and the result of path-1 produces the final output P5. Remarkably, this scheme ignores the preprocessing. In practice, the use of ReLU in a backbone network may result in the loss of target information. This study therefore employs Meta-ACON [43] (refer to Section 3.3) in the backbone network to learn to activate or deactivate automatically. Preliminary validation of the fusion path method was tested on the 2007-COCO dataset, and the results are shown in Table 1. The consideration of additional channels and whether to use shortcuts is shown in Table 2. The shortcut retains the original information and covers targets of all scales in YOLOF, while the SFMF retains the lower-scale information for subsequent fusion. The results indicate that the SFMF can produce better results with shortcuts.
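The single-map fusion idea sketched in Figure 3 (resampling the shallower feature maps L1-L4 to the resolution of the deepest map and merging them, optionally with a shortcut) could be written, for illustration only, along the following lines; it assumes all levels have already been projected to a common channel count, which is a simplification.

```python
import torch
import torch.nn.functional as F

def fuse_to_single_map(features, use_shortcut=True):
    """Fuse multi-level feature maps into the resolution of the deepest one.

    features : list [L1, L2, L3, L4, L5] of tensors with identical channel counts,
               ordered from shallow (high resolution) to deep (low resolution).
    """
    deepest = features[-1]
    target_size = deepest.shape[-2:]
    fused = torch.zeros_like(deepest)
    for level in features[:-1]:
        # Resample each shallower map to the deepest map's spatial size and accumulate
        fused = fused + F.interpolate(level, size=target_size, mode="bilinear", align_corners=False)
    if use_shortcut:
        fused = fused + deepest    # shortcut retains the original deepest-level information
    return fused

# Example: five levels with decreasing resolution and a shared channel count of 256
levels = [torch.randn(1, 256, s, s) for s in (160, 80, 40, 20, 10)]
print(fuse_to_single_map(levels).shape)   # torch.Size([1, 256, 10, 10])
```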
Activation and Loss Function
The most common nonlinear functions, such as Sigmoid and ReLU, are employed to activate the outputs in deep learning. Ma [43] proposed the novel Meta-ACON to learn automatically whether to activate the output. This activation function uses a smoothed maximum to approximate the extremum. Considering the standard maximum function max(x_1, . . . , x_n) of n values, its smooth and differentiable approximation is given in Equation (6) as S_β(x_1, . . . , x_n) = Σ_i x_i e^(β x_i) / Σ_i e^(β x_i), where x represents the input and β is the switching factor.
Additionally, the switching factor β is learned adaptively from the input feature map. The loss functions are categorized into classification loss and regression loss. The classification [46] is optimized via the focal loss (FL) in the one-stage detector. The focal loss calculates the cross-entropy loss of the predicted outcomes for all non-ignored categories. The loss function serves to evaluate the difference between the predicted and actual values of the model; the smaller the loss, the better the model performance. This work follows the original settings in YOLOF, e.g., FL and GIOU.
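For reference, the focal loss down-weights easy examples by the modulating factor (1 - p_t)^gamma. A minimal NumPy version for binary labels is sketched below with commonly used, but here merely assumed, parameter values alpha = 0.25 and gamma = 2.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-9):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted foreground probabilities in (0, 1)
    y : ground-truth labels, 1 for positive and 0 for negative
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)             # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confident correct prediction contributes much less than a confident mistake
print(focal_loss(np.array([0.9, 0.1]), np.array([1, 1])))
```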
Dataset Composition
Currently, most datasets are designed for land targets, and open datasets of maritime target images are scarce because maritime targets differ greatly from land targets. In this paper, typical maritime ship targets are divided into five types: passenger ships, container ships, bulk carriers, sailboats, and other ships. It is worth noting that islands can be accurately judged by the model, so their boxes are hidden in order to keep the display tidy.
The images in the dataset were augmented to minimize overfitting of the model and to improve detection accuracy. How can the overfitting issue be addressed? The most efficient method is to enlarge the dataset. The purpose of supplementing the data is to allow the model to encounter more 'exceptions', enabling it to constantly correct itself and provide better results. This is usually accomplished either by gathering or enhancing more of the initial data from the source, or by copying the original data and adding random disturbances or faulty data, which account for 3% of the total in this study. To improve the model's generalization and practical applicability by enriching the dataset, a selection of real-world ship images was obtained from open-source networks to supplement the dataset. Horizontal and vertical flipping, random rotation, random scaling, random cropping, and random expansion are all common augmentation procedures. It is worth noting that detailed annotation of the dataset is necessary, although this is a time-consuming and complicated operation. There are 4267 images in total in the dataset, with 20% designated for the test set and the rest for the training set. On COCO, the batch size is set to 48, the learning rate to 0.06, and the maximum number of iterations to 8 k; the parameters in YOLOF are used for supplemental choices, such as FL and GIOU. For the self-built dataset, a batch size of 24 and a learning rate of 0.03 are recommended. For debugging purposes, based on the authors' experience, a batch size of 8 per GPU can be used.
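One of the augmentation operations listed above, horizontal flipping, must also remap the bounding boxes. A self-contained sketch with made-up boxes is shown below; it is not the augmentation code used for the dataset.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image and its bounding boxes.

    image : array of shape (H, W, C)
    boxes : array of shape (N, 4) with rows (x1, y1, x2, y2) in pixel coordinates
    """
    height, width = image.shape[:2]
    flipped = image[:, ::-1, :].copy()
    flipped_boxes = boxes.copy().astype(float)
    flipped_boxes[:, [0, 2]] = width - boxes[:, [2, 0]]   # swap and mirror x-coordinates
    return flipped, flipped_boxes

# Hypothetical 480x640 image with one ship box
img = np.zeros((480, 640, 3), dtype=np.uint8)
box = np.array([[100, 200, 300, 260]])
_, new_box = hflip_with_boxes(img, box)
print(new_box)   # [[340. 200. 540. 260.]]
```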
Establishment of Computer Platform
The experimental platform includes the following components. An Intel(R) Xeon(R) Gold 6130 CPU @ 2.10 GHz, three NVIDIA TITAN RTX 24 GB GPUs, ResNet-50 as the basic algorithm framework, Python 3.7.0 as the programming language, OpenCV 4.5 as the graphics processing tool, and Detectron2 from Facebook as the training framework, as shown in Table 3.
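Since training was run in Detectron2 (see Table 3), the reported solver settings can be expressed roughly as the following configuration sketch. The dataset names are hypothetical placeholders, and the YOLOF/FMTI model definition itself is not part of stock Detectron2, so only the generic solver keys are shown.

from detectron2.config import get_cfg

cfg = get_cfg()
# dataset split described in the text (hypothetical registered names)
cfg.DATASETS.TRAIN = ("ship_train",)
cfg.DATASETS.TEST = ("ship_test",)
# solver settings reported for the self-built dataset
cfg.SOLVER.IMS_PER_BATCH = 24        # batch size 24 (48 with LR 0.06 was used for COCO)
cfg.SOLVER.BASE_LR = 0.03
cfg.SOLVER.MAX_ITER = 8000           # "8 k" iterations reported for COCO, used as a placeholder
cfg.MODEL.WEIGHTS = ""               # path to pretrained ResNet-50 weights would go here
print(cfg.SOLVER.IMS_PER_BATCH, cfg.SOLVER.BASE_LR, cfg.SOLVER.MAX_ITER)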
Evaluation Indexes
In this paper, the indexes frames per second (FPS), mAP, and FLOPs are used to evaluate the overall performance of the detection results. C_TP denotes the number of ships classified as true positives. Precision is defined as precision = C_TP / (all detections), and the recall rate as recall = C_TP / (all ground truths). Typically, the higher the recall, the lower the precision, and vice versa. AP combines the different precision and recall values and reflects the overall performance of the model as the area under the precision-recall curve, AP = ∫_0^1 P(R) dR, and the mean average precision (mAP) denotes the average of the AP values over all categories, mAP = (1/N) Σ_{i=1}^{N} AP_i. Additionally, the number of floating-point operations (FLOPs) is used to measure the computational demand of a model.
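A small NumPy sketch of how precision, recall, and AP can be computed from scored detections that have already been matched to the ground truth is given below; it illustrates the definitions above and is not the COCO evaluation code used for the tables.

import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP as the area under the precision-recall curve for one class.

    scores : confidence of each detection
    is_tp  : 1 if the detection matched a ground-truth box, else 0
    num_gt : number of ground-truth boxes of this class
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # step-wise integration of precision over recall
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# toy example: 4 detections, 3 ground-truth ships; mAP is the mean of AP over classes
print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], num_gt=3))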
Results Analysis
The results of target recognition by the model are shown in Figure 4. A target detector on a ship with good performance can provide maritime authorities with an objective reference for data visualization and reduce the ship's collision risk due to human negligence.
For the first image of Figure 4A, the far ship targets were not labeled in detail at the beginning of the experiment, which led to an 'accident' of apparently erroneous recognition: the FMTI algorithm was so accurate that it surpassed the labeling, i.e., the number of targets identified successfully was greater than the number labeled manually. Similarly, the hull pieces in the second image are partially overlapping but can still be distinguished. In the third photo, the ships are separated, which allowed for the best recognition.
The FMTI algorithm performs well not only on multi-target tasks but also on simple or single-target tasks, as in Figure 4B.
In particular, ResNet-101 [47] was introduced as a backbone network for a cross-sectional comparison of models, denoted by Res101. The data in Table 4 are rounded; however, this does not affect the overall assessment. Unavailable or useless data are indicated by /. Table 4 was obtained on the 2017 COCO validation set, and Table 5 was acquired from the self-built dataset; all data were generated on identically equipped devices. Following a comprehensive analysis of Table 4, the FMTI and YOLOF models were chosen to be applied to the self-built dataset, and the results are given in Table 5 (score confidence = 0.5). From Table 4, we obtained 37 percent mAP (YOLOF + SFMF) and 36 percent mAP (YOLOF (Res101)), respectively. FMTI achieves more than a 0.7 percent mAP improvement over the baselines (YOLOF + SFMF or YOLOF (Res101) + SFMF) and outperforms YOLOF (Res101) by more than 1.7 percent mAP. Furthermore, YOLOF reached 37 percent mAP, an increase of one percent over the YOLOF (Res101) mAP. In terms of mAP, FMTI exceeds YOLOF and the other models, although it has a slightly lower FPS than YOLOF, which does not noticeably affect the processing performance of the FMTI model.
The results are clearly shown in Table 5. It is worth highlighting that the improvement in mAP is over 7%, which is significant. The computational demand increased in parallel with the mAP gain: models are frequently improved at the cost of additional memory, and the accompanying increase in model parameters is normal and remains within acceptable ranges.
More particularly, when the FMTI algorithm is applied to maritime monitoring to proactively provide early warnings of potential danger signals in offshore areas, the probability of maritime accidents can be reduced. The FMTI model proposed in this paper is applicable to maritime target detection and also has broad application prospects in maritime rescue, maritime traffic monitoring, and maritime battlefield situational awareness and assessment.
Conclusions
This paper presented an encoder, referred to as SFMF, which enables multi-scale feature fusion on a single map. A cross-sectional assessment of the different component compositions was conducted prior to the experimental application of the model choices, and the YOLOF model was then selected for comparison with the FMTI model. Although the FMTI model had a slightly lower FPS than the YOLOF model, it delivered higher mAP on the COCO dataset at the cost of somewhat more computation, so the two models were chosen for the subsequent experimental comparison. Combining speed and processing power, the FMTI algorithm outperformed YOLOF on the marine ship detection data, so it has potential for future applications.
The FMTI algorithm could offer technical support in the areas of smart coastal transit, naval defense, and smart maritime construction. It could be deployed on video surveillance equipment to detect offshore ships, to monitor ships entering and departing ports, and to support illegal-fishing enforcement or military defense by recognizing and pre-warning of dangerous boats along the shoreline.
However, most of the images in the training dataset were captured under good weather conditions, so further studies are still needed to ensure better performance of the model. Future work will focus on increasing the diversity of the test samples. | v2
2018-05-01T13:07:17.179Z | 2018-05-01T00:00:00.000Z | 14011619 | s2orc/train | Detection of DNA Double Strand Breaks by γH2AX Does Not Result in 53bp1 Recruitment in Mouse Retinal Tissues
Gene editing is an attractive potential treatment of inherited retinopathies. However, it often relies on endogenous DNA repair. Retinal DNA repair is incompletely characterized in humans and animal models. We investigated recruitment of the double stranded break (DSB) repair complex of γH2AX and 53bp1 in both developing and mature mouse neuroretinas. We evaluated the immunofluorescent retinal expression of these proteins during development (P07-P30) in normal and retinal degeneration models, as well as in potassium bromate induced DSB repair in normal adult (3 months) retinal explants. The two murine retinopathy models used had different mutations in Pde6b: the severe rd1 and the milder rd10 models. Compared to normal adult retina, we found increased numbers of γH2AX positive foci in all retinal neurons of the developing retina in both model and control retinas, as well as in wild type untreated retinal explant cultures. In contrast, the 53bp1 staining of the retina differed both in amount and character between cell types at all ages and in all model systems. There was strong pan nuclear staining in ganglion, amacrine, and horizontal cells, and cone photoreceptors, which was attenuated. Rod photoreceptors did not stain unequivocally. In all samples, 53bp1 stained foci only rarely occurred. Co-localization of 53bp1 and γH2AX staining was a very rare event (< 1% of γH2AX foci in the ONL and < 3% in the INL), suggesting the potential for alternate DSB sensing and repair proteins in the murine retina. At a minimum, murine retinal DSB repair does not appear to follow canonical pathways, and our findings suggests further investigation is warranted.
INTRODUCTION
Inherited retinal dystrophies are disorders which lead to visual impairment and in severe forms, to blindness, and have an estimated prevalence of 1 in 4,000 (Berger et al., 2010). Because it is immuno-privileged tissue that is easily accessed for therapeutic interventions, the eye is both target for use of gene and cell therapies for inherited disorders, and is also a paradigmatic model system for the development of these approaches for inherited disorders affecting other tissues. While gene replacement therapies can be very efficient for small genes that fit into viral vectors for gene delivery (Bennett, 2017), the advent of genomic editing technologies such as CRISPR-Cas9 has opened new possibilities to target even larger genes at endogenous genomic loci (Maeder and Gersbach, 2016).
The idea of using genome editing to repair disease-causing mutations is comparatively young, and relies on highly specific endonucleases and the capacity of the cell to repair double-strand breaks (DSBs) (Carroll, 2008;Carroll and Beumer, 2014;Gaj et al., 2016;Suzuki et al., 2016). This happens either through the error-prone non-homologous end-joining (NHEJ) pathway or, with high fidelity, through homology-directed repair (HDR) in the presence of a DNA donor template (Jasin and Haber, 2016). An alternative repair pathway called microhomology-mediated end joining (MMEJ) has recently been discovered (Truong et al., 2013).
The predominant DSB repair pathway in mitotic cells at all cell-cycle stages is NHEJ, whereas HDR and MMEJ are only active during the G2 and G1 phases of the cell cycle, respectively (Sakuma et al., 2016). Although robust data exist on the complexity of DNA repair mechanisms in dividing cells in vitro, almost nothing is known about post-mitotic neurons such as photoreceptors (PRs). In these cell types the system appears to differ: in mouse rod PRs, DSBs induced by radiation are insufficiently repaired as measured by quantification of repair foci over time (Frohns et al., 2014). This deficit may be associated with changes to the nuclear architecture, as mouse rod PRs exhibit an inverted arrangement of chromatin which is related to the nocturnal nature of rodents and is thought to optimize light perception in the dark (Solovei et al., 2009). In addition, DNA repair activity is also higher in the developing vs. adult mouse retina (Frohns et al., 2014).
Based on in vitro data, DNA damage sensing happens through the binding of H2AX to the DSB site and its subsequent phosphorylation (γH2AX), resulting in the recruitment of checkpoint factors to the DSB (Yuan et al., 2010; Georgoulis et al., 2017). The ubiquitinylation of H2A and H2AX then triggers the further binding by early phase DNA repair proteins, amongst which is p53 binding protein 1 (53bp1), which plays an important role in the repair pathway proceeding by NHEJ. Presence of 53bp1 at the DSB normally results in the recruitment of proteins that are part of the NHEJ pathway and in the inhibition of BRCA1 activity, which is involved in the HDR pathway (Ward et al., 2003; Ginjala et al., 2011). The occurrence of γH2AX and 53bp1 at the DSBs results in so-called repair foci (Manis et al., 2004), which can be detected by colocalized antibody binding (Rogakou et al., 1999; Frohns et al., 2014).
The aim of this study is to determine the cell-specific recruitment of γH2AX and 53bp1 in rod and cone photoreceptors in different mouse model systems in order to shed further light on the capacity of the retina to enable genome editing. We also included retinal organ culture in our study as another model for degeneration in the retina. Many alterations observed during in vitro retina culturing resemble some characteristics of experimental retinal detachment and diabetic retinopathy in vivo, Abbreviations: DSBs, double strand breaks; 53bp1, p53 binding protein 1; γH2AX, phosphorylated form of H2AX; PRs, photoreceptors. respectively (Fisher et al., 2005;Valdés et al., 2016). We detected regular γH2AX repair foci in wildtype as well as degenerating retinae, but it appears that 53bp1 occurrence at the repair foci is not the normal downstream process, indicating cell specific characteristics in these highly-specialized cells.
Experimental Design
Overall, 15 adult animals and 16 immature mice were included in this study comprising both females and males. At least three eyes from different individuals were used for histologic analysis of each explant culture period. Immature rd1 and C3H mice were investigated at ages post-natal day (p) 07, p13, p21, p28. Immature rd10 and C57BL/6J were investigated at p14, p18, p24 and p30. At least two immunostaining procedures were performed for each antibody and each investigated mouse tissue of the respective mouse lines and ages. For microscopic analysis, 2-3 sections of each immunostaining per respective mouse line and age were analyzed.
Animal Handling and Ethics Statement
Three and nine months old wild type C57BL/6J mice (Jackson stock # 000664, Charles River, Germany) were used in this study. In addition, immature rd1 mice (RRID:MGI:3803328), rd10 mice (RRID:MGI:3581193) and their respective wildtype strains C3H and C57BL/6J were subjects in this study and were provided by François Paquet-Durand from Tübingen. Animals were housed and bred in the animal facility of the University of Giessen under standard white cyclic lighting, had free access to food and water. Day of birth was considered as post-natal day 0. All procedures concerning animal handling and sacrificing complied with the European legislation of Health Principles of Laboratory Animal Care in accordance with the ARVO statement for the Use of Animals in Ophthalmic and Vision Research. The protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the Justus-Liebig-University (TV-No M_474). All efforts were made to minimize the number of animals used and pain and distress.
Preparation of Organotypic Retina Culture
Three and nine month old C57BL/6J retinae were used to generate organotypic retina culture as described previously (Müller et al., 2017). In brief, explanted retinae were cultured on track etched polycarbonate membrane (TEPC), pore size 0.4 µm and 30 mm in diameter (#35006, Laboglob.com GmbH, Germany) with the photoreceptor layer facing the supporting membrane. Inserts were put into six-well culture plates and incubated in complete medium with supplements at 37 • C. Every second day the full volume of complete medium, 1.2 ml per well, was replaced with fresh medium. The culture period was ended by immediate fixation in 4% paraformaldehyde in phosphate buffered saline (PBS) for 45 min at time points ranging from 2 to 10 days. Fresh, i.e., un-cultured retinas were used as controls.
KBrO 3 Incubation Procedure (Positive Control for DNA DSBs)
To initiate DNA DSBs, whole adult C57BL/6J mouse retina was dissected in Hanks' Balanced Saline Solution (GIBCO R HBSS; #14025076, Thermo Fisher, Germany), transferred into a small Petri dish (30 mm in diameter), and incubated in 1.5 mM KBrO 3 (#4396, Roth, Germany) for 1.5 h at 37 • C. Organotypic retina cultures of 2, 4, 6, and 8 days were treated likewise. Working solution of KBrO 3 was prepared with pure water. Respective control retinae were treated with plain water only. Subsequently, all treated retinal tissues were briefly rinsed in HBSS and immediately fixed in 4% paraformaldehyde in PBS at room temperature for 45 min. After washing in PBS, all treated retinal tissue was cryo-protected in graded sucrose solutions (10, 20, and 30% in PBS), frozen, cut, and immunostained with γH2AX antibodies (see next section).
Tissue Processing and Immunohistochemistry
For frozen sectioning, immature and adult mouse eyecups and retinal explants were treated as described previously (Müller et al., 2017). Immunostaining was performed employing the two-step indirect method. Prior to 53bp1 antibody staining, antigen retrieval was performed (Eberhart et al., 2012). Sections were incubated at room temperature overnight in primary antibodies (see Table 1). Immunofluorescence was performed using Alexa Fluor 488-conjugated secondary antibodies (#21202, Thermo Fisher Scientific, Germany) or Alexa Fluor 594 (#21207, Thermo Fisher Scientific, Germany).
Laser Scanning Confocal Microscopy
Confocal images were taken using an Olympus FV10i confocal microscope, equipped with Argon and HeNe lasers. High-resolution scanning of image stacks was performed with a UPlanSApo x60/1.35 (Olympus) oil immersion objective at 1,024 × 1,024 pixels and a z-axis increment of 0.3 µm. For analysis of immunolabeled cells and their processes, a stack of 2-12 sections was taken (0.7-µm z-axis step size). Cell processes were reconstructed by collapsing the stacks into a single plane. Brightness and contrast of the final images were adjusted using Adobe Photoshop CS5 (San Jose, CA).
Quantification of γH2AX and 53bp1 Immunoreactive Foci
Quantification of γH2AX and 53bp1 immunoreactive foci was performed on vertical frozen sections of developing retina of rd1, rd10 mice and their respective wildtype C3H and C57BL/6J. An area of 2320 µm 2 was defined in each image in the outer nuclear layer (ONL) and in the inner nuclear layer (INL), and all γH2AX and 53bp1 immunoreactive foci within that square were counted. Only central retinal regions were analyzed. Foci co-localizing 53bp1 and γH2AX immunoreactivity were given as percentage of all γH2AX immunoreactive foci in the respective field. At least three micrographs per time point and mouse line were analyzed. Image stacks 2 µm in depth were taken.
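The foci counts and co-localization percentages described above lend themselves to a simple automated check. The NumPy/SciPy sketch below is only an illustration of such a computation; the thresholds, image sizes, and random stand-in images are assumptions, and the counts reported in this study were obtained from the confocal micrographs as described.

import numpy as np
from scipy import ndimage

def count_foci(channel, threshold):
    """Label connected bright regions (putative foci) above an intensity threshold."""
    mask = channel > threshold
    labels, n = ndimage.label(mask)
    return labels, n

def colocalization_percentage(gh2ax, p53bp1, thr_g=200, thr_p=200):
    """Percentage of gamma-H2AX foci that overlap at least one 53bp1-positive region."""
    g_labels, n_g = count_foci(gh2ax, thr_g)
    p_mask = p53bp1 > thr_p
    if n_g == 0:
        return 0.0
    overlapping = sum(
        1 for i in range(1, n_g + 1) if np.any(p_mask[g_labels == i])
    )
    return 100.0 * overlapping / n_g

# toy 8-bit images standing in for the two fluorescence channels of one field
rng = np.random.default_rng(0)
gh2ax = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
p53bp1 = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
print(colocalization_percentage(gh2ax, p53bp1))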
Statistical Analysis
Statistical comparisons among different experimental groups were made using a two-tailed Student's t-test and SigmaPlot 12 software. Error bars indicate SD.
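The comparison described above can be reproduced with a two-tailed Student's t-test in Python; the foci counts below are placeholder values for illustration only, not data from this study.

from scipy import stats

# hypothetical foci counts per field for two experimental groups (placeholders)
immature = [12, 15, 9, 14, 11]
mature = [6, 8, 5, 9, 7]

t_stat, p_value = stats.ttest_ind(immature, mature)   # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")          # p < 0.05 -> significant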
Localization of γH2AX Within the Murine Wildtype Retina
We analyzed retinal tissue from two wild type mouse lines, C3H and C57BL/6J, because the subsequently analyzed models with hereditary retinal degeneration are based on the two lines (rd1 is based on C3H, rd10 is based on C57BL/6J).
In the developing mouse retina, defined γH2AX immunoreactive foci were found numerously in the inner (INL) and outer nuclear layers (ONL) (Figures 1A-L). The occurrence of several foci per nucleus seemed quite common, especially in the C57BL/6J mice. At p14, pan nuclear staining of γH2AX was visible in some nuclei of the INL and ONL (Figures 1B,C,H,I). In the mature mouse retina at 3 and 9 months of age, γH2AX immunoreactive foci were found in all nuclear layers but appeared to be less frequent than in the developing retina (Figures 1G-T). Individual nuclei showed γH2AX immunoreactive pan nuclear staining in the INL and GCL. In the ONL, no pan nuclear staining was found in the mature mouse retina. In the OPL of C3H and adult C57BL/6J mice, γH2AX immunoreactivity was observed with the polyclonal antiserum against γH2AX (Figures 1A,B,D,E,Q). It was viewed as unspecific background staining related to the polyclonal γH2AX antiserum (see Table 1). Specimens treated with the monoclonal γH2AX antibody did not show γH2AX immunoreactivity in the OPL. Quantification of γH2AX immunoreactive foci in the ONL and INL of immature (p13/p14) and mature (p28/p30) C3H, C57BL/6J, rd1, and rd10 mouse retina revealed the highest foci numbers in the ONL (data not shown). Except for the C3H mouse retina, immature and mature retina showed no significant difference in the number of γH2AX immunoreactive foci in the INL. In the ONL, differences in the number of γH2AX immunoreactive foci were not significant at immature or mature ages. At 4 weeks of age, quantification of γH2AX immunoreactive foci was not possible due to loss of most photoreceptors in the ONL in the rd mouse lines (Figures 2D,J).
Localization of γH2AX in Degenerating Retina
In addition to the rd1 and rd10 mouse lines, we also analyzed tissue from organotypic retina culture (C57BL/6J) as an alternative model of retinal degeneration.
In the developing retina of rd1 and rd10 mice, γH2AX immunoreactive foci were found in the INL and ONL (Figures 2A-L). The occurrence of several foci per nucleus seemed quite common especially in the rd10 mice. Similar to wt retinae at p14, pan nuclear staining of γH2AX was visible in some nuclei of the INL and ONL (Figures 2A-C,G-I). At p30, when PR degeneration has progressed a lot in both mouse lines, many cells of the GCL showed pan nuclear staining of γH2AX (Figures 2D,J). In the organotypic retina culture, pan nuclear staining was even more prominent than defined γH2AX immunoreactive foci (Figures 2M-P). After 4 days of culture, pan nuclear staining was found in all nuclear layers but appeared to be most frequent in the ONL (Figure 2M). In the GCL, only individual nuclei showed pan nuclear staining of γH2AX. In the retinae of 2 week old rd1 and rd10 mice, γH2AXimmunoreactive background staining was observed in the OPL (Figures 2A,G,H).
Localization of P53 Binding Protein 1 (53bp1) Within the Murine Wildtype Retina
With the exception of the developing retina at p07, many nuclei of the INL and GCL showed a generally high level of immunoreactivity to 53bp1 (Figures 3A-C). At p14 in C57BL/6J, nuclei with strong labeling were present only in the inner retina (Figures 3G-I). In the retina of p30 mice and adult ages, the intensities of the immunostaining varied between cell types in the different retinal layers (Figures 3D-F,J-S).
In the GCL, most nuclei showed bright immunoreactivity (Figures 3M,P). Interestingly, the brightly labeled nuclei in the INL were restricted to the inner third, irrespective of age (Figures 3D,J,M). Most of these nuclei belong to amacrine cells, some may be Müller cells. Individual large nuclei close to the OPL showed bright immunoreactivity too (arrow heads in Figures 3A,D,F,J,M,O). Due to their size, location and low occurrence, i.e., big intervals between individual nuclei, staining is consistent with horizontal cells. Double immunostainings with 53bp1 and pATM antibodies clearly revealed the horizontal cell morphology (data not shown). The remaining nuclei in the outer half of the INL showed moderate to light immunoreactivity and are consistent with bipolar cells (Figures 3A,D,J,M). With ongoing maturation, occurrence of bright immunofluorescence was clearly localized to the inner two rows of INL nuclei and all GCL nuclei. In the ONL, cone nuclei showed comparably bright immunolabeling (arrows in Figures 3E,H,K,M). Double immunolabeling of 53bp1 and calbindin antibodies confirmed 53bp1 localization in amacrine and horizontal cells in all investigated mouse lines (Supplemental Figure 3). As well, colocalization of glutamine synthetase in Müller glia cells was confirmed by double immunolabeling of 53bp1 and glutamine synthetase (data not shown). Immunoreactive cone nuclei were found at all ages investigated. Due to the inverted chromatin distribution in rods, 53bp1 immunoreactivity was localized very close to the nuclear envelope forming a lightly stained thin ring around the heterochromatin (examples are marked by asterisk in Figures 3M,N,Q,R). The heterochromatin in rods was brightly stained by DAPI. Nine-month-old mouse retina revealed less intense immunoreactivity in the inner retina, i.e., INL and GCL (Figures 3Q,S).
Localization of 53bp1 in Degenerating Retina
In the developing retina of rd1 and rd10 mice, 53bp1 immunoreactivity was bright and localized in many nuclei of the INL and GCL (Figures 4A-K). At early ages p07 and p14, nuclei of the same cell types were as brightly labeled as those in the respective wildtype mice, i.e., amacrine cells, ganglion cells, horizontal cells, and cones (Figures 4A,G). With maturation and progress in degeneration of the outer retina, bright immunolabeling remained to be localized to the inner retina as in the wildtype mice (Figures 4D-F,J,K). At p28 in rd1 mice, degeneration of the ONL was quite advanced and the remaining cones showed very faint immunofluorescence ( Figure 4E). In contrast, cones were clearly present and positively labeled in the rd10 mouse retina at p30 (Figure 4K, arrows). In the organotypic retina culture of 3 month-old mice, immunoreactivity was brightly localized to many nuclei of the INL and GCL (Figures 4L-O). Up to 4 days in culture, the occurrence of bright labeling was clearly restricted to nuclei of the inner INL and all GCL nuclei ( Figure 4L). With increasing time in culture, intense immunoreactivity dispersed through the whole INL, i.e., the labeling of bipolar cell nuclei in the inner half of the INL was increased after 6 days in culture (Figures 4M-O). Furthermore, moderately stained thin rings around the heterochromatin of rod photoreceptors was observed (Figures 4M-O).
Following the characterization of the localization of the two proteins separately, we investigated whether γH2AX and 53bp1 are co-localized to the same DNA damage site. Double immunostaining on the developing retina of C3H and rd1 mice showed only very few double labeled foci in the INL and ONL (arrows in Figures 5A-H"). At the same time, numerous γH2AX immunoreactive foci were visible throughout the nuclear layers in C3H at p13 and p28, and rd1 mice at p13, resulting in colocalization of γH2AX positive foci in nuclei with pan nuclear 53bp1 staining in the INL (Figures 5B",D",F"; Supplemental Figure 1).
Double immunostaining in the developing retina of C57BL/6J and rd10 mice showed similar results compared to retinae of C3H and rd1 mice, i.e., very few double labeled foci in the INL and ONL (arrows in Figures 6A-H"). Comparison of the two mouse lines and their respective models of degeneration gave a general impression of more co-localized foci in the C3H and rd1 mouse line (Figure 5). Quantification of the few double labeled foci in the INL and ONL confirmed the impression gained through qualitative analysis of the tissue (Figure 7). The comparison between immature (p13/p14) and mature (p28/p30) mouse retina showed double labeled foci mainly in the immature age group (Figures 7A,B). In the ONL, occurrence of double labeled foci was a very rare event (<1% of γH2AX foci) and found only in the immature age group (Figure 7A). In the retina of the rd mouse lines, no co-localizing foci could be found in the ONL. In the INL of immature rd1 mice, co-localization was significantly higher than in mature rd1 mouse retina ( Figure 7B). In immature retina of the wildtype mouse lines, the C3H mouse retina showed the highest and C57BL/6J mouse retina the lowest occurrence of double labeled foci in the INL. However, in the INL, occurrence of co-localizing γH2AX and 53bp1 foci was 2-fold higher than in the ONL, but still a very rare event (< 3% of γH2AX foci).
In summary, our results showed that generally, 53bp1 is not recruited to DSB repair foci positive for phosphorylated H2AX in the ONL and INL, but only occasionally co-localizes. In the INL and GCL, many γH2AX positive foci seem to be localized to the same nucleus as 53bp1, the latter as pan nuclear staining.
Co-localization of γH2AX and 53bp1 to Induced DSBs
Potassium bromate (KBrO 3 ) is an oxidizing agent used as a food additive, which causes kidney damage as a potent nephrotoxic agent, and the mechanism is explained by the generation of oxygen free radicals that induce many DSB and thus cause genomic instability leading to apoptosis (Bao et al., 2008). Here we incubated whole wildtype mouse retina (Figures 8E,F) as well as tissue from organotypic retina culture (Figures 8G,H) in 1.5 mM KBrO 3 to initiate DSBs. Control retinae were treated with plain water only (Figures 8A-D). While the number of γH2AX foci increased in all samples treated with potassium bromate, 53bp1 immunoreactive foci were only observed in retinal explant culture, in both the KBrO 3 treated and untreated preparations as single events. Hence, double labeled foci could be found only in retinal explant culture (arrow heads in Figures 8C,D,G,H). Double labeled foci in the ONL and INL did not increase due to the potassium bromate treatment. Nuclei with several γH2AX immunoreactive foci were only found in the ONL of KBrO 3 treated tissue (arrows in Figures 8E,G). Moderate pan nuclear γH2AX staining was visible in the ONL and bright staining in some nuclei of the INL of treated retinal explant culture (Figures 8G,H). In the INL of the control retinal tissue, a population of nuclei showed moderate pan nuclear γH2AX staining (Figures 8B,B"). Pan nuclear immunostaining of 53bp1 was not altered due to KBrO 3 treatment in the nuclear layers.
In summary, KBrO 3 treatment only increased the number of γH2AX foci but had no effect on the number of immunoreactive foci or the pan nuclear staining of 53bp1.
Intranuclear Localization of 53bp1
We used lamin B 2 antiserum to label the lamina of the inner membrane of the nuclear envelope. The nuclear lamina consists of a layer of four distinct lamin proteins which are in close apposition to the nucleoplasmic surface of the inner nuclear membrane (Aebi et al., 1986).
Double immunostaining of lamin B 2 and 53bp1 allowed us to localize 53bp1 immunoreactivity within the nuclei. Chromatin counterstaining by DAPI made the localization of 53bp1 clearly distinguishable from the brightly labeled heterochromatin of the chromocenters. This is particularly important for mouse rod photoreceptors, since their chromatin structure is inverted, i.e., condensed in the nuclear center (Figures 9D,I,O,V).
In all investigated mouse retinal tissues, the nuclear lamina was clearly labeled in all nuclei throughout the retina, yet with different immunofluorescent intensities (Figures 9A-X). In general, there was an increase in the labeling intensity starting from low levels in rods and cones, through medium intensity in the INL and highest labeling in the GCL (Figures 9A,B,F,K,R-U). Apart from the nucleoli, 53bp1 was dispersed throughout the entire nucleus and surrounded by lamin B 2 positive nuclear lamina (Figures 9C,H,M,N). Heterochromatin in nuclei of the INL and GCL is counterstained with DAPI (Figures 9E,J,P,Q,W,X). In all retinal layers, 53bp1 immunoreactivity was clearly located close to the nuclear lamina (Figures 9G,H,L-N). Interestingly, lamin B 2 immunoreactivity was brighter around most 53bp1 immunoreactive nuclei compared to 53bp1 immunonegative nuclei, especially obvious in cones (Figures 9G,I,L,O). Irrespective of the age of the mouse retina, the same quality of lamin B 2 and 53bp1 immunofluorescence was observed in the different retinal layers. In cone nuclei, 53bp1 immunoreactivity filled the space between the chromocenters and the nuclear periphery (Supplemental Figure 2A), omitting the central chromocenters (Supplemental Figures 2B,D).
DISCUSSION
In this study, we analyzed the distribution of γH2AX and 53bp1 proteins in all neurons of young, mature and degenerating retinae. Furthermore, we showed that the two proteins only occasionally co-localize and that in the majority of cases, DNA damage sensing does not seem to result in 53bp1 recruitment to repair foci, irrespective of the viability state of the retina.
DNA damage sensing happens through the appearance of H2AX at the DSB site and its subsequent phosphorylation, yielding γH2AX (Thiriet and Hayes, 2005; Yuan et al., 2010). Antibodies to γH2AX allow the visualization of a "focus" at the DSB site (Rogakou et al., 1999). These foci serve as sites for accumulation of other proteins involved in DSB repair, leading to the suggestion that the foci have roles in signal amplification and the accumulation of DNA repair factors that, in turn, facilitate chromatin remodeling, cell cycle checkpoint functioning, sister chromatid-dependent recombinational repair and chromatin anchoring to prevent the dissociation of broken ends (Redon et al., 2011). We observed intranuclear γH2AX immunoreactive foci commonly in the inner and outer nuclear layers of developing retina and less frequently in the mature retina. Especially in developing C57BL/6J and rd10 mice, the occurrence of several γH2AX foci per nucleus seemed quite common in the ONL and INL. The fact that γH2AX positive foci are still present in the adult retina is consistent with reports that the level of reactive oxygen species (ROS), a major cause for DSBs in non-replicating cells, is particularly high in retinas of nocturnal animals (Jarrett and Boulton, 2012; Frohns et al., 2014). In adult mouse retina, we were able to induce DSBs by potassium bromate (KBrO 3 ). This led to increased numbers of γH2AX immunoreactive foci per nucleus in some photoreceptors. Hence, we were able to show DNA damage detection by immunohistochemistry for γH2AX in all investigated retinal tissues, inclusive of developing, degenerating and mature retinas.
FIGURE 7 | Quantification of immunoreactive foci co-localizing γH2AX and 53bp1 in the ONL and INL of immature (p13/p14) and mature (p28/p30) rd1, rd10, C3H, and C57BL/6J mouse retina. Foci co-localizing 53bp1 and γH2AX immunoreactivity were given as percentage of all γH2AX immunoreactive foci in the respective area. (A) In the ONL, occurrence of co-localizing γH2AX and 53bp1 foci was a very rare event (<1% of γH2AX foci) and was only found in the immature age group. At 4 weeks of age, only C57BL/6J mouse retina showed individual co-localizing foci. Due to degeneration of photoreceptors in the rd mouse lines, no co-localizing foci could be found in the ONL. (B) In the INL, occurrence of co-localizing γH2AX and 53bp1 foci was 2-fold higher than in the ONL, but still a very rare event (<3% of γH2AX foci). It was significantly higher in the immature age group of rd1 mice. Immature C3H mouse retina showed the highest and C57BL/6J mouse retina the lowest occurrence of γH2AX and 53bp1 foci co-localization in the INL. *p < 0.05.
In addition to discrete foci, we also observed γH2AX immunoreactivity as a pan nuclear staining. This pattern was found predominantly in individual nuclei of the INL and GCL in both the developing and mature retina. This pan nuclear staining was more frequently seen in degenerating retina, including the retina of rd1 and rd10 mice and in retinal explant culture. During the first two post-natal weeks, neuronal apoptosis is a common incident in the mouse retina (Young, 1984). As well, organotypic retina culture simulates the pathological condition of retinal detachment and photoreceptor cell death becomes more prominent during culture (Ferrer-Martín et al., 2014;Müller et al., 2017). The phenomena of retinal neuronal apoptosis and the previous finding of pan nuclear γH2AX staining associated with preapoptotic single kinase activity (de Feraudy et al., 2010) supports the finding that the γH2AX pan nuclear staining seen in these tissues is indicative of preapoptotic retinal neurons.
The DNA repair and mediator protein 53bp1 (Chapman et al., 2012), is recruited to DSB after enhancement of H2A and H2AX ubiquitinylation by RNF168 (ring finger protein 168) (Brandsma and Gent, 2012). Together with RAP80 (receptor-associated protein 80) (Stewart et al., 2009) it is involved in deciding the fate of the proceeding repair pathway by binding factors which are part of the NHEJ pathway (Ward et al., 2003;Ginjala et al., 2011). The mammalian protein 53bp1 is activated in many cell types in response to genotoxic stress, including DSB formation (Anderson et al., 2001;Rappold et al., 2001;Manis et al., 2004;Lukas et al., 2011). Previously, various studies reported function of 53bp1 as a tumor suppressor gene in breast cancer (Kong et al., 2015). In breast precancerous lesions and cancer tissue 53bp1 immunohistochemical staining was mainly localized in the nuclei of cells (Li et al., 2012;Kong et al., 2015). Also in colorectal cancer tissue nucleus staining of 53bp1 was considered positive (Bi et al., 2015). We consider nucleus staining as pan nuclear staining.
In all retinal tissue investigated in this study, we found 53bp1 immunoreactivity prominently in the nuclei of the inner half of the INL and in most if not all nuclei of the GCL. Apart from the nucleoli, 53bp1 was dispersed throughout the nucleus resulting in a pan nuclear pattern. This distribution of 53bp1 is perfectly in line with the description of 53bp1 immunoreactivity in the INL and GCL of un-irradiated adult mouse retina (Frohns et al., 2014). Frohns and colleagues investigated the presence of 53bp1 inside the INL after irradiation in a cell-type specific manner using immunofluorescent co-localization of cell type specific proteins and 53bp1. Thereby, amacrine cells, horizontal cells and Müller cells were shown to display 53bp1 pan nuclear staining. In the present study, we confirmed that the most brightly stained 53bp1 nuclei belong to amacrine cells and horizontal cells based on the location within the INL. The two innermost rows of retinal neurons in the INL consist of amacrine cells (Haverkamp and Wässle, 2000), which are laterally connected with bipolar cells and ganglion cells in the IPL (Kolb et al., 2007). The remaining nuclei in the outer half of the INL showed moderate to light 53bp1 immunoreactivity and appear more than likely to be nuclei of various types of bipolar cells (Haverkamp and Wässle, 2000), which are vertically oriented in the retina and connect the two plexiform layers, transmitting the visual information toward the ganglion cells (Kolb et al., 2007). Here we confirmed the occurrence of 53bp1 in horizontal cells and subpopulations of amacrine cells by double immunostaining of 53bp1 and calbindin in degenerating and wildtype mouse retina at 4 weeks of age following a piggy-back immune protocol (Haverkamp et al., 2003). Somata of the bipolar oriented Müller glia cells reside in the middle of the INL, between the distal bipolar cell somata and the proximal amacrine cell somata (Haverkamp and Wässle, 2000). Frohns et al. (2014) did not find 53bp1 immunoreactivity in bipolar cells or in cells of the ONL, which is in contrast to the results presented here. At all ages investigated, we found 53bp1 immunoreactivity in cones distributed between the nuclear lamina and the two large central chromocenters. In the mouse retina, only 3% of all photoreceptors are cones (Jeon et al., 1998) and can be identified by their location close to the outer limiting membrane and their conventional heterochromatin organization, i.e., more than one chromocenter per nucleus (Solovei et al., 2009). Additionally, in rod photoreceptors, we found faint 53bp1 immunofluorescence, visible as a thin ring very close to the nuclear envelope, the location of the euchromatin in the inverted nucleus of rods (Solovei et al., 2009), in which the central part of the nucleus is taken up by one large chromocenter, the heterochromatin.
FIGURE 9 | Localization of 53bp1 immunoreactivity with regard to the nuclear membrane. Double immunostaining with anti 53bp1 (green) and lamin B 2 antibodies (red) in vertical frozen retinal sections of developing retina (p14 and p30) and of 3-month old C57BL/6J mice after 0 and 2 days in culture. DAPI counterstaining (blue).
The differences concerning the detection of 53bp1 in photoreceptors and bipolar cells in our study and that of Frohns et al. (2014) might have resulted from the different treatment of retinal tissue. In our study, we used frozen sections of lightly fixed retinae (30-45 min fixation in 4% paraformaldehyde). Frohns et al. (2014) applied much longer fixation time (16 h) and retinal tissue was embedded in paraffin, both of which can result in a diminished antigen presentation in the retinal tissue (Osborn and Brandfass, 2004;Eberhart et al., 2012).
In addition to the pan nuclear staining, the 53bp1 antibody revealed some immunoreactive foci which we believe correspond with DSB repair foci within the heterochromatin (Rappold et al., 2001;Manis et al., 2004;Lukas et al., 2011;Frohns et al., 2014). Since only very few double labeled foci appeared in the INL and ONL, we conclude that 53bp1 is not regularly recruited to a DSB after phosphorylation of γH2AX in the ONL and INL.
Overall, we can only partly confirm in our study the observation of Frohns et al. (2014) in the irradiated mouse retina. They showed that in the INL, numerous double labeled repair foci were present in amacrine cells and horizontal cells, which are those cells that showed bright pan nuclear 53bp1 staining before irradiation in their study. Interestingly, after irradiation, pan nuclear staining of 53bp1 was completely gone (Frohns et al., 2014). In contrast, in the present study many γH2AX positive foci were found in nuclei with pan nuclear 53bp1 staining in the INL and GCL of all investigated retinae. It remains enigmatic why within the INL different cell types show different staining behavior concerning 53bp1 but at the same time demonstrated the same repair capacity in the study by Frohns et al. The fact that presence or absence of 53bp1 does not affect the DNA repair efficiency after irradiation in the INL leads to the conclusion that either a different DNA repair mechanism is active in murine post-mitotic retinal tissue or that the repair of DNA damage induced by ionizing radiation is independent of the presence of 53bp1 (Ward et al., 2004;Bunting et al., 2010). Taken together, our results in the mouse retina concerning co-localization of 53bp1 and γH2AX clearly differ from irradiation experiments with respect to the occurrence of 53bp1 repair foci (Rappold et al., 2001;Frohns et al., 2014).
In our study, potassium bromate treatment of adult mouse retina did not increase the number of γH2AX and 53bp1 double labeled foci in either ONL or INL. This finding also supports an alternative in the post-mitotic mouse retina to the DSB repair via the conventional NHEJ pathway. This is a surprising and noteworthy finding, since ordinarily DSBs are repaired by one of two main pathways: either homologous recombination (HR) or NHEJ (Chapman et al., 2012;McKinnon, 2013).
In summary, we observed different intensities of the 53bp1 immunostaining in specific cell types in the different retinal layers in mouse retinae at all ages from wildtype and retinal degenerating mouse lines, as well as in organotypic retina culture. With little variation, 53bp1 was characterized by pan nuclear staining in amacrine and horizontal cells, the laterally connecting neurons in the retina. Cones showed 53bp1 immunoreactivity distributed between the nuclear lamina and the two large central chromocenters, which we viewed as pan nuclear staining. In the ONL and INL of the developing retina and in retinal explant culture, γH2AX positive DSBs were found in greater numbers than in the adult retina. Most interestingly, we could show that the two proteins do not co-localize regularly in repair foci and that in the majority of cases, DNA damage sensing does not seem to result in 53bp1 recruitment, irrespective of the viability state of the retina.
In conclusion, our data indicate that DNA double strand breaks are sensed by phosphorylation of H2AX in all neurons of the retina, but this does not necessarily lead to the recruitment of 53bp1 to repair foci, indicating the presence of alternative sensing and repair proteins. This observation warrants further investigation into the DNA repair pathway state in post-mitotic neurons of the retina and the central nervous system in general.
AUTHOR CONTRIBUTIONS
BM and KS contributed to the conception and design of the study. BM performed the acquisition, analysis, editing, and interpretation of data for the study. BM drafted the manuscript. KS and NE revised it critically for important intellectual content. BL read the manuscript critically. BM agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. | v2
2021-09-27T20:57:19.395Z | 2021-08-02T00:00:00.000Z | 238847289 | s2orc/train | Purification and Characterization of Antibodies Directed against the α -Gal Epitope
The α-Gal epitope is an immunogenic trisaccharide structure consisting of N-acetylglucosamine (GlcNAc), β1,4-galactose (Gal), and α1,3-Gal. It is presented as part of complex-type glycans on glycoproteins or glycolipids on the cell surfaces of non-primate mammals. About 1% of all antibodies in human sera are specific toward α1,3-Gal and are therefore named anti-α-Gal antibodies. This work comprises the purification and characterization of anti-α-Gal antibodies from human immunoglobulin G (IgG). A synthetically manufactured α-Gal epitope affinity resin was used to enrich anti-α-Gal antibodies. Selectivity experiments with the purified antibodies were carried out using enzyme-linked immunosorbent assays (ELISA), Western blotting, and erythrocyte agglutination. Furthermore, binding affinities toward α-Gal were determined by surface plasmon resonance (SPR), and the IgG subclass distribution of anti-α-Gal antibodies (83% IgG2, 14% IgG1, 2% IgG3, 1% IgG4) was calculated applying ELISA and immunodiffusion. A range of isoelectric points from pH 6 to pH 8 was observed in 2D gel electrophoresis. Glycan profiling of anti-α-Gal antibodies revealed complex biantennary structures with high fucosylation grades (86%). Additionally, low amounts of bisecting GlcNAc (15%) and sialic acids (13%) were detected. The purification of anti-α-Gal antibodies from human IgG was successful, and their use as detection antibodies for α-Gal-containing structures was evaluated.
The primate immune system is continuously stimulated by antigenic carbohydrate structures on cell surfaces of gastrointestinal bacteria [17,18]. Consequently, primate organisms produce anti-α-Gal antibodies as polyclonal antibodies. Galili et al. conducted pioneer research in terms of purification and selectivity analysis of anti-α-Gal antibodies. They discovered anti-α-Gal producing B-cells in lymphoid tissue along the gastrointestinal tract [19]. The presence of anti-α-Gal antibodies leads to severe immune reactions up to anaphylactic shock when exposed to α1,3-Gal. However, the unconjugated glycan epitope has no effect on the immune system. It is considered a hapten due to its small size and can only develop its immunogenic effect in combination with bigger antigens. Synthetic α-Gal
Purification of Anti-α-Gal Antibodies
The purification of anti-α-Gal antibodies from Octagam ® IgG concentrate was performed using an ÄktaPure 25 Chromatography System (GE Healthcare, Freiburg, Germany). A glass column (Tricorn 5/50, 5 mm column diameter, volume 1.16 mL, GE Healthcare, Freiburg, Germany) was manually packed with 1 mL of α-Gal affinity resin (GlycoNZ, Auckland, Australia) and equilibrated with PBS (0.5 mL/min). The absorption signal at 214 nm was monitored, and an Octagam ® sample (100 mg/mL) was applied. The affinity matrix was loaded with three samples of human IgG concentrate, each with a volume of 5 mL. The chromatography was carried out at room temperature with a constant flow rate of 0.5 mL/min. The third sample injection was followed by an additional washing step with 5 mL of PBS. The elution of retained antibodies was performed with 500 mM galactose in PBS at 0.5 mL/min. A further wash step with 100 mM glycine pH 2.0 followed at the same flow rate. The Gal-eluted antibody fraction was collected separately and stored at 4 • C. The buffer of samples was exchanged to PBS pH 7.4 and concentrated by Amicon ® Ultra centrifugal filters (cutoff: 30 kDa, Merck, Darmstadt, Germany). The Protein concentration was determined by the Pierce bicinchoninic acid (BCA) protein assay kit (Thermo Fisher Scientific, Schwerte, Germany) using IgG as a standard, and adjusted to 1 mg/mL.
Conjugation of Anti-α-Gal Antibodies with Horseradish Peroxidase
The conjugation of purified anti-α-Gal antibodies with horseradish peroxidase (HRP) was accomplished using the HRP conjugation kit of Abcam (Cambridge, United Kingdom). The conjugation reaction was carried out according to the manufacturer's instructions. In brief, modifier reagent was mixed with the anti-α-Gal antibody solution (1 mg/mL) in a ratio of 1:10. This solution was added to HRP and incubated for 3 h in the dark at room temperature. A quenching reagent was added to the modifier-antibody solution in a ratio of 1:10. The mixture was again incubated for 30 min in the dark, without further removal of excess HRP.
ELISA Binding Assays
Biotinylated glycan epitopes were immobilized on a streptavidin precoated, flatbottom, 96-well plate (Thermo Fisher Scientific, Schwerte, Germany) by incubation of 1 µg/mL glycan in PBS overnight at 4 • C while shaking (300 rpm). Unspecific binding was blocked with Roti-Block TM blocking buffer (Carl Roth GmbH, Karlsruhe) for 1 h. The well plate was washed between each incubation step 3 times with PBS/0.1% (v/v) Tween 20. Purified anti-α-Gal antibodies were applied with the indicated concentrations. Samples were incubated for 1 h at 37 • C on the plate by shaking (300 rpm), detected by HRP-coupled goat anti-human IgG Fc (A0170, Sigma-Aldrich, Darmstadt, Germany), and diluted 1:10,000 in Roti-Block TM blocking buffer (Carl Roth GmbH, Karlsruhe). The plate was incubated with tetramethylbenzidine (Carl Roth GmbH, Germany) for 5 min. The absorption at 450 nm was measured (Multiskan Go, Thermo Scientific, Germany | SkanIt Software, version 3.2) after terminating the reaction with 1 M hydrochloric acid. For the representation of glycan structures, the symbol nomenclature of the Consortium for Functional Glycomics was used: green circle, mannose; yellow circle, galactose; blue square, GlcNAc; yellow square, N-acetylgalactosamine (GalNAc); red triangle, fucose; purple diamond, Neu5Ac [41].
Erythrocyte Agglutination Assay
A round-bottom well plate was blocked with 1% BSA (w/v) in deionized water for 2 h at room temperature. Aliquots of 50 µL of samples with different anti-α-Gal antibody concentrations (100 µg/mL, 50 µg/mL, 25 µg/mL, 12.5 µg/mL, 6.25 µg/mL, 3.12 µg/mL, 1.56 µg/mL, 0.78 µg/mL) in physiological sodium chloride solution were added, together with 50 µL of 0.5% (v/v) rabbit red blood cells (Dunn Labortechnik GmbH, Asbach, Germany) or human red blood cells (Deutsches Rotes Kreuz, Berlin, Germany) prepared by Ficoll (GE Healthcare, Braunschweig, Germany) density gradient centrifugation. Dilutions of erythrocyte suspensions were both made in physiological sodium chloride solution. Human erythrocytes were additionally incubated with blood group A antigen antibody (Thermo Fisher Scientific, Schwerte, Germany) in concentrations equal to anti-α-Gal antibodies in physiological sodium chloride solution. The well plate was incubated overnight at room temperature without shaking and was covered with a lid. Agglutination results were documented with the G-Box imaging system (Syngene, Cambridge, UK) and visually inspected.
Surface Plasmon Resonance (SPR)
SPR experiments were performed on a Biacore T200 (GE Healthcare, Freiburg, Germany). Biotinylated glycan epitopes were diluted in 10 mM hydroxyethyl-piperazineethane sulfonic acid (HEPES)/150 mM NaCl/0.02% (v/v) Tween 20, pH 7.5, and covalently coupled with streptavidin precoated SPR chips (Sensor Chip SA, BR100032) with immobilization levels of 200 response units (RU). Biotin was immobilized up to a response of 200 RU on one flow chamber as a reference. Purified anti-α-Gal antibodies were applied in different concentrations (157 nM, 52 nM, 17 nM, 6 nM, 2 nM, 0.67 nM) in 10 mM sodium acetate pH 4.5. Equal concentrations of human anti-factor VIII antibody (Coachrom Diagnostica, Maria Enzersdorf, Austria, MAB-HF8) were used as a negative control. A multicycle kinetic analysis was performed at 20 • C in 10 mM HEPES/150 mM NaCl/0.02% (v/v) Tween 20, pH 7.5, at a flow rate of 10 µL/min. The association and dissociation phases were monitored for 360 s, and the flow chamber surfaces were regenerated with two subsequent injections of 10 mM glycine, pH 2.0, for 10 s at 10 µL/min. The resulting binding data were fitted to a Langmuir 1:1 binding model by global fit analysis, which allowed the calculation of the dissociation constant K D . First-order kinetics were assumed. The experiments were evaluated with the Biacore T200 evaluation software (version 3.1).
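For reference, the 1:1 Langmuir binding model used for the global fit corresponds to the standard rate equation

dR/dt = k_a · C · (R_max − R) − k_d · R,

where R is the response, C the analyte concentration, R_max the maximal binding capacity of the surface, and k_a and k_d the association and dissociation rate constants; the dissociation constant is K_D = k_d / k_a, and at equilibrium the response approaches R_eq = C · R_max / (C + K_D). This is the generic textbook form of the model, not output taken from the evaluation software.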
After enzymatic pretreatment, the samples were adjusted to 0.72% Tris-HCl pH 6.8 (w/v), 2.5% sodium dodecyl sulfate (w/v), 10% glycerin (v/v), 10 mM dithiothreitol, 0.05% bromophenol blue (w/v) and incubated for 5 min at 95 • C. The dual color protein standard marker (Bio-Rad, Germany) was used. Running conditions were set to 100 V for 10 min, followed by 150 V for 50 min. After electrophoresis was completed, the gel was blotted to a nitrocellulose membrane, as described by Towbin et al. [42]. A constant current of 250 mA was set for 1 h. Subsequently, the membrane was blocked overnight with 10% (v/v) RotiBlock TM blocking buffer (Carl Roth GmbH, Karlsruhe, Germany). The membrane was incubated with HRP-conjugated anti-α-Gal antibodies (1:4000) for 1 h at room temperature with gentle shaking and washed 3 times with 10 mM Tris-HCl/0.1% Tween 20 (v/v). The signal was detected via SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific, Schwerte, Germany) and documented by the G-Box imaging system (Syngene, Cambridge, UK).
Determination of IgG Subclasses
IgG subclasses of purified anti-α-Gal antibodies were determined with the human IgG subclass profile kit (Thermo Fisher Scientific, Schwerte, Germany). The assay was performed according to the manufacturers' instructions. In brief, monoclonal antibodies, specific for one of the IgG subclasses 1, 2, 3, and 4 were preincubated with 2 µg/mL of anti-Gal antibody in dilution buffer for 5 min. A flat-bottom 96-well plate, precoated with anti-IgG antibodies, was loaded with the pre-incubated monoclonal antibodies. After 1 h incubation at room temperature by shaking at 300 rpm, followed by 3 washing steps, bound antibodies were detected by HRP-coupled anti-human IgG antibody (1:1000 in dilution buffer). Concentrations of IgG subclasses were calculated using four-parameter logistic regression (software: Excel, Microsoft Office, Version: 2010). IgG subclasses were additionally determined by radial immunodiffusion plates, which contained anti-IgG subclass antibodies (The Binding Site, Schwetzingen, Germany). Different concentrations of calibrators (IgG1: 140, 350, 840, 1400 µg/mL; IgG2: 80, 200, 480, 800 µg/mL; IgG3: 120, 300, 720, 1200 µg/mL; IgG4: 50, 125, 300, 500 µg/mL) were applied to the plates and the relative abundances of IgG subclasses were quantified relatively by measuring the diameter of visible precipitation rings as previously described by Dunn et al. [43].
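As an illustration of the four-parameter logistic (4PL) regression used for the subclass quantification, a minimal fitting sketch is shown below; the calibration points, start values, and the measured OD are placeholders and do not correspond to the actual assay data.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50-like), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# placeholder calibration data: concentration (ug/mL) vs. absorbance at 450 nm
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
od = np.array([0.05, 0.12, 0.35, 0.80, 1.40, 1.75])

popt, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 3.0, 2.0])
a, b, c, d = popt

def back_calculate(y):
    """Invert the fitted 4PL curve to estimate concentration from a measured OD."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(back_calculate(0.5))   # concentration estimate for an OD of 0.5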
Two-Dimensional (2D) Gel Electrophoresis
Purified anti-α-Gal antibodies were diluted to a concentration of 300 µg/mL with rehydration buffer and applied to lanes of a reswell tray. Immobilized pH gradient (IPG) gel strips (Immobiline TM DryStrip pH 6-11; GE Healthcare, Freiburg, Germany) were set on the samples in the lanes of the reswell tray and incubated overnight. The strips were removed from the box and placed on a DryStrip aligner. Both ends of the used strips were covered with moist paper. The loaded aligner was inserted into a focusing chamber (Multiphor II, GE Healthcare, Freiburg, Germany) and coated with mineral oil. Electrodes were put on, and the focusing process was started at a constant temperature of 20 • C and constant current. Focused gel strips were incubated with electrophoresis equilibration buffer (6 M urea, 30% glycerin (v/v), 3% sodium dodecyl sulfate (w/v), 0.05 M Tris-HCl, 5 mM dithiothreitol) for 15 min. The solution was removed and replaced by equilibration buffer with 5 mM iodoacetamide and incubated for a further 15 min. Strips were stored in running buffer (250 mM Tris, 1.9 M glycine, 35 mM SDS) before the second dimension was developed. The strips were placed on a tris-glycine gradient gel (8-16%, Anamed, Groß-Bieberau, Germany) and coated with warm 1% agarose (w/v). The dual color protein standard marker (Bio-Rad, Germany) was additionally applied. Running conditions were set to 100 V for 15 min, followed by 150 V for 75 min. After complete electrophoresis, the gel was stained with silver [44] and documented by the G-Box imaging system (Syngene, Cambridge, UK).
N-Glycan Analysis
Purified anti-α-Gal antibodies (15 µg) were incubated in 1% (v/v) Rapigest solution (Waters GmbH, Eschborn, Germany) with 10% (v/v) tris(2-carboxyethyl)phosphine (Sigma Aldrich GmbH, Darmstadt, Germany) in deionized water and incubated for 5 min at 95 • C with shaking at 300 rpm. After cooling to room temperature, Rapid PNGase F (Waters GmbH, Eschborn Germany) was added to a total volume of 30 µL. The mixture was incubated for 30 min at 50 • C. A solution containing 12 µL of 9 mg of Rapifluor-MS dissolved in dimethylformamide was added and incubated for 5 min in the dark. The solution was diluted with acetonitrile to a final volume of 370 µL. The cleaning of labeled N-glycans was performed with the GlycoWorks HILIC µElution Plate (Waters GmbH, Eschborn, Germany). The cleaning procedure was performed according to the instructions delivered by the manufacturer. N-glycans were lyophilized in a vacuum centrifuge and resuspended in 10 µL of a solution containing 94% (v/v) acetonitrile, 3% (v/v) dimethylformamide, and 3% (v/v) water. Aliquots of 4 µL of N-glycan solution were analyzed by liquid chromatography coupled to mass spectrometry (LC-MS, Xevo ® G2-XS QTof with Acquity UPLC ® H-Class, Waters GmbH, Eschborn, Germany) using an electrospray ionization source (High-Performance Zspray TM -Multi mode source) in positive mode. LC separation was performed on a Waters Acquity UPLC Glycan BEH Amide column (130 Å, 1.7 µm, 2.1 × 150 mm), the temperature was kept constant at 60 • C, and a 35 min gradient of 25% A (50 mM ammonium formate pH 4.4)/75% B (acetonitrile) to 46% A/54% B was run. The mass spectrometer's instrument settings were adjusted for maximum sensitivity and detection selectivity (2750 V capillary voltage, 80 V cone voltage, 120 • C source temperature, 500 • C desolvation temperature, 50 L/h cone gas flow, 800 L/h desolvation gas flow). Calibration was performed with Glu-Fibrinopeptide B, and a mass range between 750 Da and 2500 Da was recorded. The assignment of glycan structures was performed according to the respective retention times of the LC elution profile (GU units of glycan standards) and mass-to-charge ratios. For the representation of glycan structures, the symbol nomenclature of the Consortium for Functional Glycomics was used: green circle, mannose; yellow circle, galactose; blue square, GlcNAc; yellow square, N-acetylgalactosamine (GalNAc); red triangle, fucose; purple diamond, Neu5Ac [41].
A threefold application of IgG was necessary to detect significant signals during elution. Collected fractions of several runs were pooled and the buffer was exchanged to PBS. The antibody concentration was adjusted to 1 mg/mL for further analysis. In earlier studies, anti-α-Gal antibodies were purified from human plasma by affinity chromatography using melibiose as affinity ligand [10,11,13]. Other affinity ligands such as α-Gal conjugated beads or bovine thyroglobulin were used [45,46].
The elution and wash fraction each revealed a relative protein amount of about 0.15% of the totally applied IgG. This is less than the content of about 1% anti-α-Gal antibodies in human serum IgG reported in the literature [47,48]. The lower amount of anti-α-Gal antibodies recovered may indicate only partial binding of anti-α-Gal antibodies to the affinity column or an incomplete elution. Additionally, the high specificity of the affinity matrix may have led to a lower yield of anti-α-Gal antibodies because weakly binding antibodies did not have sufficient binding affinity and eluted in the wash fraction. Furthermore, shape heterogeneity of the elution peak was observed, emphasizing the existence of different subclasses of anti-Gal antibodies.
The composition of purified anti-α-Gal antibodies was evaluated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) under reducing conditions (Figure 2). All samples showed bands at 50 kDa (heavy chain of IgG) and 25 kDa (light chain of IgG) and a minor signal at 150 kDa (nonreduced IgG). No major non-IgG impurities were detected.
Subclass Determination
The IgG subclass profile of anti-α-Gal antibodies was determined by ELISA utilizing four subclass-specific monoclonal antibodies. IgG2 was found to be the most abundant immunoglobulin isotype (83%), followed by IgG1 (14%) and minor amounts of IgG3 (2%) and IgG4 (1%). In an orthogonal approach, radial immunodiffusion (RID) was used for the determination of the IgG subclass distribution. The anti-α-Gal sample and IgG subclass calibrators were applied to gel plates containing subclass-specific antibodies. Equal concentrations of the antibodies formed radial precipitation lines around the sample spots. Relative abundances of anti-α-Gal IgG subclasses were calculated by comparing the diameters of samples and calibrators (Table 1). In this analysis, IgG2 was the most abundant isotype (76%), followed by IgG1 (24%), whereas IgG3 and IgG4 were not detected. The determined IgG subclass profile confirmed reports from the literature on the average amount of the anti-α-Gal IgG subclasses (~10% for IgG1 and 80-90% for IgG2) in healthy individuals [49]. In contrast, the general subclass distribution of human IgG is about 60% for IgG1, 32% for IgG2, and 4% each for IgG3 and IgG4 [50], revealing a preference of anti-α-Gal antibodies for IgG2. Anti-α-Gal IgG2 is produced due to the natural stimulation by α-Gal-bearing bacteria in the intestinal flora [51]. This immune reaction is mediated by the CD1d receptor presenting lipid-linked carbohydrate antigens on APCs, which can be recognized by invariant natural killer T (iNKT) cells [52,53].
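To illustrate the diameter-comparison step, the short sketch below estimates relative subclass abundances from RID ring diameters. It assumes the Mancini endpoint relation (antigen concentration roughly proportional to the squared ring diameter) and a single-point calibrator per subclass; all diameters and calibrator concentrations are hypothetical and are not the values measured in this study.

```python
# Hypothetical sketch: relative IgG subclass abundances from RID ring diameters,
# assuming the Mancini endpoint relation c ∝ d^2 (illustrative values only).
calibrators = {  # subclass: (ring diameter in mm, concentration in mg/mL)
    "IgG1": (6.0, 1.0),
    "IgG2": (6.2, 1.0),
    "IgG3": (5.8, 1.0),
    "IgG4": (5.9, 1.0),
}
sample_diameters = {"IgG1": 3.5, "IgG2": 6.1, "IgG3": 0.0, "IgG4": 0.0}  # mm

concentrations = {}
for subclass, d_sample in sample_diameters.items():
    d_cal, c_cal = calibrators[subclass]
    # Concentration scales with the squared ring diameter relative to the calibrator.
    concentrations[subclass] = c_cal * (d_sample / d_cal) ** 2 if d_sample else 0.0

total = sum(concentrations.values())
relative = {k: round(100.0 * v / total, 1) for k, v in concentrations.items()}
print(relative)  # yields an IgG2-dominant profile, qualitatively like Table 1
```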
Anti-α-Gal IgG1 is produced after the occurrence of a tick bite and a subsequent exposure to α-Gal [54]. However, the anti-α-Gal IgG subclass distribution determined here was in accordance with the level of anti-α-Gal IgG subclasses of healthy people who did not suffer from tick bites [49]. The IgG subclass determination by RID was similar to the ELISA results. Nevertheless, in terms of quantitation, RID plates are reported to deliver partly inaccurate results, which may be the reason for minor differences between both assays [55]. The heterogeneity of the anti-α-Gal antibody sample was additionally shown by 2D electrophoresis (Figure 3).
Silver-stained spots at 50 kDa and 25 kDa correspond to the heavy and the light chain of the reduced anti-α-Gal antibody. The spots covered a range of isoelectric points (IEPs) from pH 6 to pH 8 of anti-α-Gal antibodies, at least in part due to the different subclasses or different subclass-specific glycosylation. The determined range of IEPs reflects the general range of IEPs for human IgG antibodies reported in the literature [56,57].
Galili et al. performed isoelectric focusing experiments and determined an IEP-range between pH 4 and pH 8.5 [1]. Our experiments revealed a more restricted range between pH 6 and pH 8, which underlined the possibility that anti-α-Gal subpopulations with different net charges exist.
Figure 3. Two-dimensional gel electrophoresis analysis of purified anti-α-Gal antibodies. Proteins were separated by isoelectric focusing, followed by SDS-PAGE (8-16% gradient gel) and silver staining.
ELISA Assay
In order to evaluate the selectivity of purified anti-α-Gal antibodies, commercial biotinylated glycan epitopes were coated on a streptavidin-precoated well plate, and the interaction between glycoconjugate and antibody was analyzed (Figure 4). Glycan epitopes without terminal α1,3-Gal did not produce significant signals. Anti-α-Gal antibodies bound to the trisaccharide GlcNAcβ1,4-Galα1,3-Gal and, with a lower affinity, to the disaccharide Galα1,3-Gal. This result is in agreement with a study [58] in which it was observed that the GlcNAc residue downstream of the Galα1,3-Gal disaccharide reinforces the binding via additional hydrogen bonds. Furthermore, binding to the structurally similar blood group B epitope was observed.
The purified anti-α-Gal antibodies are thus shown to be specific, because epitopes without terminal α1,3-linked Gal residues were not recognized. A variation of the linkage escapes antibody binding, indicating a very narrow binding pocket. The weak binding to the blood group B epitope further demonstrates that anti-blood group B antibodies are indeed a subpopulation of anti-α-Gal antibodies, as already emphasized in the literature [59]. Individuals with blood type B produce lower titers of anti-α-Gal antibodies because of their self-tolerance to blood group B. Consequently, blood type B individuals have a higher susceptibility to α-Gal-bearing pathogens such as malaria [60]. Furthermore, blood group B individuals are less affected by red meat allergy [61]. To our current knowledge, it cannot be excluded that the occurrence of blood group B has an impact on the tolerance of cetuximab, but it cannot be confirmed either.
Erythrocyte Agglutination
An erythrocyte agglutination assay was performed to provide further evidence of the selectivity of Gal-eluted anti-α-Gal antibodies ( Figure 5). Different concentrations of anti-α-Gal antibodies were applied to a round-bottom well plate. Human and rabbit erythrocytes were added to the wells. Erythrocytes without antibodies served as a negative control. Cross-linked erythrocytes were visible as a fading surface in the round-bottom well when human blood group A erythrocytes were incubated with anti-human blood group A antibodies ( Figure 5A). Unconnected cells slid down the round-bottom well, forming a dot ( Figure 5, negative controls without the addition of antibodies). Anti-α-Gal antibodies bound to rabbit red blood cells in a concentration-dependent manner ( Figure 5C), whereas human erythrocytes were not bound by anti-α-Gal antibodies at all ( Figure 5B). These data confirm the literature, which emphasized that the α-Gal epitope is only present on rabbit erythrocytes [62,63] but not on human erythrocytes. Furthermore, the data demonstrate that purified anti-α-Gal antibodies did not show unspecific binding to the cell surface of human erythrocytes.
In former studies, the selectivity of purified anti-α-Gal antibodies was shown by the agglutination of rabbit red blood cells or by ELISA [1,64]. The binding of anti-α-Gal antibodies to glycans other than α-Gal cannot be completely ruled out, due to the presentation of many other glycan epitopes, such as the ABO, Diego, or Kell antigens [65], on the surface of erythrocytes.
Binding Affinity
Surface plasmon resonance (SPR) measurements were performed to investigate the dissociation constants of purified anti-α-Gal antibodies. Biotinylated glycan epitopes were coated on a streptavidin-precoated sensor chip. Measurements with blood group A and blood group O epitopes were omitted since no binding was detected via ELISA. LacNAc, Galβ1,3-Gal, and an antibody against coagulation factor VIII were used as negative controls. Purified anti-α-Gal antibodies only bound to the α-Gal epitope (K_D = 144 ± 20 nM, n = 3) and to the α-Gal disaccharide (K_D = 191 ± 18 nM, n = 3). The stabilizing effect of the additional GlcNAc residue (GlcNAcβ1,4-Galα1,3-Gal) did not significantly affect the interaction. However, a higher affinity of the trisaccharide, compared to the disaccharide (Galα1,3-Gal), was shown by an inhibition assay. Anti-α-Gal antibodies were incubated with different excesses of either the trisaccharide or the disaccharide, and the binding to immobilized bovine serum albumin (BSA)-α-Gal was determined (Figure 6). Twice as much disaccharide as trisaccharide was necessary for a 50% inhibition of BSA-α-Gal binding.
Figure 5. Agglutination of rabbit and human red blood cells to analyze the anti-α-Gal specificity toward the α-Gal epitope. Agglutinations were carried out as duplicates. Different dilutions (10 to 1280) of antibody solution (1 mg/mL) were applied to a round-bottom well plate and incubated with human or rabbit erythrocytes. Agglutination takes place when cross-linking between antibodies and erythrocytes becomes visible as a milky surface. The formation of a clot shows no agglutination. (A) Anti-blood group A antibody was applied on human red blood cells prepared from human plasma of an adult with blood group A as a positive control. (B) Anti-α-Gal antibody was applied on human red blood cells. The cells did not show any agglutination. (C) Anti-α-Gal antibody was applied on rabbit erythrocytes. The cells showed agglutination up to an antibody dilution factor of 80. Erythrocytes without additions were used as a negative control.
Figure 6. Anti-Gal antibody inhibition assay. A purified anti-Gal antibody was incubated with different excesses of trisaccharide (GlcNAc-β1,4-Gal-α1,3-Gal) or disaccharide (Gal-α1,3-Gal). Preincubated antibodies were subsequently incubated with immobilized BSA-α-Gal in a well plate. Data are presented as means ± SEM, n = 3.
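As an illustration of how the 50% inhibition points underlying the inhibition curves can be read off, the sketch below interpolates the molar excess required for half-maximal inhibition. The data points and the use of simple linear interpolation are assumptions for illustration only; they are not the measured values of this study.

```python
import numpy as np

# Hypothetical inhibition data: fold molar excess of free glycan vs. % inhibition
# of anti-α-Gal binding to immobilized BSA-α-Gal (illustrative values only).
excess = np.array([1, 5, 10, 50, 100, 500])        # fold molar excess
inhib_tri = np.array([5, 20, 35, 60, 75, 90])      # trisaccharide, % inhibition
inhib_di = np.array([2, 10, 20, 45, 62, 85])       # disaccharide, % inhibition

def excess_for_50pct(excess, inhibition):
    # Linearly interpolate the excess at which 50% inhibition is reached
    # (inhibition values must be monotonically increasing for np.interp).
    return float(np.interp(50.0, inhibition, excess))

ratio = excess_for_50pct(excess, inhib_di) / excess_for_50pct(excess, inhib_tri)
print(f"~{ratio:.1f}x more disaccharide than trisaccharide needed for 50% inhibition")
```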
The affinity of anti-α-Gal antibodies to α-Gal is high compared to the dissociation constants of other carbohydrate-specific antibodies. The dissociation constant K_D of high-affinity monoclonal antibodies specific for chlamydial lipopolysaccharide was reported in the range of about 500 to 700 nM [66]. That is a 2.5-to-4-times higher dissociation constant than the K_D calculated for the binding of anti-α-Gal antibodies to α-Gal in our assay. Compared to the dissociation constants of specific therapeutic monoclonal antibodies, which are in the picomolar range for their corresponding antigens, the affinity of anti-α-Gal was lower by a factor of up to 2000 [67]. To include potential effects of polypeptide backbones on the binding affinity of anti-α-Gal antibodies, we immobilized bovine thyroglobulin on a carboxymethyl (CM5) sensor chip and measured the binding affinity to purified anti-α-Gal antibodies. The dissociation constant was determined as 1.6 nM, which suggests a strong positive influence of the protein part on anti-α-Gal antibody affinity. Binding studies of anti-α-Gal IgE with bovine thyroglobulin and with human serum albumin coated with α-Gal determined dissociation constants of 36 nM (bovine thyroglobulin) and 363 nM (albumin) [64]. Experiments to determine the binding affinity of an engineered antibody against N-glycolylneuraminic acid (Neu5Gc) on proteins resulted in dissociation constants of about 1 µM [68]. This is a 1000-fold higher dissociation constant than the K_D values determined here (K_D between anti-Gal and bovine thyroglobulin = 1.6 nM), indicating a very high affinity of purified anti-α-Gal antibodies toward α-Gal epitopes. The high specificity of the purified antibodies is the decisive difference from the recombinant anti-Gal antibody variants M86 and G-13. In contrast to the simple enrichment and purification of human, highly specific anti-Gal antibodies, the expression of the recombinant variants leads to numerous problems such as autolysis or self-agglutination [59,[69][70][71].
N-Glycosylation Profile of Anti-α-Gal Antibodies
The glycosylation of antibodies has been extensively studied [72][73][74]. To characterize the N-glycosylation profile of purified anti-α-Gal antibodies, their N-glycans were enzymatically released with PNGase F. Enzymatically released N-glycans were labeled with fluorescent RapiFluor MS reagent, separated via hydrophilic interaction liquid chromatography (HILIC), and applied to the mass spectrometric analysis. The resulting N-glycan profile of purified anti-α-Gal antibodies revealed a typical human IgG-like glycosylation (Figure 7). Figure 7. N-glycan profile of anti-α-Gal antibodies. PNGase F-released N-glycans were separated via hydrophilic interaction liquid chromatography. Peak identification was achieved by independent Q-TOF mass spectrometry. A schematic N-glycan representation is given for signals with a relative peak area >2% only. Evaluations were carried out using the Waters UNIFI 1.9.2 software.
Out of the detected N-glycans, 86% were fucosylated. The most prominent structure was the complex biantennary, fucosylated glycan without antennary galactoses (G0), followed by the mono- and fully galactosylated forms (G1 and G2). Bisecting GlcNAc (15% of all structures) and sialylated glycans (12% partial, 1% full) represent further elements. To our knowledge, the N-glycosylation profile of anti-α-Gal antibodies was not reported before. Especially the high grade of core fucosylation (86%), the low grade of sialylation (13%), and a moderate grade of bisection (15%) are typical for human IgG [75,76]. Glycan structure-function relationships, particularly for antibodies, are a major issue to date [77]. A fully elucidated N-glycan profile of anti-α-Gal may help prospective therapeutic applications of the antibody or optimize its recombinant expression.
Verification of the Anti-α-Gal Suitability as Detection Antibody
In this study, purified anti-α-Gal antibodies were investigated for their applicability in Western blot analyses applying HRP-induced chemiluminescence for detection. For proof of concept, the α-Gal-carrying glycoproteins bovine thyroglobulin, cetuximab, and a synthetically produced BSA-α-Gal-conjugate were analyzed utilizing the purified anti-α-Gal fraction as primary antibodies ( Figure 8A).
After reduction and alkylation, bovine thyroglobulin showed bands from 75 to over 250 kDa, representing products caused by the reduction of disulfide bridges, which is in line with reports from the literature [78]. Synthetic BSA-α-Gal revealed a monomer at about 66 kDa and bands from 150 kDa to over 250 kDa, most likely due to aggregation. Cetuximab showed a band at 50 kDa, indicating glycosylation at the heavy chain. After cleavage at the hinge region, catalyzed by IdeZ protease, the Fab fragment was detected at 100 kDa. After reduction, this signal was shifted to ~25 kDa. Anti-α-Gal antibodies bound neither to the α1,3-galactosidase-treated proteins nor to the negative control (BSA).
Different amounts of cetuximab were separated by SDS-PAGE, blotted, and developed with HRP-conjugated anti-α-Gal antibodies (Figure 8B) to evaluate the detection limit of the anti-α-Gal antibodies. Cetuximab was detectable down to 0.04 µg (0.28 pmol). Given the known number of four α-Gal epitopes per cetuximab molecule [79], this corresponds to a detection limit of 1.12 pmol of α-Gal epitopes.
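The conversion from detected mass to molar amounts of antibody and α-Gal epitopes is simple arithmetic; the short sketch below reproduces it, assuming a molar mass of roughly 146 kDa for cetuximab (the exact value used in the text is not stated).

```python
# Worked arithmetic for the Western blot detection limit (mass and epitope count
# from the text; the ~146 kDa molar mass is an assumption for illustration).
detected_mass_ug = 0.04           # smallest detectable amount of cetuximab
molar_mass_g_per_mol = 146_000    # approximate molar mass of an IgG1 antibody
alpha_gal_per_antibody = 4        # reported α-Gal epitopes per cetuximab molecule [79]

mol_antibody = detected_mass_ug * 1e-6 / molar_mass_g_per_mol   # mol
pmol_antibody = mol_antibody * 1e12
pmol_alpha_gal = pmol_antibody * alpha_gal_per_antibody

print(f"{pmol_antibody:.2f} pmol cetuximab ≈ {pmol_alpha_gal:.2f} pmol α-Gal epitopes")
# ≈ 0.27 pmol antibody and ≈ 1.10 pmol α-Gal, close to the reported 0.28/1.12 pmol
```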
Detailed glycan analysis requires considerable time, expensive equipment, and skilled personnel, whereas this assay is fast and comparatively uncomplicated. However, an obvious disadvantage of the Western blot assay is that only glycoproteins of sufficiently high mass can be analyzed, due to the preceding separation via gel electrophoresis. Glycopeptides are too small to be detected by Western blot, whereas mass spectrometric detection of glycopeptides or even smaller molecules has been demonstrated many times [80][81][82]. Therefore, in this study only selected model glycoproteins with high amounts of α-Gal epitopes were analyzed via Western blot. Purified anti-α-Gal antibodies can also be used for quantification of the amount of α-Gal in specific glycoproteins in ELISA assays with higher sensitivity (see Figure 4 for the use of anti-α-Gal antibodies in ELISA in general). For this, more therapeutic glycoproteins with low α-Gal content should be analyzed to identify the detection limit of the anti-α-Gal antibody.
For example, the monoclonal antibodies palivizumab, dinutuximab, necitumumab, and elotuzumab, which are produced in α-Gal-synthesizing murine cells [83], may require continuous monitoring of the α-Gal content during bioprocessing.
Author Contributions: Writing-original draft preparation, visualization, investigation, conceptualization, methodology A.Z.; conceptualization, methodology, validation, investigation, data curation, writing-review and editing, J.R.; conceptualization, methodology, validation, investigation, data curation, writing-review and editing, G.K.; supervision, project administration, writing-review and editing, S.H.; supervision, project administration, writing-review and editing, M.K.P. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding. | v2 |
2019-02-22T08:01:39.642Z | 2019-02-01T00:00:00.000Z | 73462199 | s2orc/train | Plasmid-Mediated Colistin Resistance in Salmonella enterica: A Review
Colistin is widely used in food-animal production. Salmonella enterica is a zoonotic pathogen that can pass from animal to human microbiota through the consumption of contaminated food and cause disease, often severe, especially in young children, the elderly, and immunocompromised individuals. Recently, plasmid-mediated colistin resistance was recognised, and mcr-like genes are being identified worldwide. Colistin is not an antibiotic used to treat Salmonella infections, but it has been increasingly used as one of the last treatment options for carbapenem-resistant Enterobacteria in human infections. The finding of mobilizable mcr-like genes became a global concern due to the possibility of horizontal transfer of plasmids that often carry resistance determinants to beta-lactams and/or quinolones. An understanding of the origin and dissemination of mcr-like genes in zoonotic pathogens such as S. enterica will facilitate the management of colistin use and target interventions to prevent further spread. The main objective of this review was to collect epidemiological data on mobilized colistin resistance in S. enterica, describing the mcr variants, the identified serovars, the origin of the isolates, the country, and other resistance genes located on the same genetic platform.
Introduction
The overuse and inappropriate use of antibiotics in diverse settings, such as human and veterinary therapeutics, animal production and agriculture, is widely accepted as one of the major causes of the emergence of antimicrobial resistance worldwide [1,2]. During the past decades, we have witnessed the evolution of bacteria under the selective pressure of antibiotics, with new resistance mechanisms emerging and spreading across bacterial populations from various ecological niches. Antimicrobial resistance was responsible for about 700,000 deaths in 2016, and this number is estimated to increase to 10 million annual deaths by 2050 [2].
In human medicine, the treatment of infections due to multidrug-resistant bacteria, such as those caused by Pseudomonas aeruginosa, Acinetobacter baumannii and carbapenem-resistant Enterobacteria, is a real challenge. The lack of effective antibiotics led to the recent use of an old antibiotic, colistin, as one of the last-resort therapeutic options. The World Health Organization reclassified colistin as an antibiotic of critical importance in human clinical settings [3].
However, colistin has been widely used in animal production in several countries for therapeutic, prophylactic and growth promotion purposes [4,5]. The use of low-dose, prolonged courses of antibiotics in livestock is clearly associated with the selection of resistant zoonotic strains that can spread by direct animal-to-human contact or indirectly, for example through the food chain [6,7]. The dissemination of resistance determinants is fueled by lateral gene transfer mechanisms, such as conjugation [8]. Animal
All these findings suggest that animals are the reservoir of the mcr genes with emphasis on the pigs, mostly due to the heavy usage of polymyxins in food animal production for therapy, prophylaxis and metaphylaxis purposes, which contributes for selection of mcr producers. Furthermore, the reports of identification of mcr genes have been mostly from animal isolates when compared with human isolates, sustaining animals as the main reservoir. Moreover, some genetic elements, like other resistance genes, insertions sequences and plasmids that are more prevalent and widespread in bacteria of animal origin, are found closely associated with the mcr-like genes [29].
Salmonella enterica: Salmonellosis and Enteric Fever in Humans
S. enterica infections are an important public health concern worldwide. S. enterica serovars can be separated into two main groups: the typhoidal Salmonella, which comprise S. enterica serovar Typhi (from now on designated as S. Typhi), S. Paratyphi A, S. Paratyphi B, and S. Paratyphi C, whereas all the other serovars are referred to as non-typhoidal Salmonella (NTS) [30].
Animals are the primary reservoir of NTS, and NTS infections, generally called salmonellosis, are a huge threat in developing countries, especially for infants, young children and HIV carriers, while in developed countries infection is mostly acquired through the food chain by ingestion of contaminated, commercially produced animal-derived food [7,31,32]. It is estimated that NTS gastroenteritis is responsible for about 93.8 million illnesses and 155,000 deaths each year worldwide; of these, an estimated 80.3 million cases are foodborne, with very high associated costs. Most of these cases occur in developing countries, in contrast to developed countries, where the rate is lower [33].
Although food-producing animals are the main reservoirs of S. enterica, a small group of serovars is capable of infecting and colonizing only certain hosts. For example, typhoidal serovars are human host-restricted organisms that cause typhoid fever and paratyphoid fever (both also known as enteric fever) [30,34].
Typhoidal Salmonella serovars are together responsible for 27 million annual cases of enteric fever, which result in more than 200,000 deaths worldwide [35]. In developing countries, where sanitary conditions and access to clean water are a public health problem, enteric fever is generally endemic. The fecal-oral route is the main route of spread of typhoidal Salmonella. In some countries, especially in Southeast Asia, S. Paratyphi infections are increasing; it is estimated that this serovar is responsible for about half of all enteric fever cases [36].
Currently, colistin is not used to treat human infections caused by this bacterium, so the development of colistin resistance in S. enterica is not directly relevant to its clinical treatment. However, in vivo colistin resistance has been observed in S. enterica from food-producing animals [37][38][39][40], and resistance determinants inserted in mobile genetic elements (e.g., mcr-like genes) can be laterally transferred to other species, commensals or pathogens of animal and human origin. Moreover, the genetic platforms carrying mcr-like genes frequently host resistance genes that hinder the efficacy of other antibiotic classes [41]. Therefore, the presence of mcr-like genes should not be neglected in this zoonotic pathogen.
Colistin Resistance in Salmonella enterica
S. enterica strains have developed resistance to a variety of antimicrobials. Chloramphenicol was the first antibiotic used in the treatment of typhoid fever, but the emergence of resistance soon after its introduction led to its replacement by trimethoprim-sulfamethoxazole and ampicillin or amoxicillin. Multidrug-resistant strains emerged with the overuse of these first-line treatment drugs, and fluoroquinolones, such as ciprofloxacin, and extended-spectrum cephalosporins, such as ceftriaxone, were introduced in the treatment of Salmonella infections. However, resistance to these antimicrobials is now also frequent [7,30,42].
In S. enterica, chromosomal colistin resistance involves activation of the PmrA/PmrB and PhoP/PhoQ two-component regulatory systems, which are responsible for the biosynthesis of L-Ara4N and PEtn. The activation of these systems is related to environmental stimuli, such as low concentrations of Mg2+, or to specific mutations in the genes encoding the two-component regulatory systems [4,23,43]. These mutations lead to the constitutive expression of PmrA/PmrB and PhoP/PhoQ, with consequent activation of the operons arnBCADTEF and pmrCAB, and permanent addition of L-Ara4N and PEtn, respectively, to lipid A [23].
Other alterations, such as deacylation of lipid A by PagL [23,44], and activation of the transcription of genes involved in adaptation and survival of the bacterial cells by RpoN [23,45], can also lead to colistin resistance in S. enterica, but are less common.
Plasmid-mediated colistin resistance conferred by the mcr-1 [46], mcr-2 [47], mcr-3 [48], mcr-4 [14] and mcr-5 [15] genes has already been identified in different serovars of S. enterica. As in other bacterial species, mcr-like genes have been detected in isolates of different origins, such as food-producing animals, food products and human samples, and are inserted in diverse genetic environments and plasmid backbones. It is of note that the presence of mcr genes can be associated with low-level resistance to colistin [4,14,15,46,[49][50][51], allowing them to persist undetected. Table 1 summarizes the reports on mcr-like genes and their variants in this species and the key findings of each study. Briefly, S. Typhimurium is the most prevalent serotype harbouring mcr genes. This serotype is also one of the most frequent causes of human infections [52]. Monophasic variants of S. Typhimurium such as 1,4,[5],12:i:- are also widely reported. It is still worth noting that mcr-positive S. Paratyphi B strains are isolated from animal samples, though this serotype usually infects humans and causes invasive disease [52]. Food-producing animals appear to be the main reservoir of mcr-positive S. enterica strains. Poultry and swine are the most reported sources of isolates. Nonetheless, there are isolates from human clinical sources, which suggests dissemination from animals to humans along the food chain [53]. In addition, China is the country where the most mcr-positive S. enterica strains have been identified. This is consistent with the high rates of colistin use in livestock and veterinary medicine, which leads to the emergence of resistance [10]. Nevertheless, in European countries such as Italy and Portugal, where colistin is frequently used for therapeutic and metaphylactic purposes in animal husbandry, reports are also emerging [10,41,53]. On the other hand, European countries are more engaged in screening and surveillance activities, which explains the high number of European reports [14,20,48,54,55]. These studies evidence the wide and ubiquitous spread of mcr genes around the world. Although the first report of mcr-1 only occurred in 2015, from an E. coli isolate [9], these genes have been carried by S. enterica since at least 2008 [56]. Finally, several mcr-carrying S. enterica isolates show multidrug resistance profiles, with genes conferring resistance to tetracyclines, beta-lactams including cephalosporins, quinolones, sulfamethoxazole/trimethoprim and streptomycin, which limits the therapeutic options for treatment of S. enterica infections.
The existence of colistin resistance genes embedded in mobile genetic elements, such as plasmids, is a huge concern because they can be horizontally spread across different bacteria. Furthermore, mcr genes can be located in plasmids encoding other resistance genes, such as bla CTX-M, floR and/or qnr, giving rise to strains resistant to several antibiotic classes, including polymyxins, the majority of beta-lactams including broad-spectrum cephalosporins and monobactams [48,57,58], amphenicols [51] and quinolones [48,59], respectively. For instance, mcr-1 and bla CTX-M-1 genes embedded in an IncHI2 plasmid were co-transferred from S. enterica isolated from swine retail meat by conjugation under colistin selection [41]. The co-selection of resistance might compromise the treatment of complicated gastroenteritis and invasive infections caused by S. enterica.

Table 1 (fragment of key findings): first report of the mcr-5 gene; the transfer of colistin-resistance-mediating phosphoethanolamine transferase genes from bacterial chromosomes to mobile genetic elements has occurred in multiple independent events, raising concern regarding their variety [15]. MDR, multidrug resistant.
Conclusion
Here we reviewed the epidemiology of mcr-like genes identified in S. enterica serovars. Colistin is not expected to become an antibiotic for treating human enteric fever or gastroenteritis caused by this pathogen; nonetheless, mcr-like genes are carried on conjugative plasmids that spread among bacterial populations. The zoonotic feature of S. enterica cannot be neglected, and plasmid-mediated colistin resistance genes may reach the human microbiota through the food chain. Multidrug-resistant genetic platforms can be selected not only by colistin but also by the other antibiotics used in livestock, such as quinolones. It is of paramount importance to understand where resistant pathogens are emerging in order to implement infection control measures to prevent their spread. The emergence of mcr-like genes is not confined to Asia, as initially supposed; these genes are found in countries with stricter antibiotic use in animal production, even in strains isolated ten years ago, raising questions about the stability of these plasmids in bacterial populations and their impact on bacterial fitness. Further research on mcr-like genes in zoonotic pathogen populations is necessary to unveil the true impact on human health and to manage colistin use to minimize selection, proliferation and spread of drug-resistant bacteria. | v2
2018-12-27T14:02:12.891Z | 2018-12-26T00:00:00.000Z | 84846039 | s2orc/train | Hierarchical feature fusion framework for frequency recognition in SSVEP-based BCIs
Effective frequency recognition algorithms are critical in steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs). In this study, we present a hierarchical feature fusion framework which can be used to design high-performance frequency recognition methods. The proposed framework includes two primary techniques for fusing features: spatial dimension fusion (SD) and frequency dimension fusion (FD). Both SD and FD fusions are obtained using a weighted strategy with a nonlinear function. To assess our novel methods, we used the correlated component analysis (CORRCA) method to investigate the efficiency and effectiveness of the proposed framework. Experimental results were obtained from a benchmark dataset of thirty-five subjects and indicate that the extended CORRCA method used within the framework significantly outperforms the original CORRCA method. Accordingly, the proposed framework holds promise to enhance the performance of frequency recognition methods in SSVEP-based BCIs.
Introduction
A brain-computer interface (BCI) is a type of communication system that can directly translate brain signals into digital commands for the control of external devices without the involvement of the peripheral nerves and muscles.
BCI systems show great promise for providing communication access to people with severe motor disabilities and typically developed individuals alike [1,2,3]. Due to its relative portability, low cost, and excellent temporal resolution, electroencephalography (EEG) remains the most widely investigated sensing modality in BCI research [4]. To date, several types of EEG signals have been used to design and operate BCIs, e.g. ERP [5,6,7], sensorimotor rhythm (SMR) [8,9,10], steady-state visual evoked potential (SSVEP) [11,12,13], and motion-onset visual evoked potentials [14,15], etc. In addition, some researchers have begun to explore hybrid BCI approaches using multiple signals and modalities [16,17,18,19]. In recent years, an increasing number of researchers have focused on SSVEP-based BCIs because of their high information transfer rate (ITR) and minimal user training requirements [20,21,22,23,24].
For SSVEP-based BCIs, an effective frequency detection algorithm plays an important role in overall system performance [4]. In the literature, various techniques for SSVEP feature extraction and classification have been developed [25,26]. Among them, the most popular are based on multivariate statistical algorithms, such as canonical correlation analysis (CCA) [27,28] and the multivariate synchronization index (MSI) [29,30], which are easy to implement without complex optimization procedures. Recently, we proposed a novel frequency recognition method, termed CORRCA [31], based on correlated component analysis (COCA), which is also a multivariate statistical algorithm [32,33,34]. The CORRCA method significantly outperforms the state-of-the-art CCA method [31]. As we know, CCA is a multivariate statistical method that measures the correlation between two sets of signals [35]. It requires that the canonical projection vectors be orthogonal, and it generates two different projection vectors for the two signals. In contrast, COCA aims to maximize the Pearson Product Moment Correlation coefficient and does not necessitate orthogonality between the projection vectors. Furthermore, it generates only one projection vector for the two signals, thus simplifying subsequent analysis [36].
For EEG signal analysis, the COCA method may therefore be more efficient and practical in real-world applications [31,32].
Although the standard CCA and CORRCA algorithms have been applied in the literature with satisfactory results, the performance of these systems could be further improved by applying sophisticated signal processing technologies. One such approach is the filter bank analysis method [28], which employs several bandpass filters to generate multiple subband components of the input signals.
This filter bank based approach first applies a frequency recognition method to obtain features on each subband component, and then integrates all the features in classification using a weighted combination. In fact, we can consider the filter bank technique as a form of feature fusion in the frequency domain. This fusion strategy is just one way to explore the discriminating information implicit in the original signals. Here we term it frequency domain fusion (FD fusion).
In a recent motor imagery based BCI study, it was determined that weighting and regularizing common spatial patterns (CSP) features allows all features to be used and precludes the need for feature selection, thus avoiding the loss of valuable information and enhancing the performance of CSP [37]. We can consider this type of feature weighting method as a feature fusion strategy in the spatial domain. Analogously, we term it spatial domain fusion (SD fusion). Both the standard CCA method and the CORRCA method provide multiple correlation coefficients to measure the correlation between two multi-dimensional signals. It is worth noting that traditionally only the largest coefficient is selected as the feature while all others are ignored [27,28,38,31]. This inevitably results in the loss of discriminative information. We posit that merging these two fusion strategies will result in a more robust target detection framework for SSVEP-based BCIs. To the best of our knowledge, a framework including both SD and FD fusions has never been introduced in SSVEP-based BCIs to date.
Motivated by previous studies, here we propose a hierarchical feature fusion framework for SSVEP target detection by implementing both SD and FD fusions. In order to evaluate the efficiency and effectiveness of the proposed framework, we utilized the CORRCA method as a representative frequency recognition algorithm to investigate if the proposed framework can affect its performance. Furthermore, we conducted extensive experimental evaluation on a benchmark dataset of thirty-five subjects.
The remainder of this paper is organized as follows. Section II describes the proposed framework as well as its implementation and application techniques; Section III describes the dataset and performance evaluation; Section IV presents the experimental results; and the last two sections discuss and conclude this study.
The hierarchical feature fusion framework and its implementation
The hierarchical feature fusion framework includes three main stages, i.e., bandpass filtering, hierarchical feature fusion and frequency recognition (see Fig. 1).

• In the stage of bandpass filtering, several bandpass filters are employed to filter the EEG signals into multiple sub-band components.
The filter bank design for the bandpass filters should be optimized according to the frequencies of the SSVEP stimulus. Here, all the sub-bands cover multiple harmonic frequency bands with the same high cut-off frequency at the upper bound frequency. For a more detailed analysis of the frequency band selection, refer to [28].
• In the second stage, the hierarchical feature fusion includes SD fusion and FD fusion, respectively. First, for each pair of sub-band signals, a spatial filtering method yields multiple features that measure the correlation levels. Then, SD fusion is implemented on these features to obtain a new feature for each sub-band. Subsequently, FD fusion is used to generate the final feature at each stimulus frequency by combining the new features obtained by SD fusion on all the sub-bands. For both SD and FD fusions, many strategies could be used to fuse the features. In the present study, we used a weighted method with a nonlinear function to combine all the features obtained via the spatial filtering method in the SD fusion. For the FD fusion, we also implemented a weighted strategy with a nonlinear function to combine all the features.
• In the third stage, the proposed framework performs the frequency recognition based on the features obtained in the second stage. Supervised learning methods, such as linear discriminant analysis (LDA), deep learning, as well as unsupervised learning methods can be used in this stage.
Here, we employ the latter because of its satisfactory performance in previous studies [27,4,31]. Specifically, the target frequency is recognized as the one with the highest magnitude of the final feature.
In the implementation of the framework, no prior knowledge about the weights can be directly inferred in the SD fusion step, and the weights themselves might depend on the chosen spatial filtering method. In this subsection, these weights are denoted as φ_1, φ_2, ..., φ_C.
Denote a test sample as Z ∈ R^{C×N} and the i-th frequency template signal as Y_i ∈ R^{C×N} (i = 1, 2, ..., N_f). Furthermore, denote the sub-band signals of Z and Y_i after the l-th bandpass filtering as Z_l ∈ R^{C×N} and Y_{i,l} ∈ R^{C×N}, respectively, l = 1, 2, ..., SN; i = 1, 2, ..., N_f. With a spatial filtering method, C features in descending order, denoted as λ^l_1, λ^l_2, ..., λ^l_C, can be obtained for Z_l and Y_{i,l}. Here, C is the number of variables, N is the number of samples, N_f is the number of stimulus frequencies, and SN is the number of sub-bands.

Figure 1. Flowchart of the proposed framework. For a test sample Z ∈ R^{C×N} and individual template signals Y_i ∈ R^{C×N} at the i-th frequency (i = 1, 2, ..., N_f), we first divide them into SN sub-band signals using bandpass filters. In each sub-band, we can obtain C features in descending order by a spatial filtering method, e.g., λ^1_1, λ^1_2, ..., λ^1_C and λ^SN_1, λ^SN_2, ..., λ^SN_C in the first and SN-th sub-bands. In the SD fusion step, we fuse these features with the weights φ_1, φ_2, ..., φ_C in each sub-band to produce new features. Then, in the FD fusion step, the final feature at the i-th frequency for the test sample Z and Y_i is calculated. Finally, the frequency of Z is detected by the rule defined by formula (4).
Then, with SD fusion, the new feature in the l-th sub-band at the i-th frequency is obtained as a weighted combination of λ^l_1, λ^l_2, ..., λ^l_C with the weights φ_1, φ_2, ..., φ_C, as defined in formula (1). With FD fusion, the final feature for the test sample Z and the template signal Y_i at the i-th frequency (i = 1, 2, ..., N_f) is then obtained by combining the new features of all SN sub-bands, as defined in formula (2), where w_1, w_2, ..., w_SN are the weights. These weights can be obtained by a weighting nonlinear function, given in formula (3) (refer to [28]), where a_1 and b_1 are two parameters that control the classification performance.
Then, the frequency f_Z of Z is specified as the frequency of the template signal that has the maximal final feature with Z, i.e., f_Z = arg max_{f_i} ρ_i, i = 1, 2, ..., N_f (formula (4)), where ρ_i denotes the final feature obtained by FD fusion at the i-th frequency.
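To make the hierarchical fusion concrete, the sketch below implements one plausible reading of formulas (1)-(4): SD fusion as a weighted sum of the C sub-band features, FD fusion as a weighted sum of squared sub-band features with power-function weights in the spirit of [28], and the final arg-max decision. The exact functional forms (in particular the squaring in the FD step and the shape of the weight functions) are assumptions, and spatial_filter_features is a placeholder for any spatial filtering method such as CORRCA.

```python
import numpy as np

def sd_fusion(lams, phi):
    """Formula (1)-style SD fusion: weighted sum of the C features of one sub-band."""
    return float(np.dot(phi, lams))

def fd_weights(sn, a1=1.25, b1=0.25):
    """Formula (3)-style weights w_l = l**(-a1) + b1 (power form, as in [28])."""
    return np.array([(l + 1) ** (-a1) + b1 for l in range(sn)])

def fd_fusion(sub_band_features, w):
    """Formula (2)-style FD fusion: weighted sum of squared sub-band features."""
    r = np.asarray(sub_band_features, dtype=float)
    return float(np.sum(w * r ** 2))

def recognize_frequency(test_subbands, template_subbands, phi, w, spatial_filter_features):
    """Formula (4): pick the stimulus whose templates yield the largest fused feature.

    test_subbands:      list of SN arrays, each C x N (band-passed test signal)
    template_subbands:  list over the N_f stimuli; each entry is a list of SN arrays
    spatial_filter_features(Z, Y): returns the C features in descending order
    """
    rho = []
    for templates in template_subbands:                  # loop over stimulus frequencies
        r = [sd_fusion(spatial_filter_features(z, y), phi)
             for z, y in zip(test_subbands, templates)]  # loop over sub-bands
        rho.append(fd_fusion(r, w))
    return int(np.argmax(rho)), rho
```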
The application of the framework on standard CORRCA method
The CORRCA method is developed based on COCA, which is a technique to maximize the Pearson Product Moment Correlation coefficient between two multi-dimensional signals [32]. Compared to CCA, COCA relaxes the constraint on orthogonality among the projection vectors, and generates a single projection vector for the two multi-dimensional input signals. Mathematically, COCA is an optimization problem, and its projection vectors can be obtained by solving a generalized eigenvalue problem. COCA has been used to investigate cross-subject synchrony of neural processing [34], and inter-subject correlation in evoked encephalographic responses [33,39]. Recently, it was introduced for frequency recognition in SSVEP-based BCIs [31]. Below we provide a brief description of the standard CORRCA method.
Denote X ∈ R^{C×N} and Y ∈ R^{C×N} as two multidimensional variables, where C is the number of variables and N the number of samples. COCA seeks to find a projection vector w ∈ R^{C×1} such that the resulting linear combinations x = w^T X and y = w^T Y are maximally correlated (formula (5)), where ρ denotes the correlation coefficient. Taking the derivative of (5) with respect to w and setting it to zero, we obtain a generalized eigenvalue equation (formula (6)) [32]. With formula (6), we can obtain C correlation coefficients, i.e., ρ_1, ρ_2, ..., ρ_C, for X and Y. In the standard CORRCA method, only the maximal coefficient among {ρ_1, ρ_2, ..., ρ_C} is used as the feature for frequency recognition.
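The sketch below shows one way the C correlation coefficients of formulas (5)-(6) can be computed, assuming the generalized eigenvalue formulation commonly reported for correlated component analysis, (C_XY + C_YX) w = λ (C_XX + C_YY) w. The regularization term and the covariance normalization are implementation choices rather than details taken from [32].

```python
import numpy as np
from scipy.linalg import eigh

def corrca_coefficients(X, Y, reg=1e-8):
    """Return C correlation coefficients (descending) between X and Y, both C x N,
    using a generalized eigenvalue formulation assumed for COCA/CORRCA."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Cxx, Cyy = X @ X.T / n, Y @ Y.T / n
    Cxy = X @ Y.T / n
    A = Cxy + Cxy.T                            # symmetric cross-covariance term
    B = Cxx + Cyy + reg * np.eye(X.shape[0])   # pooled auto-covariance (regularized)
    _, W = eigh(A, B)                          # generalized eigenvectors, ascending order
    W = W[:, ::-1]                             # largest eigenvalue first
    rhos = []
    for k in range(W.shape[1]):
        x, y = W[:, k] @ X, W[:, k] @ Y        # one shared projection for both signals
        rhos.append(np.corrcoef(x, y)[0, 1])
    return np.array(rhos)
```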
Suppose that Z̄ ∈ R^{C×N} is a test sample and Ȳ_i ∈ R^{C×N} is an individual template signal calculated by averaging SSVEP data across multiple trials at frequency f_i, i = 1, 2, ..., N_f. Denote the maximal correlation coefficient between Z̄ and Ȳ_i as β_i, i = 1, 2, ..., N_f. Then, the frequency of Z̄ is determined by finding the frequency of the template signal that has the maximal correlation with Z̄, i.e., f_Z̄ = arg max_{f_i} β_i, i = 1, 2, ..., N_f (formula (7)).
In our preliminary analysis on the benchmark dataset, we investigated the classification performance of each correlation coefficient in the CORRCA method.
As shown in Fig. 2, which presents the correlation coefficient index versus the averaged classification accuracy across subjects at four time windows, the accuracies decrease nonlinearly as the correlation coefficient index increases. Notice that almost all the coefficients could provide discriminative information for achieving classification accuracies above chance level (1/40). Therefore, exploring the strategy of combining all the correlation coefficients, namely through SD fusion, could be beneficial for enhancing the overall performance of the CORRCA method. In addition, a previous study demonstrated that FD fusion can improve the performance of the CORRCA method [31]. Accordingly, we believe that the proposed framework could further enhance system performance compared to using either SD fusion or FD fusion alone. In the current study, the CORRCA method was used as a representative method to implement the proposed framework and to investigate how its performance is affected.
In implementing the framework on the CORRCA method, no prior knowledge about the weights in the SD fusion step can be directly inferred. As such, the key question is the optimal design of the feature weights, i.e., the weights of the correlation coefficients. Intuitively, the discriminative ability should vary among the features, and the accuracy curve obtained with differing relative feature weights should be nonlinear. As shown in Fig. 2, we find that the system accuracy decreases nonlinearly from the largest correlation coefficient to the smallest one, and generally the larger coefficients yield higher accuracies. The accuracy curve most closely follows an exponential function (as seen in Fig. 2). Accordingly, we adopted a weighting nonlinear function to calculate the weights in the SD fusion, as given in formula (8),
where a_2 and b_2 are two parameters that control the classification performance.
In current study, a 2 and b 2 were optimized via a grid search using a standard CORRCA method on the training set. Here, the weights were calculated using formula (3) The parameter values of a 2 and b 2 that led to the highest ITR were selected (i.e. 0.6 and 0), respectively. Then these values were applited onto the test set in the following calculation and analysis. More details of the corresponding procedure and parameter settings employed for bandpass filtering can be found in our previous study [31]. Note that the choice of formula (8) is not unique, other function based may also be adequate. The CORRCA method with the proposed framework is termed as the hierarchical feature fusion CORRCA (HFCORRCA) hereafter.
EEG dataset
The EEG dataset used for evaluation is a publicly available benchmark dataset consisting of offline SSVEP-based BCI spelling experiments on thirty-five healthy subjects (seventeen females, mean age 22 years) [40]. From each trial of the BCI experiment, a 6-s segment beginning 0.5 s before stimulus onset was extracted. After that, each epoch was downsampled to 250 Hz. A more detailed description of the EEG dataset can be found in the literature [40].
In the current study, we divided the dataset into two parts, i.e., a training set and a testing set. The data from the first fifteen subjects were used as the training set, and the remaining twenty subjects were used as the testing set.
Performance evaluation
In this study, we carry out an extensive comparison between the spatial filtering method with and without the proposed framework. For the spatial filtering method, CORRCA was used because of its robust performance. The classification accuracy and ITR were adopted as the evaluation metrics. The 0.5 s cue time was included in the calculation of ITR. In the current study, the individual templates for each frequency were generated by averaging SSVEP data across multiple blocks at the corresponding frequency. We adopted the leave-one-block-out cross-validation method to evaluate performance in the comparisons between the two methods. Specifically, one EEG block out of the six blocks was selected as the testing set and the remaining blocks were treated as the training set. For each subject, this procedure was repeated six times such that each block was used as the testing set once.
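The sketch below illustrates such a leave-one-block-out evaluation in Python. It assumes, for illustration only, that each block contains exactly one trial per stimulation frequency and that a classifier callback (e.g., the `recognize_frequency` sketch above) is supplied; it is not the authors' code.

```python
import numpy as np

def leave_one_block_out(blocks, classify_fn):
    """blocks: array of shape (n_blocks, n_freqs, C, N) for one subject.

    For each held-out block, templates are built by averaging the remaining
    blocks per frequency, and every trial of the held-out block is classified.
    Returns the mean accuracy across the folds.
    """
    n_blocks, n_freqs = blocks.shape[:2]
    accs = []
    for test_b in range(n_blocks):
        train = np.delete(blocks, test_b, axis=0)
        templates = train.mean(axis=0)                 # (n_freqs, C, N)
        correct = sum(classify_fn(blocks[test_b, f], templates) == f
                      for f in range(n_freqs))
        accs.append(correct / n_freqs)
    return float(np.mean(accs))
```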
Furthermore, we used the r-square value to evaluate the discriminability of the features obtained by each method. In the current study, the r-square value was computed from the feature values of the attended target stimulus and the maximal feature values of the non-attended stimuli [38,31], using the following formula [41].
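The formula itself (and the signed variant referenced below) is not reproduced in this excerpt. A hedged sketch of the point-biserial definition commonly used in the BCI literature, assumed here rather than taken from this paper, is:

```latex
r = \frac{\sqrt{N_1 N_2}}{N_1 + N_2}\,
    \frac{\operatorname{mean}(F_1) - \operatorname{mean}(F_2)}
         {\operatorname{std}(F_1 \cup F_2)},
\qquad
r^2_{\mathrm{signed}} = \operatorname{sign}(r)\, r^2 .
```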
where F_1 and F_2 are the sets of feature values of the attended target stimulus and the maximal feature values of the non-attended stimuli, respectively, and N_1 and N_2 are the numbers of features in these two sets. As the sign of this difference is important, we adopt the signed r-square value calculated as above [41]. Previous studies have demonstrated that the number of channels and the number of training trials can significantly influence performance [38,31]. Here we further investigate the classification accuracies of the HFCORRCA and CORRCA methods for different numbers of channels and training trials. Fig. 6 illustrates the results of both methods using a 1-s time window. As shown, the HFCORRCA method yielded significantly better performance than the CORRCA method under all conditions. These results demonstrate that the proposed framework is a promising strategy for improving the performance of frequency recognition methods, such as the CORRCA method, and thus for further enhancing the performance of real-world systems.
Discussion
EEG is the most studied modality in BCI research; however, it is highly prone to contamination by noise and artifacts. These factors usually distort the useful components in the signal and result in dispersion of the discriminative information across the projected signals or features [27,37]. Mishuhina et al. found that combining features obtained using CSP spatial filters is beneficial in motor imagery data processing [37]. Based on this prior research, we assume that fusing the features produced by different frequency recognition spatial filters can also provide more robust features and avoid loss of discriminative information for frequency recognition in SSVEP-based BCIs (SD fusion). For instance, the CORRCA method utilizes only the largest correlation coefficient, corresponding to the spatial filter of the largest generalized eigenvalue, and all other features are discarded. Fusing all the correlation coefficients generated by all the spatial filters of the CORRCA method can enhance overall performance. Additionally, filter bank methods are widely used in SSVEP-based and motor imagery based BCIs to generate more discriminative features [28,37,42,31].
This technique achieves feature fusion in different frequency bands (FD fusion).
Traditionally, these two types of feature fusion methods have been used independently in BCI studies. To further boost the performance of SSVEP-based BCI systems, we propose a unified framework that integrates SD and FD fusion for frequency recognition. The experimental results indicate that the framework can boost frequency recognition when used with an existing method, e.g., the CORRCA method, to enhance its performance. Specifically, the CORRCA method within the proposed framework significantly outperforms the contemporary methods under various verification conditions. Under the proposed framework, some extended spatial filtering methods are simply special cases of the proposed method. For instance, the filter bank CORRCA method is the HFCORRCA method without SD fusion. The standard CORRCA method can be considered within our framework by applying only SD fusion with a step function that has a sharp transition from 1 to 0, so that only the largest coefficient is chosen. Conversely, for HFCORRCA, we adopted a soft weighting function, formula (8), to combine all the correlation coefficients; we kept all coefficients and combined them with different weights. As shown in Fig. 2, the accuracies decreased nonlinearly from the largest correlation coefficient to the smallest one, so it is intuitively reasonable to use different weights for these coefficients. To verify this idea, Fig. 7 presents the results obtained by CORRCA, eHFCORRCA (a method that replaces the different weights used in HFCORRCA with equal weights, i.e., φ_1 = 1, φ_2 = 1, · · · , φ_C = 1), and HFCORRCA. Although eHFCORRCA shows better performance than CORRCA at short time windows (less than 0.5 s), we find that using equal weights produces inferior results when fusing features with the CORRCA method.
These results further confirm the justification for adopting a nonlinear weighting function to fuse the features. Note that the weighting function in formula (8) can be replaced by other functions, such as w_k = k^{−a} + b (k = 1, 2, · · · , C), in formula (3). In future studies, we will investigate different weighting functions for different spatial filtering methods, which might further optimize system performance.
To confirm the contribution of each fusion operation (SD fusion and FD fusion) in the proposed framework, we calculated the classification accuracies of the CORRCA method with SD fusion only and with FD fusion only (FFCORRCA). As shown in Fig. 8, although both SD fusion and FD fusion individually improve the results compared to standard CORRCA, combining the two fusion operations yields the best performance (except at the shortest time window of 0.2 s). At the 0.2-s time window, the CORRCA method with FD fusion produced lower accuracies than the original CORRCA method, which may be attributed to the short data length (50 samples) available for bandpass filtering. SD fusion is, in general, believed to be less sensitive to data length than FD fusion.
In applying the fusion methods, we fused all the features in the SD fusion step and used five bandpass filters based on a previous study [28]. We did not optimize the two parameters C and SN. As shown in Fig. 2, the accuracies of the last four correlation coefficients are much lower than those of the first three, so one may argue that it is unnecessary to use all the correlation coefficients in the proposed framework. In order to provide some guidance for future online system development using HFCORRCA, we investigated how these two parameters influence the classification accuracies. C and SN were limited to [2:9] and [1:5], respectively, and two data time windows, 0.8 s and 1 s, were used. The upper limit of C is equal to the number of channels in the EEG data. As shown in Fig. 9, for the two chosen time windows, the values of SN that yielded the best results are the same, but those of C are different. We assessed the system accuracies when C ranged from 4 to 9 and found that the accuracy differed by less than 0.5% across this range. That is to say, the first four correlation coefficients can yield satisfactory results on the benchmark dataset. The optimal values of C and SN may depend on the specific dataset. For SN, when the stimulus frequencies lie in the same range as those used in the benchmark dataset, it can be set to 3. For C, the simplest choice is to set it equal to the number of features produced by the SF method.
When a calibration dataset is available, preliminary experiments can be used to optimize these values.
Lastly, it is worth mentioning that, for now, the proposed framework can only be applied directly to standard frequency recognition methods (e.g., CCA, CORRCA). In a future study, we will explore integrating this framework into extended frequency recognition methods, such as the two-stage method based on the standard CORRCA method (TSCORRCA) [31]. Moreover, the proposed framework can also be applied to CSP methods to boost classification rates in motor imagery based BCIs [37,42].
Conclusion
In summary, we propose a hierarchical feature fusion framework to address the loss of discriminative information that occurs during feature extraction in spatial filtering methods. The framework performs feature fusion using a nonlinear weighting function in both the spatial and frequency domains. Under this framework, more robust features for frequency recognition are generated.
Experimental results on a benchmark dataset demonstrated that the framework is effective at enhancing system performance. Specifically, the improved CORRCA method within this framework significantly outperforms the original CORRCA method. This novel framework may be used to develop efficient methods for frequency recognition in SSVEP-based BCIs. | v2 |
2022-02-26T00:12:54.368Z | 2021-12-31T00:00:00.000Z | 247095159 | s2orc/train | Simultaneous Occurrence of Pneumomediastinum, Pneumopericardium and Surgical Emphysema in a Patient of COVID-19: A Case Report
favourable clinical outcome without any invasive intervention. The patient was subsequently discharged with vitals maintained on room air. Keen clinical observation and timely imaging studies can help detect these complications early and direct management accordingly.
Introduction
Pneumomediastinum is defined as the presence of free air in the mediastinum, with an incidence of 1 in every 25,000 cases between the ages of 5 and 34 years, predominantly in males. 1 Many parenchymal and extra-parenchymal abnormalities have been described in COVID-19; the most frequent and earliest manifestation is parenchymal ground-glass opacity. 2 Spontaneous pneumomediastinum is an uncommon presentation, and spontaneous pneumopericardium with surgical emphysema is even rarer. We report a case of a 47-year-old man admitted for management of SARS-CoV-2 infection complicated by pneumomediastinum, pneumopericardium, and surgical emphysema, who responded to conservative management.
Case Report
We report the case of a 47-year-old gentleman who was hypertensive, hypothyroid, and a renal transplant recipient (on immunosuppressive therapy), who presented to the emergency department with complaints of dry cough and shortness of breath for 2 days and tested positive for SARS-CoV-2 by reverse transcriptase polymerase chain reaction (RT-PCR) on a nasopharyngeal swab. The cough and shortness of breath progressively increased and were present both at rest and on minimal exertion. He denied any history of smoking, tobacco, alcohol, or drug use. On physical examination, the patient was conscious and oriented, with a blood pressure of 114/68 mmHg, a pulse rate of 90 beats per minute, and a respiratory rate of 24 per minute with the use of accessory muscles of respiration at presentation. He was hypoxic, with an oxygen saturation (SpO2) of 85% on room air that improved to 97% on 10 L/min via a non-rebreather mask (NRM). Lung examination revealed bilateral crepitations, and the rest of his physical examination was within normal limits.
The patient showed type 1 respiratory failure on arterial blood gas analysis. His haematological profile was unremarkable, with a total leukocyte count of 12,100 (normal: 4,000-11,000) and 85% neutrophils. His biochemical profile showed normal liver function tests (LFT), blood urea of 64 mg/dl, and serum creatinine of 1.27 mg/dl (normal: 0.66-1.25), with raised inflammatory markers: LDH was 312 U/L (normal: 120-246 U/L), high-sensitivity C-reactive protein (hs-CRP) was 47.62 mg/L (normal: 0-5 mg/L), D-dimer was 852 ng/ml (normal: <500 ng/ml), IL-6 was 41.3 pg/ml (normal: 0-7 pg/ml), and ferritin was 906 ng/ml (normal: 30-400 ng/ml). His chest X-ray (CXR) showed bilateral infiltrates. Hence, a diagnosis of SARS-CoV-2 related severe acute respiratory illness (SARI) with hypoxemic respiratory failure was made. The patient was started on injectable ceftriaxone, azithromycin, methylprednisolone, and enoxaparin sodium, and injectable remdesivir was given for 5 days with daily LFT and KFT monitoring. Tacrolimus and mycophenolate mofetil were continued. He maintained normal vitals on non-invasive supplemental oxygen, requiring 10 litres/minute via NRM, but he was still symptomatic after 1 week of supportive management.
In the second week, the patient complained of persistent cough with increased dyspnea, with no history of fever, chest pain, orthopnea, pedal edema, decreased urine output, bluish discoloration of the nails, or altered behaviour. On examination, vitals were normal, with an SpO2 of 80% on room air, swelling and crepitus on the left side of the neck, and crepitations on chest auscultation. Investigations revealed normal electrocardiography, biochemical profile, and CPK-total and CPK-MB, with elevated inflammatory markers. The patient required an increased oxygen flow of 15 litres per minute via NRM to maintain saturation >94%. A repeat CXR showed bilateral infiltrates with surgical emphysema on the left side of the neck. The same day, a non-contrast CT (NCCT) of the chest was done, which revealed multifocal areas of consolidation in the bilateral lower lobes with ground-glass opacities in the bilateral upper lobes and the medial segment of the right middle lobe, and evidence of air tracking along the mediastinal vessels, trachea, and esophagus, extending into the deep infrahyoid neck spaces, suggestive of pneumomediastinum with surgical emphysema on the left side of the neck [Figure 1(a), 1(b)]. It also revealed air in the pericardial space suggestive of pneumopericardium, without any fluid infiltration and with no pneumothorax [Figure 1(c) and 1(d)]. Two-dimensional echocardiography revealed normal left ventricular function with an ejection fraction of 60%. The increased dyspnea was attributed to the development of pneumomediastinum and pneumopericardium with surgical emphysema. The patient was kept under intensive observation and was shifted to a higher level of care in the intensive care unit (ICU), where oxygen therapy was continued with NRM. Conservative management was continued with close monitoring of vitals and oxygen saturation. After 2 weeks, the patient started to improve clinically and was maintaining an SpO2 of 97% on room air at rest, with desaturation of up to 92% on mild exertion. A repeat nasopharyngeal swab sent for RT-PCR for SARS-CoV-2 came back negative.
A repeat NCCT chest was done on day 25 of illness, which showed improvement: a few foci of air attenuation remained in the pre-vascular space, suggestive of persistent pneumomediastinum, but the pneumopericardium and surgical emphysema had resolved (Figure 2). Eight days later, in the fifth week, a third NCCT chest showed resolution of the pneumomediastinum as well, without tube drainage, with typical COVID-19 changes persisting in the lung (Figure 3).
The patient recovered, remained asymptomatic for a further observation period of 3 days, and was then discharged based on existing guidelines. In the absence of a history of chest trauma, positive pressure ventilation, or forceful vomiting, and with no history of dysphagia or odynophagia, the development of pneumomediastinum and pneumopericardium was attributed to COVID-19 infection.
Discussion
Pneumomediastinum, a condition defined by the presence of air in the mediastinum, was first reported in 1819 by Laennec. 3 In 2015, Kouritas et al. classified pneumomediastinum into spontaneous (primary) and secondary due to iatrogenic, traumatic, or non-traumatic causes. 4 Spontaneous pneumomediastinum was described by Louis Hamman in 1939, which is why it is also called Hamman's syndrome. 5 Pneumomediastinum has been identified as the most common barotrauma-related event in patients on mechanical ventilation for COVID-19 related acute respiratory distress syndrome (ARDS). 6 Among patients with COVID-19 pneumonia, 42% may develop ARDS, with a median time to intubation of 8.5 days from the onset of symptoms. 7,8 The cause of ARDS in COVID-19 is damage to alveolar epithelial cells resulting in hyaline membrane formation in the initial stages, followed by interstitial edema and fibroblast proliferation. 9,10 This causes alveolar rupture, with leakage of air through the interstitium into the peribronchial and perivascular sheaths, backtracking into the hilum and eventually into the mediastinum, a phenomenon described as the Macklin effect, thus leading to pneumomediastinum. 4,11 Pneumomediastinum can be produced, in general, by three different mechanisms: 5 (1) by gas-producing microorganisms present in an infection of the mediastinum or adjacent areas; (2) rupture (whether traumatic or not) of the cutaneous or mucosal barriers, especially perforation of the esophagus or tracheobronchial tree, allowing air to enter the mediastinum from the neck, retroperitoneum, or chest wall; and (3) the presence of a decreasing pressure gradient between the alveoli and the lung interstitium, which may result in alveolar rupture. Air from the pneumomediastinum may decompress into the neck and produce surgical emphysema, because the visceral layers of the deep cervical fascia are continuous with the mediastinum, thus avoiding a pneumothorax and a physiologic tamponade. Pneumopericardium develops in the same way as pneumomediastinum. An elevated pressure in the deep cervical fascia would be required to redirect air towards other planes, which could explain why pneumopericardium is encountered so infrequently. 11 Mechanical ventilation seldom causes isolated pneumopericardium; it is usually accompanied by pneumomediastinum. 12 Most previously reported cases of pneumomediastinum have occurred in patients on high-flow positive pressure ventilation, such as bilevel positive airway pressure, or after invasive mechanical ventilation. Our patient was on a non-rebreathing mask throughout the hospital stay, with oxygen flow tapered on improvement, and hence barotrauma was ruled out.
Also, no other causative factor apart from COVID-19 related lung injury was present in our patient. Spontaneous pneumopericardium is rarer than pneumomediastinum, and so far only a few cases have been reported in COVID-19, 13 but most of them have been related to barotrauma and had a fatal outcome. To the best of our knowledge, this is the first case of COVID-19 related pneumomediastinum synchronous with pneumopericardium and surgical emphysema, unrelated to barotrauma, with a favourable outcome, as shown in comparison to other cases in Table 1. In a case series of 12 patients by Juárez-Lloclla et al., four of the twelve patients had pneumomediastinum, pneumopericardium, and surgical emphysema, with pneumothorax in two of these four patients; all of these patients succumbed to their illness. 14 In contrast, our patient improved with conservative management without the need for mechanical ventilation and was discharged on room air. In severe COVID-19 pneumonia, diffuse alveolar injury due to the hyperinflammatory process is common and may make the alveoli more prone to rupture, especially in patients with persistent coughing. Although the exact mechanism of spontaneous pneumomediastinum and pneumopericardium with surgical emphysema is unknown, increased alveolar pressure leads to a pressure gradient between the alveoli and the lung interstitium; this pressure difference can lead to alveolar rupture, thereby causing escape of air into the interstitium. Once the air is in the lung interstitium, it flows towards the hilum and the mediastinum along a pressure gradient between the lung periphery and the mediastinum. 11 Nevertheless, in the context of elevated pressures, because of continuity, spontaneous pneumopericardium can still be observed, most frequently synchronous with pneumomediastinum.
Figure 3. Axial non-contrast chest CT in the lung window reveals: 3(a) resolution of subcutaneous emphysema (blue arrows); 3(b) resolution of pneumomediastinum (yellow arrow); and 3(c) and 3(d) resolution of pneumopericardium (red arrow), with persistence of typical COVID changes in the lung.
Despite the tightness of the pericardium, support of the pericardial reflections is weak at the venous sheaths, which could present a gateway for air. 15 Further studies are needed to discern the risk factors predisposing to spontaneous alveolar air leak and to find measures to prevent these complications, which can be fatal.
Conclusion
SARS-CoV-2 is a new addition to the list of causes of pneumomediastinum, pneumopericardium, and surgical emphysema. It is important to look for complications such as surgical emphysema and pneumomediastinum whenever a patient with COVID-19 pneumonia does not improve on appropriate therapy. Our patient responded to a conservative approach with oxygen therapy and had a favourable outcome.
2022-07-01T13:46:16.724Z | 2022-06-30T00:00:00.000Z | 250149999 | s2orc/train | An Eye Tracking and Event-Related Potentials Study With Visual Stimuli for Adolescents Emotional Issues
Background Psychological issues are common among adolescents and have a significant impact on their growth and development. However, the underlying neural mechanisms of viewing visual stimuli in adolescents are poorly understood. Materials and Methods This study applied the Chinese version of the DSM-V self-assessment scales to evaluate 73 adolescents' psychological characteristics for depressive and manic emotional issues. Combined with eye-tracking and event-related potentials (ERP), we explored the characteristics of their visual attention and neural processing mechanisms while they freely viewed positive, dysphoric, threatening, and neutral visual stimuli. Results Compared to controls, adolescents with depressive emotional tendencies showed more concentrated looking behavior, as indexed by the fixation distribution index, whereas adolescents with manic emotional tendencies showed no such trait. ERP data revealed that individuals with depressive tendencies showed lower arousal levels toward emotional stimuli in the early stage of cognitive processing (decreased N1 amplitude) and prolonged reaction times (increased N1 latency) compared with the control group. We found no significant difference between the manic group and the control group. Furthermore, the depression severity scores of the individuals with depressive tendencies were negatively correlated with the total fixation time toward positive stimuli, negatively correlated with the fixation distribution index toward threatening stimuli, and positively correlated with the mean N1 amplitudes while viewing dysphoric stimuli. Also, for the individuals with depressive tendencies, there was a positive correlation between the mean N1 amplitudes and the fixation time in the area of interest (AOI) while viewing dysphoric stimuli. For the individuals with manic tendencies, the manic severity scores were positively correlated with the total fixation time toward positive stimuli. However, no significant correlations were found between the manic severity scores and N1 amplitudes, or between N1 amplitudes and eye-tracking output variables. Conclusion This study proposes the application of eye-tracking and ERP to provide better biological evidence of altered neural processing of emotional stimuli in adolescents with emotional issues.
INTRODUCTION
Adolescence is a unique lifespan period when individuals frequently encounter major life transitions while their minds are not yet mature. At the same time, they are burdened by academic and interpersonal pressure, which can easily breed emotional issues. Emotional issues are highly prevalent and debilitating conditions characterized by social fears, worries, depression, and mania in adolescence (1)(2)(3), and they significantly affect adolescents' growth and development. Data show that about 2-8% of adolescents are affected by emotional issues, and the number is increasing year by year (4). At the same time, the early detection of emotional issues in adolescents currently faces many challenges. First, the information obtained from adolescents is subjective. Because most adolescents have no scientific understanding of emotional issues, they often feel ashamed and refuse to talk to doctors during visits, which affects diagnosis and treatment. Second, there are no quantitative diagnostic criteria: usually, with the assistance of a questionnaire, the final diagnosis relies on the clinical experience of the doctor, which is strongly subjective (5). Third, teenagers are in a rebellious period and may not cooperate with or precisely follow the doctor's diagnostic procedures, which increases the difficulty of diagnosis. Therefore, it is of great importance to find an objective and quantitative method to promote the detection of emotional issues.
Negative childhood experiences, especially emotional issues, play significant roles in the development of specific biases in information processing (6). As early as 1976, the American psychologist Beck (7) proposed that affective disorders arise from negative cognitive schemas. Individuals with emotional disorders show more negative cognitive biases in information processing, including attention, memory, interpretation, and many other aspects, which aggravate their negative emotional state. Many information-processing studies have shown that, when processing external information, individuals with depressive tendencies tend to select information consistent with their negative cognitive schemas; that is, there is a negative cognitive processing bias. Moreover, mania or hypomania (cardinal symptoms of bipolar disorder) is associated with dysregulated emotional responses (8)(9)(10).
Recent research has made significant progress in elucidating the association between emotional issues and biased attention. Eizenman et al. (11) used eye-tracking technology and found that depressed individuals spent significantly more time looking at negative pictures than non-depressed controls. Another study of college students found that mania proneness was positively correlated with an attentional bias toward happy faces (12). Despite these initial results, the patterns of attentional bias in adolescents prone to emotional issues, as well as prior to the formal onset of an emotional disorder, remain inconclusive.
Previous studies have often used primary eye-movement indices to describe different emotional problems, such as total fixation duration, total fixation duration within areas of interest, and the fixation distribution index (13,14). Liu et al. (15) considered total fixation duration in a free-viewing task and found that manic patients had shorter total fixation durations on sad and neutral images than healthy controls, reflecting an avoidance of sad expressions; however, there was no significant difference for happy images between the manic and control groups. Kim et al. (16) calculated the total fixation duration within areas of interest (eyes, nose, mouth) and found that children with bipolar disorder spent less time looking at the eyes than healthy controls, regardless of the facial emotion (anger, fear, sadness, happiness, neutral). Philip et al. (13) assessed fixation distribution using a dispersion coefficient and found that the spatial distribution of fixations in cannabis-induced psychosis was more concentrated than in first-episode schizophrenia and healthy controls.
To delineate the cortical response directly related to stimulus processing in the free-viewing picture paradigm, event-related potentials (ERPs) are an effective technique for studying cognitive information processes in real-time brain activity at millisecond resolution. Combined with self-report measures, ERPs have been shown to independently predict depression outcomes (17)(18)(19). ERP components have been confirmed to be related to specific aspects of information processing.
The N1 is a specific ERP component that has been systematically related to the processing of emotional stimuli (20). This early negative component is sensitive to physical stimuli and is associated with attentional processing (21); it peaks at around 130-200 ms after stimulus onset, mainly at occipitotemporal electrodes (22)(23)(24). Some studies have found that N1 amplitude is associated with emotional expression, while others have not found this sensitivity to emotion (25).
The N1 component is commonly associated with emotional issues, and reductions of the N1 component have been consistently reported in depressed individuals (26,27). Coffman et al. (26) adopted an "auditory oddball" paradigm and found that depressed individuals, with a mean age of 50 years, had lower N1 amplitudes and longer N1 latencies than controls before treatment, but after treatment there was no significant difference. Urretavizcaya et al. (27) also adopted an oddball paradigm in the auditory modality and found that depressed adult patients showed significantly longer latencies for N1, N2, and P3 than healthy controls. O'Donnell et al. (28) examined the N100 component of ERPs elicited during an auditory discrimination task in which participants pressed keys in response to infrequent 1,500 Hz tones interspersed in a series of 1,000 Hz tones, and reported no significant difference between bipolar disorder patients and healthy controls in the N1 component. However, little research has examined the N1 component for visual stimuli in adolescents with emotional issues.
In order to explore potential systematic changes in eye-tracking and ERP components during relatively long stimulus presentations, we adopted positive, dysphoric, threatening, and neutral stimuli and analyzed the eye-tracking gaze characteristics and EEG characteristics of adolescents with different emotional issues. To further explore whether eye-tracking measures and ERP amplitudes elicited by emotional stimuli prospectively predict the severity of emotional issues, we investigated the relationships among eye-tracking measures, ERP components, and scale scores of self-report measures, with the aim of supporting early detection and prevention of emotional issues. Based on the mood-congruent attentional bias posited by Beck's cognitive model of depression (7) and previous research, we hypothesized that adolescents with depressive tendencies pay more attention to dysphoric stimuli and less attention to positive stimuli. Following Gruber et al. (12), who recruited emerging adults (ages 18-25) prone to hypomania and found that hypomania was positively associated with attention to happy faces, we hypothesized that adolescents with manic tendencies are positively associated with positive stimuli. Research on the N1 component for depressive and manic tendencies has not yet reached a consistent conclusion; therefore, it was explored further in this experiment.
Participants
A total of seventy-three male adolescents (age range 14-17 years, Mean age = 16.14, SD = 0.65) took part in the present study and signed the written informed consent form. All individuals were recruited through advertisements and by word of mouth from local high schools. Inclusion criteria for the current study were as follows: (1) Right-handed; (2) Age 11-17 years old; (3) No medication-taking that affects nervous system function within 2 weeks, no alcohol dependence or other drug dependence; (4) Normal vision or corrected vision, no color blindness or other ophthalmic diseases; and (5) No facial expression recognition disorder and normal language communication. Exclusion criteria for the current study were as follows: (1) Individuals with psychotic symptoms; (2) developmental delay; (3) learning disabilities; and (4) recurrent or chronic pain.
The DSM-V online self-assessment scales were used to identify participants who met the criteria for inclusion into one of three groups: (a) individuals with depressive tendencies, who met the criteria of the Chinese version of the DSM-V Level 2-Depression-Child Age 11-17 scale; (b) individuals with manic tendencies, who met the criteria of the Altman Self-Rating Mania Scale (ASRM); and (c) healthy controls, who had no psychiatric or neurological disorders. In the eye-tracking study, there were 16 individuals with depressive tendencies (mean age 16.13 years), 19 individuals with manic tendencies (mean age 16.05 years), and 20 individuals in the healthy control group (mean age 16.25 years). For the earlier acquisitions, the EEG system had not yet been set up, so only eye-tracking data were collected. In the synchronized EEG and eye-tracking study, there were 10 individuals with depressive tendencies (mean age 16.33 years), 12 individuals with manic tendencies (mean age 15.67 years), and 16 individuals in the healthy control group (mean age 16.00 years). Group characteristics such as age and scale scores are listed in Table 1 for all participant groups.
Procedure
The whole experiment was carried out in a quiet and stable electromagnetic environment with constant illumination. After entering the experimental room, participants read the study instructions, signed the consent form, and then completed the online Diagnostic and Statistical Manual of Mental Disorders (DSM-V) scales, including the DSM-V Level 2-Depression-Child Age 11-17 scale and the ASRM. The main tester then measured head circumference, selected a suitable electrode cap, placed and adjusted the electrode locations with the Cz electrode as the reference (located at the center of the EEG cap), and injected potassium chloride solution into the electrode sponges to reduce the electrode impedance until it dropped below 50 kΩ (29,30). Subjects sat 60 cm in front of the eye-tracker system. Camera adjustments were made to best capture the participant's eye, and a 9-point calibration was completed to confirm that the eye tracker was recording the line of gaze within 1.25° of visual angle. After successful calibration and validation of the eye-tracking position, experimental stimuli were presented in the same pseudo-randomized sequence of images. The process took approximately 10 min to complete. Finally, small prizes such as pens and notebooks were distributed to the subjects.
Depressive Symptom Severity
Participants completed the Chinese version of the DSM-V Level 2-Depression-Child Age 11-17 scale, which comprises 14 items rated on a five-point scale from "never" to "always," yielding a total score ranging from 14 to 70. Higher scores represent higher levels of depression, and the measure reflects the extent to which participants suffered from depressive symptoms within the past 2 weeks. A depression scale score greater than 31 was the selection criterion for the depressed group.
Manic Symptom Severity
The ASRM is a short, 5-item self-assessment questionnaire that helps assess the severity of manic or hypomanic symptoms, with a total score ranging from 0 to 20 (31). Although the scale is concise, it is compatible with the CARS-M, YMRS, and DSM-IV diagnostic criteria and can be used effectively as a screening and diagnostic tool. Participants were instructed to endorse only one of the five statements in each item, rated in increasing severity from 0 (not present) to 4 (present to a severe degree), that best described their mood or behavior during the past week. A mania scale score greater than five was the selection criterion for the manic group.
Materials
In this study, a total of 72 images were selected from the International Affective Picture System (IAPS), including 12 positive stimuli, 12 dysphoric stimuli, 12 threatening stimuli, and 36 neutral stimuli. The images from the IAPS provide standardized emotional stimuli and have been used widely in neuropathology research. All images used in this experiment had the same size and resolution of 1024 × 768 pixels, as shown in Figure 1.
The 72 images that we used are from the IAPS image set of Kellough et al. (32). These images have been systematically evaluated for arousal and valence, and their ratings are consistent. In addition, IAPS images are widely used in human emotion research worldwide and make an important contribution to the evolving database of emotional research (33).
Apparatus
In this study, eye movements were recorded from the left eye with our independently developed EyeCatch Bar desktop eye-tracker system, which collected gaze data at 41 Hz (coordinates were sampled every 24.4 ms) with an accuracy of 1.25°. Participants' eyes were kept at a distance of 60 cm from the eye tracker. At the beginning of each trial, a fixation cross was presented in the center of the screen for 1,000 ms, followed by the presentation of a stimulus for 6,000 ms. Each block contained four trials with images of a specific emotion and four trials with neutral images. The neutral images were added to obscure the experimental purpose and reduce the chance that subjects would guess the experimental intention, which could affect the collected data. A total of 9 blocks were presented in a pseudo-randomized order.
Before the experiment, each subject underwent eye-movement calibration. Nine dots had to be fixated one after the other: first in the center of the screen, then at the top, bottom, left, right, upper left, upper right, lower left, and lower right, and finally back to the center. Afterward, the same order was followed for verification. The recorded eye-tracking data were analyzed using custom analysis code in Python.
We recorded continuous EEG data from 64 channels (HydroCel Geodesic Sensor Net, Electrical Geodesics, Inc., Eugene, OR, United States) with Net Station EEG software. All electrodes were physically referenced to Cz (fixed by the EGI system) and then offline re-referenced to the mean of the left and right mastoids. The impedance of all electrodes was kept below 50 kΩ during data acquisition. The EEG was amplified with a bandpass of 0.1-70 Hz (half-power cutoff) and digitized online at 250 Hz.
The collected EEG data were preprocessed in the EEGLAB environment, a MATLAB toolbox for processing EEG data. All EEG signals were filtered with a 0.1 Hz high-pass filter and a 30 Hz low-pass filter to reduce noise. After ICA, the ADJUST plugin in EEGLAB was used to discard artifacts due to eye movements and muscle activity. The data were then segmented relative to stimulus onset (−200 to 800 ms), and the pre-stimulus baseline (−200 to 0 ms) was subtracted. Trials with excessive movements or eye blinks (voltage exceeding 100 µV) were automatically rejected. We computed the grand-averaged ERP waveforms for the different groups.
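The authors performed these steps in EEGLAB/MATLAB; the sketch below only mirrors the stated steps in MNE-Python as an analogous illustration. The file name, mastoid channel labels, and ICA component selection are placeholders/assumptions, not details from the paper.

```python
import mne

# Hypothetical recording; the real pipeline used Net Station + EEGLAB.
raw = mne.io.read_raw_egi("subject01.mff", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)            # 0.1-30 Hz band-pass

mastoids = ["E_left", "E_right"]               # placeholders for the net's mastoid channels
raw.set_eeg_reference(mastoids)                # offline re-reference to mastoid mean

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
# Ocular/muscular components would be marked for exclusion here,
# analogous to the ADJUST plugin step described in the text.
ica.apply(raw)

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                    baseline=(-0.2, 0.0),
                    reject=dict(eeg=100e-6),   # drop trials exceeding 100 µV
                    preload=True)
evoked = epochs.average()                      # per-condition/grand average ERP
```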
Eye-Tracking Analysis
Three primary eye-tracking indices were computed for the free-viewing task: (a) total fixation time (the sum of the durations of all fixations on a given stimulus); (b) fixation time in the area of interest (AOI) (the sum of the durations of fixations on the most representative emotional area); and (c) the fixation distribution index (13,14). To analyze the patterns of attention distribution, or clustering of fixations, in adolescents with emotional issues when looking at different emotional images, the fixation distribution index was assessed. The value of alpha denotes the degree of fixation dispersion; a higher alpha denotes a more dispersed distribution (13).
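A minimal Python sketch of these three indices is shown below. The paper's exact definition of the fixation distribution index (alpha) is not given in this excerpt, so the mean distance of fixations from their centroid is used here only as an illustrative proxy for dispersion; the function and variable names are assumptions.

```python
import numpy as np

def eye_tracking_indices(fixations, aoi):
    """fixations: list of (x, y, duration_ms); aoi: (x_min, y_min, x_max, y_max).

    Returns total fixation time, AOI fixation time, and a dispersion value.
    The dispersion here is a stand-in for the paper's alpha index.
    """
    xy = np.array([(x, y) for x, y, _ in fixations], dtype=float)
    dur = np.array([d for _, _, d in fixations], dtype=float)

    total_time = dur.sum()
    in_aoi = ((xy[:, 0] >= aoi[0]) & (xy[:, 0] <= aoi[2]) &
              (xy[:, 1] >= aoi[1]) & (xy[:, 1] <= aoi[3]))
    aoi_time = dur[in_aoi].sum()
    dispersion = np.linalg.norm(xy - xy.mean(axis=0), axis=1).mean()
    return total_time, aoi_time, dispersion
```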
We used descriptive and graphical methods to test whether the data met a normal distribution. Eye-movement measures were then analyzed with a 3 Group (control, depressed, manic) × 4 Emotional Stimulus (positive, dysphoric, threatening, neutral) analysis of variance (ANOVA), in which Group was a between-subject factor and Emotional Stimulus was a within-subject factor. Significant interactions were analyzed using simple effects models. The significance level was set at 0.05.
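The mixed ANOVA could be run, for example, with the pingouin package; the sketch below uses synthetic stand-in data (the column names, group sizes, and values are all hypothetical) purely to illustrate the 3 × 4 mixed design, not the authors' software or results.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: one row per subject x stimulus category.
rng = np.random.default_rng(0)
subjects = [f"s{i:02d}" for i in range(12)]
groups = {s: ["control", "depressed", "manic"][i % 3] for i, s in enumerate(subjects)}
stimuli = ["positive", "dysphoric", "threatening", "neutral"]
df = pd.DataFrame([{"subject": s, "group": groups[s], "stimulus": st,
                    "total_fixation": rng.normal(3000, 400)}
                   for s in subjects for st in stimuli])

# 3 (Group, between-subject) x 4 (Stimulus, within-subject) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="total_fixation", within="stimulus",
                     subject="subject", between="group")
print(aov.round(3))
```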
Event-Related Potential Analysis
Statistical analysis was performed using a semi-automatic peak-picking program with ERPLAB tools to calculate amplitude and latency. Time windows locked to each peak were selected.
Correlation Analysis
In addition, relationships between the DSM-V scale scores (the DSM-V Level 2-Depression-Child Age 11-17 scale and the ASRM) and the eye-tracking measures (total fixation time, fixation time in the AOI, fixation distribution) or ERP components (N1 amplitudes) were assessed with Spearman correlation analysis. Similarly, the correlations between the eye-tracking measures and N1 amplitudes were analyzed with Spearman correlation analysis.
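As a brief illustration, a Spearman correlation between scale scores and an eye-tracking measure can be computed with SciPy; the arrays below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic per-participant values for one group (assumed for illustration).
rng = np.random.default_rng(1)
depression_scores = rng.integers(32, 60, size=16)        # scale scores > 31
fixation_time_positive = rng.normal(2500, 500, size=16)  # total fixation time, ms

rho, p = spearmanr(depression_scores, fixation_time_positive)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```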
We also analyzed N1 latency and found a significant main effect of the group [F (2 , 35)
Associations Between Depression Scale Scores and Eye-Tracking Measures Among Participants With Depression
In the depressed group, analyses showed a significant negative correlation between depression scale scores and total fixation time while looking at the positive stimuli (r = −0.518, P = 0.040), but not the threatening stimuli (r = −0.006, P = 0.983), the neutral stimuli (r = −0.121, P = 0.656), or the dysphoric stimuli (r = −0.235, P = 0.380) (Figure 4A). We further found that depression scale scores were negatively correlated with AOI fixation time while looking at the positive stimuli (r = −0.563, P = 0.029) (Figure 4B
Associations Between Mania Scale Scores and Eye-Tracking Measures Among Participants With Mania
Analyses revealed a significant correlation between the mania scale scores and the total fixation time while looking at the positive stimuli (r = 0.595, P = 0.007) (Figure 4D), but not the threatening stimuli (r = 0.214, P = 0.379), neutral stimuli (r = 0.279, P = 0.248) or dysphoric stimuli (r = 0.191, P = 0.433). We further found that mania scale scores were positively correlated with AOI fixation time while looking at positive stimuli (r = 0.595, P = 0.009) (Figure 4E), but not with the threatening stimuli (r = 0.267, P = 0.270), neutral stimuli (r = 0.254, P = 0.293) or dysphoric stimuli (r = 0.283, P = 0.241).
No associations were found between the mania scale scores and the alpha value of fixation distribution while looking at the different stimuli among adolescents with mania.
Associations Between Scale Scores and N1 Brain Activity
Analysis revealed that depression scale scores were significantly positively correlated with the mean N1 amplitude while looking at the dysphoric stimuli (r = 0.647, P = 0.043) (Figure 5A), but there were no significant correlations with the mean N1 amplitude while looking at the neutral stimuli (r = −0.202, P = 0.575), positive stimuli (r = −0.514, P = 0.193) or threatening stimuli (r = 1.05, P = 0.773).
However, no significant correlations were found between mania scale scores and the mean N1 amplitude while looking at the various stimuli for manic emotional tendencies.
Associations Between N1 Brain Activity and Eye-Tracking Measures
A positive correlation was found between the mean N1 amplitude and the AOI fixation time while looking at the dysphoric stimuli in the depressed group (r = 0.694, P = 0.026) (Figure 5B), but not for the neutral stimuli (r = 0.003, P = 0.994), positive stimuli (r = 0.458, P = 0.215), or threatening stimuli (r = 0.417, P = 0.230).
For individuals with mania, we found no significant correlation between eye-tracking indices and the mean N1 amplitudes while viewing various stimuli.
DISCUSSION
In this study, we explored group differences in eye-tracking characteristics, fixation dispersion, and ERP components with regard to different emotional visual stimuli in the free-viewing task for adolescents with emotional issues. Furthermore, we examined the associations among scale scores, eye-tracking measures, and ERP components across groups.
The eye-tracking results suggested no significant differences among the manic, depressed, and control groups in total fixation time or AOI fixation time. However, there was a significant difference in fixation distribution between the depressed group and healthy controls, but not between the manic group and healthy controls. These findings suggest that total fixation duration in the free-viewing task did not reflect differences between depressive-prone and manic-prone adolescents, whereas the distribution of fixations was sensitive for the depression group. However, our study is inconsistent with the findings of Liu et al. (15), who found that manic patients had a shorter total fixation duration on sad and neutral images than healthy controls, reflecting an avoidance of sad expressions; the discrepancy is possibly due to differing levels of mania, since Liu et al. studied manic patients with an average episode duration of 84 months, whereas we studied adolescents with manic tendencies screened by a scale. Individuals with depressive tendencies showed more concentrated looking behavior, with smaller fixation dispersion than the control group, regardless of the type of emotional stimuli. These results are consistent with the study of Nouzová (38), in which major depressive disorder (MDD) was associated with smaller fixation dispersion in a free-viewing eye-movement task. In addition, that study also found that smaller fixation dispersion was associated with lower verbal intelligence and verbal memory, a topic that we would like to address in the future. Previous studies (39,40) have already shown that individuals with depressive tendencies reduce their fixation duration toward positive stimuli from different image sets relative to healthy controls. Our work further found a quantitative linear relationship: higher depression scale scores were related to less total fixation time and less AOI fixation time toward positive, but not threatening or dysphoric, stimuli. According to existing theory, one possibility is that individuals with depressive tendencies lack the motivation to maintain attention toward positive stimuli (40,41). Ellis et al. (41) used the Beck Depression Inventory-II (BDI-II) to assess symptoms of depression and designed an eye-tracking task involving viewing a 2 × 2 array of emotional words; individuals with BDI-II scores higher than 20 (reaching the threshold for depression) maintained shorter gazes on positive words relative to controls. Another possibility is that the mood-congruent attentional bias posited by Beck's cognitive model of depression (7) may extend to deficits toward positive affect (39,40,42). Shane et al. found that when participants were instructed to view an emotional picture (positive or negative) paired with a neutral image, the depressed group showed less attention to positive stimuli than the healthy control group (42).
The ERP results confirmed that the N1 peak of individuals with depressive tendencies was significantly lower and slower than that of controls. We found that activation in both groups was most pronounced in the parietal regions of the brain, and that individuals with depressive tendencies exhibited relatively longer latencies and relatively lower arousal levels in the early stages of information processing than the healthy control group. This is consistent with previous studies (43,44) suggesting that prolonged N1 latency may indicate a slower automatic arousal function in depression. Fotiou et al. adopted pattern-reversal visual evoked potentials (PR-VEPs), using checkerboard-reversal pictures as stimuli and recording potential changes in the visual cortex in a sample of depressed patients; they found that depressed individuals had significantly longer N1 latencies than non-depressed individuals (44). According to (45), the decreased N1 amplitude may reflect impaired cognitive processing of emotional stimuli in depression at an early stage. In that study (45), Jiu Chen adopted a visual emotional oddball paradigm in which participants needed to quickly respond to a deviant face (happy or sad) among standard faces (neutral) by pressing a button; the depressed group had lower N170 amplitudes when identifying happy, neutral, and sad faces than healthy controls. Therefore, these findings provide electrophysiological evidence that a processing bias toward emotional stimuli exists in depression at the early perceptual processing stage.
Based on the correlation between the N1 component and the DSM-V online self-assessment scales, depression scale scores were positively correlated with N1 amplitude toward dysphoric stimuli but not positive or threatening stimuli. This result may imply that individuals with higher depression scale scores allocated more cognitive resources toward dysphoric stimuli. According to (43), individuals with depressive tendencies are characterized by sensitivity to negative stimuli and impaired attention to positive stimuli at an early stage of the ERP signal. Therefore, individuals with depressive tendencies allocate more cognitive resources to process dysphoric stimuli, which would further affect their behavior and mood. In the long run, the vicious circle would seriously affect their physical and mental health (46).
The association between eye-tracking measures and ERP components in emotional issues has not yet been studied in the literature we reviewed. Our study yielded promising results: there were positive associations between AOI fixation time and N1 amplitudes toward dysphoric stimuli in individuals with depressive tendencies, but not toward positive or threatening stimuli. This indicates that the longer the AOI fixation time on dysphoric stimuli, the more cognitive resources were allocated to them. Researchers have shown that the attentional bias of adolescents with depressive tendencies is a product of a ruminative cognitive style in which they dwell on dysphoric content (47,48). Also, individuals with depressive tendencies cannot disengage from dysphoric content once focused on it, which is more pronounced at higher levels of rumination (47).
The ERP results confirmed no significant difference between the manic group and the control group in the N1 component. These findings suggest that, relative to controls, individuals with manic tendencies have relatively intact initial information processing. This is consistent with a previous study (28) of the N100 component in patients with manic or mixed bipolar disorder during an auditory discrimination task in which participants pressed keys in response to infrequent 1,500 Hz tones interspersed in a series of 1,000 Hz tones; patients with bipolar disorder, which also involves emotional issues, exhibited no reductions in the N100 component compared to healthy controls. Our findings also confirmed that higher mania scale scores were associated with increased total fixation time and AOI fixation time toward positive stimuli. These results are consistent with the study of Gruber et al. (12), who adopted a dot-probe task to investigate attentional bias toward emotional faces and found that mania proneness was positively associated with an attentional bias toward happy, but not angry or fearful, faces. However, our study is inconsistent with the findings of Rock et al. (49), who found no group difference in attentional bias toward positive stimuli between high-risk and low-risk undergraduates for bipolar disorder, possibly because their study did not use emotional image stimuli; they employed a dot-probe task with emotional words and a different mania assessment instead of the DSM-V. In addition, no correlation was found between mania scale scores and N1 amplitude, or between N1 amplitude and eye-tracking output variables. Therefore, we can draw no conclusion about the association between mania intensity and electrophysiological characteristics, or between visual behavior and electrophysiological characteristics.
Our study revealed the correlation between participants' attention and the persistence of emotional stimuli. Recent studies indicate that training anxious individuals to divert their attention from threatening stimuli reduces symptoms (50,51). In addition, some studies suggest that the bias toward negative stimuli can be changed through training, which may have clinical effects (52). Therefore, our research can provide behavioral and electrophysiological targets for follow-up interventions to help adolescents with emotional issues recover.
We acknowledge several study limitations. First, the sample size was small. However, it is worth noting that, despite this limitation, our findings did uncover potential associations among eye-movement measures, ERP amplitudes, and emotional scale scores. Second, we assessed depressive and manic severity using DSM-V online self-assessment scales, with no formal clinical interviews to determine whether participants met clinical diagnostic criteria. Future studies should replicate this study in adolescents with depressive or manic tendencies and extend it to patients diagnosed with emotional disorders. Third, participants were limited to male adolescents, so it is not clear whether gender differences affect attentional bias. Gruber et al. (12) found that, in a sample consisting mainly of females, emerging adults with hypomanic tendencies still had an attentional bias toward positive stimuli, which is consistent with our finding that male adolescents with manic tendencies show an attentional bias toward positive stimuli. Fourth, because no other scale data were available (e.g., HAMD, HAMA, Young's Mania Scale), the severity of emotional issues was assessed only with the DSM-V online self-assessment scales. We will therefore include ratings from other scales in future research.
CONCLUSION
Our findings show that adolescents with depressive tendencies exhibit concentrated looking behavior, with reduced N1 amplitude and prolonged N1 latency, reflecting low and delayed arousal during the free-viewing task. Adolescents with depressive tendencies also showed impaired attention to positive stimuli and sensitivity to dysphoric stimuli. In contrast, the behavior of adolescents with manic tendencies coincides with heightened positive emotional responses. These findings provide preliminary support for the co-occurrence of attentional deviations, their underlying neural mechanisms, and self-reported emotional issues, and specifically help to clarify the interaction or potential causality among different emotional states, brain cognition, and attentional bias. Future work in larger samples is warranted to continue to unpack the nature of these cognitive processes. A better understanding of attentional processing and its neural mechanisms may be of great significance for the maintenance of and recovery from emotional disorders.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the First People's Hospital of Hefei. Written informed consent to participate in this study was provided by the participants or their legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
BH conceptualized the study and research design. QW and XW designed the study, conducted the main data analysis, and drafted the manuscript. XW, RD, FZ, and SY collected the data. XW, RD, and SY managed the literature. All authors provided revisions to the final version and approved the submission. | v2 |
2014-10-01T00:00:00.000Z | 2012-02-06T00:00:00.000Z | 13229819 | s2orc/train | Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices
This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks.
Introduction and Motivation
Most robots and automation systems rely on processing units to control their behavior. Such processing units can be embedded processors (such as microcontrollers) or general purpose computers (such as PCs) with specialized Input/Output (I/O) accessories. One common practice is the use of USB devices with several I/O options connected to a PC. Another approach consists of using a microcontroller unit that runs a software algorithm to control the system. Frequently this microcontroller is connected to a computer to monitor and set parameters of the running system.
• Camera: can be used with a variety of algorithms for visual odometry, object recognition, robot attention (the ability to select a topic of interest [5]), obstacle detection and avoidance, object tracking, and others;
• Compass: can be used to sense the robot's direction of movement. The system can work with one or two encoders. If only one encoder is used, the compass is used to guarantee that the robot is going in the expected direction and to control the desired turning angles;
• GPS: can be used to obtain the robot position in outdoor environments, altitude and speed;
• Accelerometer: can be used to detect speed changes and consequently if the robot has hit an object in any direction (a virtual bumper). It can also detect the robot's orientation. It is also possible to use Kalman filtering to do sensor fusion of the camera, encoders and accelerometer to get more accurate positioning;
• Internet: WiFi or other Internet connection can be used to remotely monitor the robot and send commands to it. The robot can also access a cloud system to aid some decision making process and communicate with other robots;
• Bluetooth: can be used to exchange information with nearby robots and for robot localization;
• Bluetooth audio: As the standard audio input and output are used for the control system, a bluetooth headset can be paired with the mobile device, allowing the robot to receive voice commands and give synthesized voice feedback to the user. The Android voice recognizer worked well for both English and Portuguese. The user can press a button in the bluetooth headset and say a complex command such as a phrase. The Android system will then return a vector with the most probable phrases that the user has said;
• ROS: The Robot Operating System (ROS) [6] from Willow Garage is already supported in mobile devices running Android using the ros-java branch. Using ROS and the system described in this article, a low cost robot can be built with all the advantages and features of ROS.
Contributions
The main contributions of this work are:
• A novel system for controlling actuators using audio channels
• A novel system for reading sensors information using audio channels
• A closed loop control architecture using the above-mentioned items
• Application of a camera and laser based distance measurement system for robotics
• A low cost mobile robot controlled by smartphones and mobile devices using the techniques introduced in this work
Organization
This paper is structured as follows: Section 2 describes previous architectures for mechatronics systems control. Section 3 introduces the new technique. Section 4 presents the experimental results of the proposed system. Section 5 describes a case study with an application of the system to build a low cost mobile robot and in Section 6 are the final considerations.
Related Work
This section reviews some of the most relevant related works that uses mobile devices to control robots and their communication interfaces.
Digital Data Interfaces
Santos et al. [4] analyze the feasibility of using smartphones to execute robots' autonomous navigation and localization algorithms. In the proposed system, the robot control algorithm is executed on the mobile phone and the motion commands are sent to the robot using bluetooth. Their experiments were made with mobile phones with processor clocks of 220 MHz and 330 MHz, and they conclude that executing complex navigation algorithms on these devices is feasible and robust, even under soft real-time requirements. The tested algorithms are well-known: potential fields, particle filter and extended Kalman filter.
Another example of the use of smartphones to control robots is the open source project Cellbots [1] which uses Android based phones to control mobile robots. The project requires a microcontroller that communicates with the phone via bluetooth or serial port and sends the electrical control signals to the motors. The problem is that not all mobile devices have bluetooth or serial ports. Moreover, in some cases the device has the serial port available only internally, requiring disassembly of the device to access the serial port signals. When using bluetooth, the costs are higher because an additional bluetooth module must be installed and connected to the microcontroller.
The work of Hess and Rohrig [7] consists of using mobile phones to remotely control a robot. Their system can connect to the robot using TCP/IP interfaces or bluetooth. In the case of the TCP/IP sockets, the connection to the robot is made using an already existing wireless LAN (WiFi) infrastructure.
Park et al. [8] describe user interface techniques for using PDAs or smartphones to remotely control robots. Again, the original robot controller is maintained and the mobile device is used simply as a remote control device. Their system commands are exchanged using WiFi wireless networks.
Analog Audio Interfaces
One interesting alternative is using a dedicated circuit to transform the audio output of the mobile device into a serial port signal [9], but the problem with such an approach is that only unidirectional communication is possible, and still, as in the other cases, a microcontroller is needed to decode the serial signal and execute some action.
On the other hand, the telecommunications industry frequently uses the Dual Tone Multi Frequency (DTMF) system to exchange remote control commands between equipment. Section 3.1 contains a description of the DTMF system. The system is better known in telephony for sending the digits that a caller wants to dial to a switching office. Using DTMF to control robots is not new. There are some recent projects that use DTMF digit exchange to remotely control robots: Patil and Henry [10] used a remote mobile phone to telecommand a robot. DTMF tones are sent from the mobile phone to the remote robot's phone, decoded by a specific integrated circuit, and the binary output is connected to an FPGA that controls the robot. Manikandan et al. [11] proposed and built a robot that uses two cell phones. One phone is placed in the robot and another acts as a remote control. The DTMF audio produced by the keys pressed in the remote control phone is sent to the phone installed in the robot, and the audio output of this phone is connected to a DTMF decoder via the earphone output of the cell phone. The 4-bit DTMF output is then connected to a microcontroller that interprets the codes and executes the movements related to the keys pressed in the remote control phone. Sai and Sivaramakrishnan [12] used the same setup, where two mobile phones are used, one located at the robot and another used as a remote control; the difference is that the system is applied to a mechanically different type of robot. Naskar et al. [13] presented a work where a remote DTMF keypad is used to control a military robot. Some DTMF digits are even used to fire real guns. The main difference is that instead of transmitting the DTMF tones over a phone call, the tones are transmitted using a radio frequency link.
Still on DTMF-based control, Ladwa et al. [14] proposed a system that can remotely control home appliances or robots via DTMF tones over telephone calls. The tones are generated by keys pressed on a remote phone keypad, received by a phone installed in the system under control, and decoded by a DTMF decoder circuit. A microcontroller then executes some pre-programmed action. A similar work is presented by Cho and Jeon [15], where key presses on a remote phone are sent through a telephone call to a receiving cell phone modem. Its audio output is connected to a DTMF decoder chip, which is in turn connected to a robot control board. As the user presses keys on the remote phone, the robot moves forward or backward, or turns, according to the pressed key (a numerical digit that is represented by a DTMF code).
Recently, a startup company created a mobile robot that can be controlled by audio tones from mobile phones [16]. The system is limited to controlling two motors and does not have any feedback or sensor reading capability. An alternative is to use camera algorithms such as optical flow to implement visual odometry, but even then, such limitations make it difficult to build a complete mobile robot because of the lack of important sensory information such as bumpers and distance to objects. Also, the information provided on the company website does not make it clear whether the wheel speed can be controlled or which audio frequencies are used.
With the exception of the example in the last paragraph, all other mentioned examples and projects use DTMF tones to remotely control a distant robot over radio or telephone lines. This system, in contrast, uses DTMF tones to control actuators and read sensors in a scheme where the control unit is physically attached to the robot, or near the robot (connected by an audio cable). The advantage is that DTMF tones are very robust to interference and widely adopted, making it easy to find electrical components and software support for dealing with such system.
System Architecture
This section describes the proposed system architecture. Its main advantage is to provide a universal connection system to read sensors and control actuators of mechatronics systems. The data is exchanged using audio tones, allowing the technique to be used with any device that has audio input/output interfaces.
Theoretical Background
The DTMF system was created in the 1950s as a faster alternative to the (now obsolete) pulse dialing system. Its main purpose at that time was to send the digits that a caller wanted to dial to the switching system of the telephone company. Although almost 60 years old, DTMF is still widely used in telecommunication systems and is still included in most new telephone designs [17].
DTMF tones can easily be heard by pressing the keys on a phone during a telephone call. The system is composed of 16 different audio frequencies organized in a 4 × 4 matrix. Table 1 shows these frequencies and the corresponding keys/digits. A valid digit is always composed of a pair of frequencies (one from the table columns and one from the table rows) transmitted simultaneously. For example, to transmit the digit 9, an audio signal containing the frequencies 852 Hz and 1,477 Hz has to be generated. As DTMF was designed to exchange data over telephone lines that can be noisy, the use of two frequencies to uniquely identify a digit makes the system efficient and very robust to noise and to other sounds that do not characterize a valid DTMF digit. In fact, when the DTMF system was designed, the frequencies were chosen to minimize tone pairs occurring in natural sounds [18].
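The frequency-pair principle can be illustrated with a minimal sketch that synthesizes a digit as the sum of its row and column sinusoids (frequencies as in Table 1). This is only an illustration of the encoding; it is not the authors' Android implementation, which relies on pre-recorded tones.

```python
import numpy as np

# DTMF frequency pairs (row Hz, column Hz) for the numeric digits (see Table 1).
DTMF_FREQS = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "0": (941, 1336),
}

def dtmf_tone(digit, duration_s=0.1, fs=8000, amplitude=0.4):
    """Synthesize one DTMF digit as the sum of its two sinusoids."""
    low, high = DTMF_FREQS[digit]
    t = np.arange(int(duration_s * fs)) / fs
    return amplitude * (np.sin(2 * np.pi * low * t) + np.sin(2 * np.pi * high * t))

# Example: the digit 9 is the sum of an 852 Hz and a 1,477 Hz sinusoid.
samples = dtmf_tone("9")
```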
The described robustness of DTMF has led to its use in a variety of current remote automation systems, such as residential alarm monitoring, vehicle tracking systems, and interactive voice response systems such as banks' automated answering menus that allow the user to execute interactive operations like "Press 1 to check your account balance; Press 2 to block your credit card".
The wide adoption and reliability of the DTMF system led the semiconductor industry to develop low cost integrated circuits (ICs) that can encode and decode DTMF signals from and to digital binary digits. The system proposed in this article uses such ICs to transmit and receive information. Both actuators control and sensors data are encoded using DTMF tones. The following sections describe the system design.
Device Control
To control actuators, a mobile device generates a DTMF tone. The tone is decoded by a commercial DTMF decoder chip (such as the MT8870), converting the tone to a 4-bit binary word equivalent to the DTMF input. The decoded output remains present while the DTMF tone is present at the input. The resulting bits can feed a power circuit to control up to four independent binary (on/off) devices such as robot brakes, lights or a pneumatic gripper. Figure 1 shows the basic concept of the system. The audio output from the mobile device can be directly connected to the input of the DTMF decoder, but in some specific cases an audio preamplifier should be used to enhance the audio amplitude. Figure 2 shows a direct current (DC) motor control application where the 4-bit output of the decoder is connected to a motor control circuit (an H-bridge, for example, using the L298 commercial dual H-bridge IC). As 2 bits are required to control each motor, the system can control 2 DC motors independently. Table 2 shows the DTMF digits and corresponding motor states. Note that these states can be different according to the pin connections between the DTMF decoder and the H-bridge. In order to control the DC motor's speed, the mobile device turns the DTMF signals on and off at a fixed frequency, mimicking a pulse width modulation (PWM) signal. To control more devices it is possible to take advantage of the fact that most audio outputs of mobile devices are stereo. Thus, generating different audio tones in the left and right channels doubles the number of controlled devices (8 different on/off devices or 4 DC motors). One interesting option, possible only in devices with a USB Host feature, such as netbooks and desktop computers, is to add low cost USB multimedia sound devices, increasing the number of audio ports in the system. Another possibility is to directly connect the control signals of servo-motors (the ones used in model airplanes) to the output of the DTMF decoder. As each servo needs only one PWM input signal, each stereo audio channel can drive up to eight servo-motors.
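A minimal sketch of the digit-to-motor mapping is shown below. Table 2 is not reproduced in this text, and the real mapping depends on the wiring between the decoder and the H-bridge, so the bit assignment here (bits 0-1 driving motor A, bits 2-3 driving motor B) is an assumption made only for illustration.

```python
def motor_states(dtmf_value):
    """Interpret the 4-bit DTMF decoder output as two H-bridge input pairs.

    Assumed wiring: bits 0-1 -> motor A (IN1, IN2), bits 2-3 -> motor B.
    (IN1, IN2) = (1, 0) forward, (0, 1) reverse, (0, 0) coast, (1, 1) brake.
    """
    def decode(in1, in2):
        return {(1, 0): "forward", (0, 1): "reverse",
                (0, 0): "coast", (1, 1): "brake"}[(in1, in2)]

    a = decode(dtmf_value & 1, (dtmf_value >> 1) & 1)
    b = decode((dtmf_value >> 2) & 1, (dtmf_value >> 3) & 1)
    return a, b

# Example: decoder output 0b0110 -> motor A reverse, motor B forward (with this wiring).
print(motor_states(0b0110))
```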
Sensor Reading
Most mechatronics systems and sensor networks need sensors to sense the surrounding environment and their own state in order to decide what to do next. To accomplish this task in this system, sensors are connected to the input of a DTMF encoder chip (such as the TCM5087). Each time a sensor state changes, the encoder generates a DTMF tone that is captured and analyzed by the mobile device. According to the digits received it is possible to know which sensor generated the tone. More details on how to identify which sensor and its value are provided in Section 5.1.
As shown in Figure 3, up to four sensors can be connected to a DTMF encoder that generates tones according to the sensors' states. The generator's output is connected to the audio input of the mobile device, which continuously samples the audio input and checks whether the frequency pair that characterizes a DTMF digit is present in the signal. To accomplish this task, the discrete Fourier transform (DFT) is used, according to Equation (1):

X(m) = Σ_{n=0}^{N-1} x(n) · e^(-j2πnm/N)    (1)

where X(m) is the frequency magnitude of the signal under analysis at index m, x(n) is the input sequence in time (representing the signal) with index n, and N is the number of DFT points. N determines the resolution of the DFT and the number of samples to be analyzed. For performance reasons, a Fast Fourier Transform (FFT) [19] is used to identify the frequency components of the input signal and consequently detect the DTMF digit generated by the DTMF generator that encodes sensor data. For clear and detailed information about these digital signal processing concepts, please refer to Lyons' book [20] on digital signal processing. To optimize the FFT computation it is necessary to specify adequate values for N and Fs (the sample rate of the input signal). From Table 1, the highest frequency present in a DTMF tone is 1,633 Hz. Applying the fundamental sampling theorem results in a minimum Fs of 3,266 Hz (the theorem states that the sample rate should be at least twice the highest frequency to be captured). For implementation convenience and better compatibility, an 8 KHz sample rate is used, which most mobile devices can handle.
The lower the number of points in the FFT, the faster it is computed and the more digits per second can be recognized, leading to a higher sensor reading frequency. To compute the smallest adequate number of points for the FFT, Equation (2) is used:

f(m) = m · Fs / N    (2)

In Equation (2), f(m) is each frequency under analysis, Fs is the sampling frequency (8 KHz), and N is the number of FFT points to be minimized. Using N = 256 results in an analysis resolution of about 30 Hz, which is enough to distinguish one DTMF frequency component from another. This is consistent with the DFT parameters used by Chitode to detect DTMF digits [21]. The work developed by Khan also uses N = 256 and Fs = 8 KHz to detect DTMF tones [22].
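The detection step can be sketched as follows, assuming Fs = 8 kHz and N = 256 as stated in the text. The magnitude threshold is arbitrary, and this is only an illustrative Python sketch, not the authors' Java/Android implementation.

```python
import numpy as np

FS, N = 8000, 256
ROW_FREQS = [697, 770, 852, 941]
COL_FREQS = [1209, 1336, 1477, 1633]
DIGITS = [["1", "2", "3", "A"], ["4", "5", "6", "B"],
          ["7", "8", "9", "C"], ["*", "0", "#", "D"]]

def detect_digit(frame, threshold=1.0):
    """Detect a DTMF digit in a frame of N samples via the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), N))
    freqs = np.fft.rfftfreq(N, d=1.0 / FS)      # bin spacing = FS / N ~ 31 Hz

    def magnitude_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    row_mags = [magnitude_at(f) for f in ROW_FREQS]
    col_mags = [magnitude_at(f) for f in COL_FREQS]
    r, c = int(np.argmax(row_mags)), int(np.argmax(col_mags))
    if row_mags[r] < threshold or col_mags[c] < threshold:
        return None                              # no valid DTMF pair present
    return DIGITS[r][c]
```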
Later in this text, Section 5.1 and Table 3 explains the use and application of this technique to read four digital (on/off) sensors simultaneously.
One of the limitations of the described method is that it is restricted to binary (on/off) sensors. As shown in Section 5.1, this is enough for many applications, including measuring angles and speeds using incremental optical encoders. In any case, additional electronics could be used to encode analog signals and transmit them over the audio interface. As each DTMF digit encodes 4 bits, the transmission of an analog value converted with a 12-bit analog-to-digital converter would take 3 transmission cycles. It would also be possible to use digital signal multiplexing hardware to encode more information in the same system (at the cost of lower performance). The demultiplexing would be done in the mobile device by software.
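The 12-bit case mentioned above could be packed and unpacked as in the minimal sketch below; the extra encoding electronics themselves are not modeled, only the nibble arithmetic.

```python
def split_into_nibbles(value_12bit):
    """Split a 12-bit ADC reading into three 4-bit words (one DTMF digit each)."""
    return [(value_12bit >> shift) & 0xF for shift in (8, 4, 0)]

def join_nibbles(nibbles):
    """Reassemble the three received 4-bit words into the original value."""
    return (nibbles[0] << 8) | (nibbles[1] << 4) | nibbles[2]

assert join_nibbles(split_into_nibbles(0xABC)) == 0xABC
```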
Experimental Results
In order to evaluate the proposed system, an Android application was developed using the Android development kit, which is available for free. Experiments were executed in several mobile devices and desktop computers.
For the actuator control subsystem, experiments showed that generating PWM signals by software is possible, but the resulting signal shows large timing variations (a software-generated PWM with 1 ms ON time and 1 ms OFF time produced a real signal with 50 ms ON time and 50 ms OFF time). A better option is to use pre-recorded PWM-modulated DTMF tones, which yields highly reliable PWM of DTMF tones at frequencies greater than 1 KHz. As mobile devices have mature software support for playing pre-recorded audio, the PWM plays smoothly with low processor usage. In these experiments it was also observed that another practical way of making fine speed adjustments is to control the audio output volume, which results in proportional speed changes in the motor(s).
Experiments on the sensor reading subsystem are based on the FFT. The experiments showed that in the worst case the FFT computation time is 17 ms, leading to a theoretical limit of up to 58.8 FFTs per second. Figure 4 shows experimental results of the system running on 3 different devices. The tested devices were an early Android based phone, the HTC G1 with a 528 MHz ARM processor, an Android based tablet computer with a dual core 1 GHz ARM processor, and a 1 GHz PC netbook with an Intel Celeron processor (in this case a version of the Android operating system for the x86 architecture was used). The FFT was implemented using Java and executed in the virtual machine (dalvik) of the Android system. Using the native development system for Android, thus bypassing the virtual machine, would enhance these results. Another performance improvement can be reached using the Goertzel algorithm [23,24]. From Figure 4 it is possible to note that even the device with the least processing power is able to handle about 40 DTMF digits per second with zero packet loss. There are several causes for the increasing packet loss that starts at 40 Hz in the plot. One cause is the different audio input timing [25] of each device's audio hardware. Another cause is related to the task scheduler of the Android operating system (and the underlying Linux kernel), which can be nondeterministic when the CPU load is high.
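For reference, the Goertzel algorithm mentioned above evaluates the power of a single frequency component, which is cheaper than a full FFT when only the eight DTMF frequencies matter. A minimal sketch, not tied to the authors' implementation:

```python
import math

def goertzel_power(samples, target_freq, fs=8000):
    """Power of one frequency component of `samples` (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * target_freq / fs)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```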
As a reference for comparison, some performance tests were made with a Lego Mindstorms (TM) robotics kit, which is commonly used in educational robotics and in some scientific research. When connected to a computer or smartphone via a bluetooth wireless link, the maximum sensor reading rate of the Lego NXT brick is 20 Hz. If several sensors are used, the bandwidth is divided among them. For example, using 2 encoders and 2 touch sensors reduces the sensor reading rate to 5 Hz per sensor or less. If the NXT brick is connected to a computer through the USB port, the maximum sensor reading frequency rises to 166 Hz. If two encoders and two touch sensors (bumpers) are used, each sensor will be read at a rate of 41.5 Hz. The performance of the system proposed in this article is comparable to this commercial product, as a 40 Hz rate can be sustained for each sensor in a system with 4 sensors.
Case Study Application
The system described in Section 3 can be applied to several situations where a computing device needs to control actuators and read sensors, such as laboratory experiments, machine control and robotics. In this section, a mobile robot case study is described.
Low Cost Mobile Robot
As an application example, the presented technique was used to build a low cost educational mobile robot. The robot's frame, wheels, gears, and two motors cost 24 US dollars, and the electronic parts cost another 6 US dollars, for a total of 30 US dollars. Since most people already own a mobile phone or smartphone, it is assumed that the control device does not have to be bought, because a mobile device that the user already has can be used.
Even if the control device needed to be purchased, using a smartphone would still be a good option, because single board computers and other computers typically used in robots are more expensive than smartphones. Furthermore, smartphones include a camera, battery, Internet connection, and a variety of sensors that would otherwise have to be bought separately and connected to the robot's computer. With multi-core processors running at clock speeds above 1 GHz and with 512 MB or 1 GB of RAM, smartphones are a good alternative to traditional robot computers.
Important sensors in this kind of robot are the bumpers, to detect collisions, and the encoders, to compute odometry. Figure 5 shows a block diagram connecting the bumpers and 2 wheel encoders to the DTMF generator. Table 3 shows a truth table with the possible states of each sensor and the corresponding DTMF digits. Instead of using commercial encoder discs, several encoder discs were designed and printed with a conventional laser printer. The discs were glued to the robot's wheels and a standard CNY70 light reflection sensor was mounted in front of each disc.
As can be seen in Table 3, there is a unique DTMF digit that corresponds to each possible sensor state. Using basic binary arithmetic it is possible to obtain the individual state of each sensor. For example, from Table 3 it is known that the bumpers are bits 0 and 1. Applying a bitwise AND operation with the binary mask 0001 filters out all other sensor states, and the result will be either 0 or 1, indicating the left bumper state. For the right bumper, the same AND operation can be applied with the binary mask 0010. Furthermore, applying the AND operation with the binary mask 0011 yields the value 0011 only if both bumpers are active at the same time. Using these types of comparisons it is possible to know the state of each sensor (see the sketch below). In the case of the optical encoders, the system's software monitors state transitions and, for each transition, increments a counter that keeps track of how many pulses each encoder has generated.
Figure 5. Sensors connection in a mobile robot with differential drive.
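A minimal sketch of the bit masking described above; the bit positions for the encoders are assumed here (Table 3 itself is not reproduced), with bits 0-1 for the bumpers and bits 2-3 for the two encoder channels.

```python
LEFT_BUMPER, RIGHT_BUMPER = 0b0001, 0b0010
LEFT_ENCODER, RIGHT_ENCODER = 0b0100, 0b1000   # assumed bit positions

def decode_sensors(dtmf_value):
    """Extract individual sensor states from the 4-bit value encoded by one DTMF digit."""
    return {
        "left_bumper":   bool(dtmf_value & LEFT_BUMPER),
        "right_bumper":  bool(dtmf_value & RIGHT_BUMPER),
        "left_encoder":  bool(dtmf_value & LEFT_ENCODER),
        "right_encoder": bool(dtmf_value & RIGHT_ENCODER),
    }

def count_encoder_edge(prev_value, new_value, mask):
    """Add one pulse whenever the masked encoder bit changes state."""
    return 1 if (prev_value & mask) != (new_value & mask) else 0
```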
As seen in Figure 5, up to four sensors can be connected to each mono audio channel, allowing closed loop control of up to 4 motors if 4 encoders are used. Using the number of pulses counted for each encoder it is possible to compute displacement and speed for each wheel, as is done with other incremental encoders. This information can be used in classical odometry and localization systems to obtain the robot's position in a Cartesian space [26,27].
To properly design a robot with the presented technique, a relation between the wheel dimensions and the maximum linear speed that can be measured is introduced here, given by Equation (3). In Equation (3), V_Max is the maximum linear speed of the robot that can be measured, r is the radius of the wheel, c is the maximum digits-per-second detection capacity of the mobile device, and s is the encoder disc resolution (number of DTMF digits generated at each complete wheel revolution). Table 4 shows the distance measurement resolution and the maximum speed that can be measured according to the given equation, considering several encoder resolutions. Figure 6 shows odometry experimental results for this low cost robot. The error bars are the standard deviation of the real displacement that occurred. The blue line shows the real traveled distance and the red line shows the distance measured by the mobile phone using the proposed technique. Each point in the graph is the average value of ten samples.
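The exact form of Equation (3) is not reproduced in this text, so the sketch below assumes the relation implied by the definitions: the wheel turns at most c/s revolutions per second, giving V_Max = 2πr·c/s, and the distance resolution per detected digit is 2πr/s. The numeric example is hypothetical.

```python
import math

def odometry_limits(wheel_radius_m, digits_per_rev, max_digits_per_s=40):
    """Distance resolution and maximum measurable speed for a given encoder disc.

    Assumes V_Max = 2*pi*r*c/s (not taken verbatim from Equation (3)).
    """
    circumference = 2 * math.pi * wheel_radius_m
    resolution_m = circumference / digits_per_rev             # metres per DTMF digit
    v_max = circumference * max_digits_per_s / digits_per_rev  # metres per second
    return resolution_m, v_max

# Hypothetical example: 3 cm wheel radius, 16 digits per revolution, 40 digits/s.
res, vmax = odometry_limits(0.03, 16, 40)
print(f"resolution = {res * 100:.2f} cm/pulse, V_Max = {vmax:.2f} m/s")
```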
According to McComb and Predko, odometry errors are unavoidable due to several factors, such as wheel slip and small measurement errors in the wheel radius, that accumulate over time. They state that a displacement of 6 to 9 meters leads to an odometry error of 15 centimeters [28] or more, which is a percentage error of 1.6%-2.5%. The greatest odometry error of the system was 3.7%, for a 74 cm displacement; for a 130 cm displacement the error was 1 centimeter (0.76%). These values show that the proposed system's performance is consistent with the classical odometry errors described in the literature.
Figure 6. Experimental odometry results. X axis is the real traveled distance manually measured with a tape measure. Y axis is the distance computed by the mobile device using the proposed system with data from the encoders.
To close the control loop, the computed odometry information is fed to a classical PI (proportional-integral) controller whose set-point (or goal) is the desired distance to be traveled by the robot. The encoders are read at a 40 Hz rate, the position is computed and sent to the controller, which decides whether the robot has to speed up, keep moving, or stop. If any of the bumpers is activated in the meantime, the control loop is interrupted and the robot stops immediately.
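A minimal sketch of such a loop is shown below. The gains, the stopping tolerance, and the hardware hooks (read_odometry, read_bumpers, set_speed) are hypothetical placeholders standing in for the DTMF-based sensing and actuation described above; this is not the authors' controller.

```python
import time

def drive_distance(target_m, read_odometry, read_bumpers, set_speed,
                   kp=2.0, ki=0.5, rate_hz=40):
    """Drive until the measured displacement reaches target_m using a PI controller."""
    integral, dt = 0.0, 1.0 / rate_hz
    while True:
        if any(read_bumpers()):              # collision detected: stop immediately
            set_speed(0.0)
            return False
        error = target_m - read_odometry()
        if abs(error) < 0.005:               # within 5 mm of the goal
            set_speed(0.0)
            return True
        integral += error * dt
        command = kp * error + ki * integral
        set_speed(max(-1.0, min(1.0, command)))   # clamp to the valid duty range
        time.sleep(dt)
```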
Although most mobile devices can record and reproduce sounds, not all of them have physical connectors for both audio input and output. To solve this problem in one of the tested devices, which does not have an audio input connector, an earphone was attached near the built-in microphone of the device using a suction cup. In this particular case, an audio preamplifier must be used to generate tones with sufficient amplitude to be detected. The DTMF tones encoding sensor data are generated, amplified, and sent to the earphone fixed very near the built-in microphone of the mobile device. It is worth mentioning that this scheme works reliably because the DTMF system was designed to avoid interference from natural sounds such as music and people's voices [18].
Human Machine Interface
Users can control this robot from the web or using voice commands. Both a web server and a voice recognition system were implemented. The web server is embedded into the application, so no intermediate computers or servers are needed. Any Internet-enabled device can access the web page and issue commands to move the robot forward, backward, or to turn. For debugging purposes, the web server also shows variable values such as distance, encoder pulses, and recognized DTMF pulses from the sensors. The voice recognition system is straightforward to implement thanks to the Android API. When the user issues a voice command, the operating system recognizes it (in several languages) and passes to the robot's control application a vector of strings with the most probable phrases said. The application just has to select the one that best fits the expected command. Example voice commands are "Walk 30 centimeters" or "Forward 1 meter". The numbers said by the user are automatically converted to numeric values by the Android API, making it easy to implement software that makes the robot move a given distance using closed loop control.
Distance Measurement
An important sensor to aid the navigation of autonomous mobile robots is the distance measurement from the robot to obstacles in front of it. This task is typically performed by ultrasound or laser sensors. Another approach is based on stereo vision, but the computational costs are high. To support distance measurement in this low cost robot, a laser module (laser pointer) is used to project a brilliant red dot in the object in front of the robot. The camera then captures a frame and uses the projected dot position on its image plane to compute the distance to the obstacle based on simple trigonometry. This method is described by Danko [29] and better explained by Portugal-Zambrano and Mena-Chalco [30]. The algorithm assumes that the brightest pixels on the captured image are on the laser projected dot. Figure 7 depicts how the system works. A laser pointer parallel to the camera emits a focused red dot that is projected in an object at distance D from the robot. This red dot is reflected and projected in the camera's image plane. The distance pfc (pixels from center) between the center of the image plane (in the optical axis) and the red dot in the image plane is proportional to the distance D.
Equation (4) shows how to compute the distance using the described system. The distance between the camera and the laser (H) is known beforehand, and the number of pixels from the image center to the red laser dot (pfc) is obtained from the image. The radians per pixel (rpc) and the radian offset (ro) are obtained by calibrating the system, which consists of taking several measurements of objects at known distances and recording their pixel distance from the center (pfc). A linear regression algorithm then finds the best ro and rpc. Details on this calibration can be found in the work of Portugal-Zambrano and Mena-Chalco [30].
Figure 7. Distance measurement system using a camera and a laser pointer. H is the distance between the camera optical axis and the laser pointer, D the distance between the camera and the object, theta is the angle between the camera's optical axis and the laser reflected by the object. pfc (pixels from center) is the distance in pixels between the center of the image and the red dot. Figure adapted from Danko and Portugal-Zambrano [29,30].
As can be seen in Equation (4), the measurement range depends mainly on the baseline H, given by the distance between the laser and the camera, and on the number of pixels from the image center, pfc, which is limited by the camera resolution. This equation can be used to determine the measurement range. As the object gets farther away, its pfc tends to zero. Assuming pfc to be zero, it is possible to simplify Equation (4) to Equation (5), which gives the maximum distance that can be measured. In the same way, the minimum distance is given by half the camera resolution (because the measurement is made from the dot to the center of the image). Equation (6) specifies the minimum measurement distance. Table 5 shows some possible range values computed using these equations. Figure 8 shows a block diagram of the system. Each task is executed in a separate thread, so reading sensors and controlling motors do not interfere with each other. A task planner module allows the system to integrate the distance measurement, voice commands and web interface. The system running with all these subsystems used between 10% and 45% of the processor in all devices, leaving room to also execute complex algorithms embedded on the robot.
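Equations (4)-(6) are not reproduced in this text; the sketch below assumes the standard formulation from Danko and Portugal-Zambrano [29,30], theta = pfc·rpc + ro and D = H/tan(theta), and uses the calibration constants reported later in the experimental results (H = 4.5 cm, ro = 0.074, rpc = 0.008579). With these assumed constants the limit as pfc tends to zero is about 61 cm, consistent with the ~60 cm upper bound reported by the authors.

```python
import math

H_CM, RPC, RO = 4.5, 0.008579, 0.074   # baseline and calibration constants from the text

def distance_cm(pfc):
    """Distance to the obstacle, assuming D = H / tan(pfc * rpc + ro) as in [29,30]."""
    return H_CM / math.tan(pfc * RPC + RO)

# As pfc tends to 0 the measurable distance approaches its maximum, H / tan(ro).
print(f"pfc = 20 px -> D = {distance_cm(20):.1f} cm")
print(f"maximum measurable distance ~ {H_CM / math.tan(RO):.1f} cm")
```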
Experimental Results
The algorithm implementation is straightforward: the system scans a region of the image for a group of pixels with the greatest values (255) in the red channel. The current implementation searches for a pattern of 5 pixels in a cross shape, and the center of this cross gives the dot position from which the pfc value is computed. Figure 9 shows the results for 3 different distances. The red dot found by the algorithm is marked by a green circle, and the green line shows the distance from the laser dot to the image center.
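A minimal sketch of that search is shown below, assuming the frame is an RGB numpy array; the saturation threshold and region-of-interest handling are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def find_laser_dot(rgb_image, min_red=250):
    """Locate the laser dot as the brightest red pixel whose 4-neighbours
    (a 5-pixel cross) are also saturated in the red channel."""
    red = rgb_image[:, :, 0].astype(np.int32)
    rows, cols = red.shape
    best, best_val = None, -1
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            cross = (red[r, c], red[r - 1, c], red[r + 1, c], red[r, c - 1], red[r, c + 1])
            if min(cross) >= min_red and red[r, c] > best_val:
                best, best_val = (r, c), red[r, c]
    return best   # (row, col) of the dot centre, or None if not found

def pixels_from_center(dot_col, image_width):
    """pfc: horizontal distance in pixels between the dot and the image centre."""
    return abs(dot_col - image_width / 2)
```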
The baseline used is 4.5 centimeters and, after executing a linear regression with a spreadsheet, the calibration values found are ro = 0.074 and rpc = 0.008579.
Figure 9. Image seen by the robot's camera of the same object at different distances. Note the distance of the laser dot to the image center (shown by the green line) when the object is at different distances.
Table 6 shows experimental results of the system. The average error is 2.55% and the maximum observed error is 8.5%, which happened at the limit of the measurement range. The range of operation goes from 15 cm to 60 cm, but that can be changed by modifying the H distance. The advantage of such a system is that the processing needed is very low: the system has to find the brightest red dot on a small limited region of interest in the image, and then compute the distance using simple trigonometric relations. The implementation computes distance at a rate of 9 frames per second in the mobile device while running the FFTs and closed loop control system described. This makes this approach an interesting solution to distance measurement in robotics systems.
Figures 10, 11 and 12 show photos of the robot under the control of different devices. Thanks to the portability of the Android system, the same software can be used in PC computers and mobile devices using ARM processors. Although a proof of concept was developed using the Android operating system, the proposed architecture can be used with any system or programming language that can produce and record sounds. One should note that the main contribution of this work is the communication scheme, so these photos show a provisional robot's assembly setup used to validate the proposed architecture for robotics.
Figure 10. Robot under the control of a mobile phone. The audio input and output channels are connected in a single connector below the phone.
Figure 11. Robot under the control of a tablet computer. The audio output is driven from a P2 connector attached to the earphone jack and the audio input is captured by the built-in microphone of the device. Note the suction cup holding an earphone near the microphone.
Figure 12. Robot under the control of a netbook computer. Audio input and output channels are connected with independent P2 connectors. This is the most common case for computers.
Conclusions
This paper introduces a simple but universal control architecture that enables a wide variety of devices to implement control of mechatronics and automation systems. The method can be used to implement closed loop control systems in mechatronics systems using the audio channels of computing devices, allowing the processing unit to be easily replaced without the need for pairing or special configuration. Several obsolete and current devices can be used to control robots, such as PDAs, phones, and computers. Even an MP3 player could be used if control without feedback is needed: the sound produced by the player would drive the motors.
As an application example, the presented method is used to build a mobile robot with differential drive. The robot's complete cost, including frame, motors, sensors, and electronics, is less than 30 US dollars (in small quantities), and the parts can be easily found in stores or on the Internet. The mentioned price does not include the mobile device.
The method can be used for several applications such as educational robotics, low cost robotics research platforms, telepresence robots, autonomous and remotely controlled robots. In engineering courses it is also a motivation for students to learn digital signal processing theory, and all the other multidisciplinary fields involved in robotics.
Another interesting application of this system is to build sensor networks composed of smartphones that can gather data from their internal sensors and poll external sensors via audio tones, allowing sensor networks to be easily built and scaled using commercial off-the-shelf mobile devices instead of specific boards and development kits. | v2 |
2021-09-28T15:40:10.187Z | 2021-07-21T00:00:00.000Z | 239474009 | s2orc/train | Seroepidemiological Survey on the Impact of Smoking on SARS-CoV-2 Infection and COVID-19 Outcomes: Protocol for the Troina Study
Background After the global spread of SARS-CoV-2, research has highlighted several aspects of the pandemic, focusing on clinical features and risk factors associated with infection and disease severity. However, emerging results on the role of smoking in SARS-CoV-2 infection susceptibility or COVID-19 outcomes are conflicting, and their robustness remains uncertain. Objective In this context, this study aims at quantifying the proportion of SARS-CoV-2 antibody seroprevalence, studying the changes in antibody levels over time, and analyzing the association between the biochemically verified smoking status and SARS-CoV-2 infection. Methods The research design involves a 6-month prospective cohort study with serial sampling of the same individuals. Each participant will be surveyed about their demographics and COVID-19–related information, and blood samples will be collected upon recruitment and at specified follow-up time points (ie, after 8 and 24 weeks). Blood samples will be screened for the presence of SARS-CoV-2–specific antibodies and serum cotinine, the latter being the principal metabolite of nicotine, which will be used to assess participants’ smoking status. Results The study is ongoing. It aims to find a higher antibody prevalence in individuals at high risk for viral exposure (ie, health care personnel) and to refine current estimates on the association between smoking status and SARS-CoV-2/COVID-19. Conclusions The added value of this research is that the current smoking status of the population to be studied will be biochemically verified to avoid the bias associated with self-reported smoking status. As such, the results from this survey may provide an actionable metric to study the role of smoking in SARS-CoV-2 infection and COVID-19 outcomes, and therefore to implement the most appropriate public health measures to control the pandemic. Results may also serve as a reference for future clinical research, and the methodology could be exploited in public health sectors and policies. International Registered Report Identifier (IRRID) DERR1-10.2196/32285
Introduction
Overview SARS-CoV-2 is the novel coronavirus strain that was first reported as a cluster of viral pneumonia cases of unknown etiology in Wuhan, the capital city of Hubei Province in China, on December 31, 2019. The spread of SARS-CoV-2 reached pandemic proportions in March 2020, and as of April 2021, more than 141 million cases had been confirmed and more than 3 million fatalities had occurred worldwide [1].
In late February 2020, the first nonimported cases of COVID-19 were identified in Italy. Since then, SARS-CoV-2 spread rapidly in the community, as reported by national health authorities [2]. On May 23, 2020, at the time the project proposal was designed, the Italian Ministry of Health reported that, of the 229,327 people who had contracted the virus, 57,752 were still positive, of whom 8695 (15%) were hospitalized with symptoms, 572 (1.0%) were admitted to intensive care units (ICUs), and the remaining 48,485 (84%) were self-isolating at home; 32,735 (14.3%) had died and 138,840 (60.5%) had recovered, out of a total of 2,164,426 tested cases [3]. As of April 2021, 3,870,131 cases and 116,927 deaths have been recorded in the country [4].
Since the beginning of the pandemic, the global scientific community has been engaged in intense research efforts to understand all aspects of this public health crisis, from the clinical features of the disease and the risk factors for adverse outcomes to the patterns of viral spread in the population, the role of asymptomatic or subclinical cases in human-to-human transmission, and the serological response. The clinical features of COVID-19 range from a self-limited flu-like syndrome to progressive lung involvement with respiratory failure and widespread systemic effects [5][6][7]. Epidemiological surveillance has primarily focused on hospitalized patients with severe disease, and as such, the full spectrum of the disease, including the extent and proportion of mild or asymptomatic infections, is less clear. Evidence suggests that asymptomatic or oligosymptomatic infection is not uncommon [8][9][10][11]. A recent review reported that at least one third of SARS-CoV-2 infections are asymptomatic and that almost three quarters of persons who are asymptomatic at the time of a positive polymerase chain reaction (PCR) test result will remain asymptomatic [12]. Remarkably, sequelae of previous viral pneumonia have been reported in chest computed tomography scans of asymptomatic individuals [13], and SARS-CoV-2 transmission from asymptomatic cases to others has been documented [13,14].
In this context, a seroprevalence study is an ideal approach for measuring the true infection rate in general or specific populations [11,15,16]. In Italy, a nationwide survey was conducted by the National Institute of Statistics, reporting an IgG seroprevalence of 2.5% [17]. However, seroprevalence studies have rarely been used to retrospectively identify potential predictors of infection susceptibility and disease severity, both in the general population and in specific subgroups (eg, smokers, pediatric populations, or older adults).
Several risk factors for severe COVID-19 have been identified, including cardiovascular disease, diabetes, obesity, and chronic obstructive pulmonary disease [18][19][20][21]. Intuitively, one important additional risk factor is expected to be cigarette smoking. Smokers have a higher risk for developing viral and bacterial respiratory infections [22][23][24], being five times more likely to have influenza and twice more likely to develop pneumonia [25].
However, the role of smoking in SARS-CoV-2 infection susceptibility and COVID-19 outcomes is still unclear. Although there appears to be a higher risk for ICU admission and adverse outcomes [26][27][28], it has been reported that the prevalence of smoking among hospitalized COVID-19 patients is far lower than would be expected based on population smoking prevalence [29][30][31][32]. These findings were initially derived from Chinese case series, and although it is possible that the prevalence of smokers in the Chinese case series may be underrepresented due to inaccurate recording of their smoking status, similar findings have been reported in France [30], Germany [33], Italy [34], and the United States [32,35].
It is not clear to what extent the underrepresentation of smokers among COVID-19 inpatients reflects poor reporting of smoking status. Given the challenging circumstances of the pandemic, recall or reporting bias cannot be excluded. The possibility of inaccurate recording, false reporting, or underreporting of smoking status in overloaded wards and ICUs operating in a persistent state of emergency should not be underestimated. Improving the quality of clinical and behavioral data requires accurate and dedicated recording of smoking status. Alternatively, population-level data collected outside of hospital settings are required. Another important limitation is that most of the observations were unadjusted for smoking-related comorbidities, which are known to be associated with a higher risk of adverse outcomes in patients with COVID-19 [18]. Lack of adjustment for relevant confounders means it is not possible to disentangle the effect of smoking. Addressing all these limitations is important for evaluating clinical risk, developing clear public health messages, and identifying targets for intervention.
Research Objectives
Surveillance of antibody seropositivity in a population can allow inferences to be made about the cumulative incidence of infection in the population. Additionally, little is currently known about antibody kinetics. Asymptomatic infected persons may clear the virus more quickly than do symptomatic patients, and antibody titers in the former are likely to be lower, if they seroconvert at all, than in infected symptomatic patients [11,36]. Furthermore, the evidence on the association between smoking and SARS-CoV-2 infection susceptibility or COVID-19 outcomes is generally limited and of poor quality.
To summarize, there is a need for robust population-based evidence on the association of smoking with SARS-CoV-2 infection and COVID-19 outcomes, adjusting for potential confounding variables (eg, sociodemographic characteristics, key worker status, and comorbid health conditions), and a population seroprevalence study could be useful for this goal. The following key research questions will be addressed: • Does smoking increase susceptibility to SARS-CoV-2 infection?
• Does smoking affect the serological response after SARS-CoV-2 infection?
Research Proposal
We propose a 6-month prospective study that combines a random population sample (taken from residents of the town of Troina, the town with the highest prevalence of positive SARS-CoV-2 cases in Sicily at the time the protocol was drafted, in March-May 2020) and a convenience sample (taken from staff of Troina's main health care establishment, reported to have high infection levels in the same period) to investigate the prevalence of past infection, as determined by seropositivity (anti-SARS-CoV-2-specific IgG by enzyme-linked immunosorbent assay [ELISA]). The biochemically verified smoking status of the study population (ie, serum cotinine) will be correlated with serological data and COVID-19 outcomes (ie, clinical symptoms and hospitalization).
Study Aim
Epidemiological exposure data and venous blood (for measurements of anti-SARS-CoV-2-specific IgG and serum cotinine levels) will be systematically collected. Demographic, medical history, and epidemiological exposure data will be recorded from specifically designed questionnaires (for COVID-19 outcomes, relevant comorbidities, and smoking status) and shared rapidly in a format that can be easily aggregated, tabulated, and analyzed across many different regional (or national and international) settings for timely estimates of COVID-19 virus infection and its immunologic response rates according to the smoking status, and to inform public health responses and policy decisions.
Specific Aims
In summary, the main objectives of the study will be to: (1) quantify the proportion of the population with SARS-CoV-2 antibody seropositivity; (2) study the changes in antibody levels over time; and (3) analyze the association between the biochemically verified smoking status and SARS-CoV-2 infection and COVID-19 outcomes.
Study Design
This study will investigate the association between the biochemically verified smoking status of the study population and serological data as well as COVID-19 outcomes (ie, clinical symptoms and hospitalization). The research design involves a 6-month prospective multiple cohort study with serial sampling of the same individuals at each time point. Sampling will commence in July and be repeated at 8 weeks (Figure 1). A final follow-up visit will be carried out at 24 weeks.
Study Population and Setting
Within the geographic scope of the study, a high incidence of positive cases was identified in the general population of Troina, a town of around 9000 inhabitants in the province of Enna, in the center of Sicily. This town was hit hardest in terms of COVID-19 cases during the first wave in Italy [2][3][4], with hundreds of cases registered in the first epidemic weeks in March 2020 [37,38], and was declared a red zone on March 29, 2020, with the enforcement of lockdown restrictions in that area [38]. The study population will consist of a population-based, age-stratified cohort in Troina that will be sampled through random selection of town residents. Identification and recruitment of participants will span different age groups to determine and compare age-specific attack rates. For logistic reasons, specimen and data collection will be performed at a single location, asking participants to travel to that location to participate in the study. Targeted testing will also be extended to a convenience sample consisting of about 600 staff members of Troina's main health care establishment (high-risk individuals).
Eligibility Criteria
Any individual identified for recruitment, irrespective of age, can participate. Exclusion criteria will be refusal to provide informed consent or contraindication to venipuncture. Suspected or confirmed active/acute or prior SARS-CoV-2 infection should not be considered as an exclusion criterion for this investigation. Doing so would underestimate the extent of infection in the population. For individuals currently receiving medical care for COVID-19 infection, a family member or proxy may be used to complete the questionnaire on their behalf.
Smoking Status Definition
Current smokers will be defined as those who report that they smoke and have serum cotinine levels ≥20 ng/mL. Former smokers will be defined as those who report that they used to smoke in the past but not now and have serum cotinine levels <20 ng/mL. Never smokers will be defined as those who report that they never smoked in the past and have serum cotinine levels <20 ng/mL.
Data Collection
Each participant recruited into the study will be asked to complete a questionnaire that will record the following information: demographics, information about known COVID-19 and relevant clinical course, comorbidities, and smoking status.
Specimen Collection
A small amount of blood (10 mL) will be collected from each participant upon recruitment (T0) and at specified follow-up time points (T8wks, T24wks).
Specimen Transport and Biobanking
For each biological sample collected, the time of collection, the conditions for transportation, and the time of arrival at the study laboratory will be recorded. Specimens should reach the laboratory as soon as possible after collection. Serum should be separated from whole blood and stored at -20 °C or lower and shipped on dry ice. A biobanking facility will be established in Troina.
Sample Storage
Prior to testing, serum samples will be stored at -80 °C at the reference biobanking facility. It is recommended to aliquot samples prior to freezing to minimize freeze thaw cycles.
Serological Testing
Serologic assays of high sensitivity and specificity for SARS-CoV-2 have been recently validated and published. Serum samples will be screened for the presence of SARS-CoV-2-specific antibodies using a quantitative ELISA test for anti-SARS-CoV-2 IgG (Euroimmun, CND W0105040619) [39].
Serum samples will be stored at -80 °C until use, and the assay will be performed according to the manufacturer's protocol. The neutralization capability, specificity, and sensitivity of the test have been thoroughly investigated and published together with the assay validation [40]. Reagent wells of the assays are coated with recombinant structural protein (S1 domain) of SARS-CoV-2. The optical density (OD) will be detected at 450 nm, and a ratio of the reading of each sample to the reading of the calibrator, included in the kit, will be calculated for each sample (OD ratio). The cutoff value for IgG OD ratio is 0.3. Following blocking, diluted serum (1:100 or 2-fold serially diluted for titers) will be added and incubated at 37 °C for 1 hour in the 96-well microtiter ELISA plates. Antigen-specific antibodies will be detected using peroxidase-labeled rabbit antihuman IgG and TMB as a substrate. The absorbance of each sample will be measured at 450 nm. Laboratory procedures involving sample manipulation must be carried out in a biosafety cabinet.
Cotinine Assay
About 1 mL of serum will be pipetted into a 10 mL tube, and 100 ng/mL of ortho-cotinine, used as the internal standard, will be added. About 50 µL of 0.1 M aqueous sodium hydroxide solution will then be added to the culture tube, followed by 325 µL of chloroform. The tube will be secured with a cap, vortex-mixed for ~3 minutes (using a VX 2500 Multi Tube Vortex Mixer), and centrifuged for ~4 minutes (in a Beckmann Allegra centrifuge) at 2500 rpm. Using a glass Pasteur pipette, the top aqueous layer will be removed and discarded into the hazardous waste container, while the organic layer will remain in the tube. About 100 mg (0.1 g) of anhydrous sodium sulfate will be added to the organic layer and allowed to rest for ~3 minutes (which allows the sodium sulfate to absorb any water that may be present in the organic layer). In the end, the clear organic layer (with no water) will be carefully removed (without disturbing the settled sodium sulfate), concentrated to ~100 µL in a vial insert, and placed in a gas chromatography (GC) vial. The concentrated sample will be capped and arranged on an autosampler tray for GC injection. One microliter (µL) of each sample will be injected into an HP-5 capillary GC column (0.32 mm ID, 25 m length, 0.52 µm film thickness; bonded 5% phenyl and 95% dimethylpolysiloxane) of a GC with a nitrogen-phosphorus detector. The inlet temperature will be 250 °C in splitless mode. The initial oven temperature will be 70 °C with a 1-minute hold and will then be increased to 230 °C at a rate of 25 °C per minute. Every batch of samples will be run with 6 calibration levels (20, 50, 100, 200, 400, 600 ng/mL), 4 quality controls (20, 100, 400, 600 ng/mL), and 1 blank control for accurate quantification. The amount of cotinine will be reported in ng/mL. The limit of quantification of cotinine is 20 ng/mL.
Statistical Plan
Estimates of the margin of error as a function of seroprevalence are already low for 300 samples. Nevertheless, we will aim for >1000 participants in a sample that is representative of the population by gender and age group (0-17 years, 18-65 years, and 66 years and older).
For the enrollment of participants into the study, the following inclusion and exclusion criteria need to be fulfilled: To correctly represent the population involved, the age groups of interest have been assigned to three different categories by gender in terms of population size. A corresponding targeted sample size for this study is specified for each category.
For the entire population, we estimated the required sample sizes for margins of error of 0.03, 0.04, and 0.05 in the estimated proportions of the sampled population, as shown in Table 1. We will draw a multilayered sample with a confidence level of 97% and a margin of error of 3% to ensure the best reliability of the sample data. The planned total sample size comprises up to 1308 participants at the recruitment stage. The attrition rate is estimated at 10%.
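The sample-size figures quoted above follow from the standard formula for estimating a proportion, n = z^2 * p(1-p) / e^2; the short Python sketch below reproduces them under the usual worst-case assumption of maximal variance (p = 0.5).

```python
from math import ceil
from statistics import NormalDist

def sample_size(margin: float, confidence: float = 0.97, p: float = 0.5) -> int:
    """Sample size for estimating a proportion p with the given margin of error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

for e in (0.03, 0.04, 0.05):
    print(f"margin {e:.2f}: n = {sample_size(e)}")
# At a 97% confidence level with p = 0.5, a 3% margin of error requires on the order
# of 1300 participants, consistent with the planned recruitment target of up to 1308.
```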
Information will be collected in a standardized format according to the questionnaires and tools in the protocol. The data shared should include only the study identification number and not any personal identifiable information. We will report the following information: Sociodemographic and baseline characteristics will be summarized for the Troina population, for the convenience sample, and for the total sample recruited. Categorical variables will be reported as numbers and proportions with 95% CIs.
Between-group comparisons of categorical variables will be carried out using the chi-square test or the Fisher exact test, as appropriate. Continuous variables will be reported as means and SDs, and as medians and IQRs; between-group comparisons will be carried out using analysis of variance, the Mann-Whitney U test, or the Wilcoxon signed rank test, as appropriate. The proportion of participants developing a positive SARS-CoV-2 PCR test result over the course of the study will be reported as numbers and proportions with 95% CIs, separated into subgroups of symptomatic and asymptomatic infection. Data on the change in antibody levels from baseline to follow-up will be presented for the whole recruited population, and the statistical significance of the change will be estimated using repeated-measures t testing. The association between smoking status and the risk of COVID-19 infection will be tested using a 3 × 2 chi-square test of independence. A P value ≤ .05 will be considered the threshold of statistical significance for all comparisons.
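For illustration, the 3 × 2 test of independence mentioned above can be run as in the Python sketch below; the counts in the contingency table are invented purely to show the mechanics and do not reflect study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3 x 2 table: rows = never/former/current smokers,
# columns = seronegative/seropositive counts.
table = np.array([
    [420, 35],   # never smokers
    [260, 22],   # former smokers
    [310, 12],   # current smokers
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A p value <= .05 would be taken as evidence of an association between smoking
# status and SARS-CoV-2 seropositivity.
```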
Data Collection Procedures
To accomplish the specific aims of the project, data will be stored in a database system after each completed collection step and transferred to central data management for further data processing and merging. Each participant will be assigned a unique identification code consisting of a number identifier that will be used for data merging. After data collection, data validation checks will be performed as agreed in a data validation plan, and data cleaning procedures will be used as applicable to ensure the best achievable quality of data for analysis purposes. Only the study staff working with the fieldwork provider will be able to identify the participants based on the identification codes. Only deidentified data will be transferred from the fieldwork provider to the central data management. All electronic data files will be kept on secure servers with backup processes in place. Personal data of participants must be strictly kept separately from study data and is accessed only by authorized staff for the purposes of the study conduct. Before starting data collection, the involved staff will receive training on the background and objectives of this study, on eligibility criteria, on the participant selection procedure, on ethical obligations, on completion and validation of the procedure, and on the data collection platform. Local fieldwork staff will be trained for each relevant data collection process and logistic-related procedure.
Eligible participants will be informed about the study purpose, the tasks requested of them, the time of involvement, data confidentiality, and data protection. Once they have stated their willingness to participate, they will proceed with screening and study enrollment. Enrolled participants have the right to stop their participation at any time without any penalty; where possible, the reason for the premature end of participation will be recorded. Participants who drop out will not be replaced. Every effort will be made to protect participant confidentiality in accordance with the General Data Protection Regulation.
Results
We expect to find a higher prevalence of antibodies in individuals at high risk for viral exposure (ie, health care personnel and other essential workers) according to previous evidence and to refine current estimates on the association between smoking status and SARS-CoV-2/COVID-19. A total of 1785 participants have been enrolled in the study. Data cleaning and analyses are ongoing.
Discussion
This project is the first population-based study that uses seroprevalence data and an objective assessment of current smoking status to examine the association between smoking and SARS-CoV-2 infection susceptibility and severity. Additionally, the study will examine for the first time the magnitude of the seroconversion response in current smokers compared to former and never smokers, and the changes in antibody titers over time according to smoking status. Instead of focusing on hospitalized patients only, the study will also include infected individuals who were either asymptomatic or had COVID-19 but were not hospitalized. It will also consider confounding factors in the association between smoking and SARS-CoV-2 that were not addressed in previous research. Finally, the results from this survey may serve as a reference for other contexts and provide an actionable metric offering a clear overview of SARS-CoV-2 spread, and the methodology and findings can be exploited in public health research and policies to minimize the disease impact and to implement the most appropriate prevention measures to protect susceptible populations.
the study and will be involved in the project management to organize and coordinate the data collection process. RF contributed to the design of the study and revised the manuscript.
Conflicts of Interest
RP is a full tenured professor of Internal Medicine at the University of Catania (Italy) and Medical Director of the Institute for Internal Medicine and Clinical Immunology at the same University. In relation to his recent work in the area of respiratory diseases, clinical immunology, and tobacco control, RP has received lecture fees and research funding from Pfizer, GlaxoSmithKline, CV Therapeutics, NeuroSearch A/S, Sandoz, MSD, Boehringer Ingelheim, Novartis, Duska Therapeutics, and Forest Laboratories. Lecture fees from a number of European EC industry and trade associations (including FIVAPE in France and FIESEL in Italy) were directly donated to vaper advocacy no-profit organizations. RP has also received grants from European Commission initiatives (U-BIOPRED and AIRPROM) and from the Integral Rheumatology & Immunology Specialists Network initiative. He has also served as a consultant for Pfizer; Global Health Alliance for Treatment of Tobacco Dependence; CV Therapeutics; Boehringer Ingelheim; Novartis; Duska Therapeutics; Electronic Cigarette Industry Trade Association, in the UK; Arbi Group Srl; Health Diplomats; and Sermo Inc. RP has served on the Medical and Scientific Advisory Board of Cordex Pharma, Inc; CV Therapeutics; Duska Therapeutics Inc; Pfizer; and PharmaCielo. RP is also founder of the Center for Tobacco Prevention and Treatment at the University of Catania and of the Center of Excellence for the Acceleration of Harm Reduction at the same university, which has received support from the Foundation for a Smoke-Free World to conduct 8 independent investigator-initiated research projects on harm reduction. RP is currently involved in a patent application concerning an app tracker for smoking behavior developed for ECLAT SRL. RP is also currently involved in the following pro bono activities: scientific advisor for Lega Italiana Anti Fumo | v2 |
2018-10-27T16:49:07.464Z | 2018-01-01T00:00:00.000Z | 52958919 | s2orc/train | EPR and HPLC Investigation of Pigments in Thai Purple Rice
EPR and HPLC Investigation of Pigments in Thai Purple Rice
Kouichi Nakagawa, Wipawadee Yooin, and Chalermpong Saenjum 3
1 Division of Regional Innovation, Graduate School of Health Sciences, Hirosaki University, 66-1 Hon-Cho, Hirosaki 036-8564, JAPAN
2 Department of Pharmaceutical Sciences, Faculty of Pharmacy, Chiang Mai University, Chiang Mai, 50200 THAILAND
3 Cluster of Excellence on Biodiversity based Economics and Society (B.BES-CMU), Chiang Mai University, Chiang Mai, 50200 THAILAND
INTRODUCTION
Thai purple rice is attracting attention for its antioxidant effects and health benefits 1 3 . The purple color of this rice is due to the deposition of large amounts of the anthocyanin pigment. Certain compounds in the rice have been recognized as health-enhancing substances because of their antioxidant, anti-inflammatory, and anticancer effects 1 3 . Although the total contents of useful chemicals in such foodstuffs have been determined via analyses of powdered food crops, we have very limited knowledge about the distribution and concentration of useful chemicals within the crops.
Free radicals are generated in plants as a result of antioxidant scavenging activities and biochemical processes 4 7 . In most cases, stable paramagnetic species are found in the pigmented colored regions of plant seed coats 5 7 . These pigmented regions usually contain various organic compounds such as antioxidants. Electron paramagnetic resonance EPR can be used for the nondestructive detection of free radicals. The EPR spectrum appears either as an asymmetric line shape or as a series of multiple overlapping lines, depending on the sample being assessed 4 7 .
The X-band 9 GHz EPR imaging EPRI technique exhibits good spatial resolution and sensitivity. Several reports have described its application to investigate free radicals in naturally occurring high-value crops 4 7 . Noninvasive EPRI and EPR spectroscopy have provided detailed information regarding the location and concentration of paramagnetic species e.g., transition metal ions, transition metal complexes, and stable organic radicals in naturally occurring biological samples. Application of these techniques has revealed that the stable radicals are primarily located in the seed coat, while very few radicals were observed in the seed cotyledon. More specifically, these results indicate that stable radical species are only found within the seed coat, and few radical species are found in other seed parts 5 7 . These stable radicals could be the products of antioxidant reaction processes.
In addition to Thai purple rice, Japanese black rice Shikokumai also contains pigments, particularly anthocyanins 8 . Many papers have reported the radical scavenging and other beneficial functions of anthocyanins 8 . EPR detected paramagnetic species and the aforementioned functions of anthocyanin in black rice 9 . EPRI revealed that stable radicals are distributed in the exterior of rice. However, little is known about endogenous paramagnetic species e.g., Mn 2 and organic radicals present in rice. EPRI could be a useful tool for obtaining such information.
In this study, paramagnetic species in physically and chemically untreated rice were investigated using X-band EPR, noninvasive two-dimensional 2D EPRI, scavenging effect, and HPLC. EPR was carried out to detect paramagnetic species in whole rice, whereas 2D EPRI was used to demonstrate the spatial distribution of stable unreactive organic radicals within the rice grains. Possible antioxidants present in the extracted purple rice pigment fraction were also characterized using HPLC. The localization and concentration of the endogenous stable radicals within the rice are also discussed.
Samples
Khao Gam Pah E-Kaw purple rice was collected from the Mae Hong Son Rice Research Center, Mae Hong Son Province, and Niaw San-Pah-Tawng white rice was harvested from the Chiang Mai Rice Research Center, Chiang Mai, Thailand, in November 2016. These samples were used for EPR without any chemical or physical treatment. Black rice Murasaki no kimi was harvested from a rice paddy located in the far north Hirosaki, Aomori Prefecture of the main island in Japan, in the autumn of 2015, and was milled after harvesting. For EPR measurements, the rice grains 0.0230-0.0385 g/rice were sequentially inserted into an EPR tube outer diameter, 5.0 mm; inner diameter, 4.0 mm; Wilmad LabGlass, Buena, NJ, USA or an EPR rod outer diameter, 5.0 mm .
Chemicals for EPR analyses were purchased from Wako Pure Chemical Industries Ltd. Osaka, Japan . Cyanidin-3-O-glucoside chloride and peonidin-3-O-glucoside chloride were purchased from Extrasynthese Co., Ltd. Genay, France and used as received.
EPR and EPRI measurements
A JEOL RE-3X 9 GHz EPR spectrometer JEOL Co. Ltd., Tokyo, Japan was used for continuous wave CW measurements. The system was operated at 9.43 GHz using a 100-kHz modulation frequency. All CW EPR spectra were obtained in a single scan. Typical CW EPR settings were as follows: microwave power, 5 mW; time constant, 0.1 s; sweep time, 4 min; magnetic field modulation, 0.32 mT; and magnetic field sweep width, 5 300 mT.
A modified JEOL RE-3X 9 GHz EPR spectrometer was used for EPR imaging. A detailed description is available elsewhere 4, 6, 10 . All measurements were performed at ambient temperature.
Anthocyanin quanti cation by HPLC
The extracted samples of the pigmented part of purple rice were analyzed using HPLC (Agilent 1100), in accordance with a modified version of the method reported by Pengkumsri et al. 11 and Prior et al. 12. Briefly, purple rice (5.0 g) was extracted with 2% HCl in methanol (100 mL) using a shaking incubator at 150 rpm and 50 °C for 30 min. Subsequently, the supernatant was filtered through a 0.45-μm filter for HPLC analysis. The wavelength for UV detector analysis was set at 520 nm. A Symmetry Shield RP18 column (250 × 4.6 mm) obtained from Waters Co., Ltd. was used for this purpose. The mobile phase consisted of acetonitrile and 4% phosphoric acid. The linear gradient elution was operated from 0 to 40 min, with acetonitrile from 10% to 20% (flow rate of 1.0 mL/min, injection volume of 10 μL). The anthocyanin standards, including delphinidin-3-glucoside, cyanidin-3-O-glucoside, delphinidin, peonidin-3-O-glucoside, and malvidin-3-O-glucoside, were purchased from Extrasynthese Co., Ltd. (Genay, France).
2.4 Determination of antioxidant activity
2.4.1 ABTS assay
The 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radical cation decolorization assay was carried out using an improved version of the method reported by Saenjum et al. 2,11. The ABTS radical cation was generated by the oxidation of 7.0 mM ABTS with 2.5 mM potassium persulfate for 16 h in the dark at room temperature (stock solution). Then, the ABTS stock solution was diluted with absolute ethanol to give an absorbance of 0.7 ± 0.2 at 734 nm before being used (ABTS working solution). Different concentrations of the tested samples, along with the standards L-ascorbic acid, Trolox, and quercetin, were mixed with the ABTS working solution. The decrease in absorbance was measured after incubation in the dark for 5 min at room temperature. All measurements were carried out in triplicate. The results are expressed as vitamin C equivalent antioxidant capacity (VCEAC), Trolox equivalent antioxidant capacity (TEAC), and quercetin equivalent antioxidant capacity (QEAC).
Scavenging effects on superoxide anion
The scavenging effects of Thai purple rice and white rice extracts on superoxide anions were assayed following the method of Saenjum et al. 2. Superoxide anion radicals were generated in a phenazine methosulfate (PMS)-β-nicotinamide adenine dinucleotide (NADH) system by the oxidation of NADH and analyzed by the reduction of nitroblue tetrazolium (NBT). The reaction was performed in 200 μL of PBS buffer (pH 7.4) containing NADH, NBT, and EDTA in a 96-well plate, with different concentrations of the tested sample. PMS was added to initiate the reaction. After 5 min of incubation in the dark at room temperature, the absorbance was measured at 560 nm using a Beckman Coulter microplate reader. L-ascorbic acid and cyanidin-3-O-glucoside were used as positive controls. All samples were tested in triplicate. The results are expressed as the 50% inhibition concentration (IC50, μg/mL).
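The IC50 values reported in this assay are read off dose-response curves; the minimal Python sketch below shows one common way to interpolate the 50% inhibition concentration. The concentration and inhibition numbers are made up for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical dose-response data: concentrations (ug/mL) and % inhibition of NBT reduction.
conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
inhibition = np.array([8.0, 18.0, 31.0, 47.0, 68.0, 82.0])

# Interpolate the concentration giving 50% inhibition on a log-concentration scale.
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc))
print(f"IC50 ~ {ic50:.1f} ug/mL")
```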
RESULTS AND DISCUSSION
3.1 EPR of rice
Figure 1 shows the EPR spectra of (A) purple and (B) whole white rice, which were obtained with a 300 mT sweep width. The EPR spectrum of purple rice exhibited three distinct signals, which were stable for at least a few months, and corresponded to Fe3+, Mn2+, and organic radicals. The first signal was characteristic of the Mn2+ paramagnetic center (the M_I = 5/2-related sextet) 13. The apparent increases in the hyperfine couplings while moving from low to high fields were attributed to the Mn2+ moiety and the overlap of other paramagnetic centers.
The second signal was strong and reproducible. The relatively broad single peak observed at g ≈ 2.00(1) was indicative of stable organic radicals 4,13, suggesting the possibility of the radical being generated during scavenging activities and the presence of antioxidant-related organic compounds in the rice 9. The featureless EPR signal of the organic radicals can be due to the delocalization of unpaired electrons throughout the aromatic ring and relatively weak interactions with the neighboring nuclei. Figure 2 shows the EPR spectrum of the central region (g ≈ 2.00(1)). The distorted baselines of the spectra occur owing to the overlap with the Mn2+ signal and other paramagnetic species. The peak-to-peak line width (ΔH_pp) of the signal was 0.63 mT. In contrast, the EPR spectrum of white rice showed a very small signal. The radicals can be organic radicals and/or carbon-centered radicals based on the g-value obtained. The concentration was estimated by comparison with a TEMPOL solution of known concentration in a capillary tube (outer diameter, 1.0 mm; inner diameter, 0.9 mm). The number of spins per sample for the purple rice was 3.2 × 10^17.
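Spin counting against a reference of known concentration, as described above, is commonly done by comparing the double integrals of the first-derivative spectra; the Python sketch below outlines this ratio method under the assumption that both spectra were acquired with identical spectrometer settings (the function and array names are ours, and this is an illustration rather than the authors' exact procedure).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def double_integral(derivative_spectrum, field):
    """Double integration of a first-derivative CW EPR spectrum over the field axis."""
    absorption = cumulative_trapezoid(derivative_spectrum, field, initial=0.0)
    return np.trapz(absorption, field)

def spins_vs_reference(sample_spectrum, reference_spectrum, field, reference_spins):
    """Estimate the number of spins in a sample from the ratio of double integrals,
    relative to a reference (e.g., a TEMPOL solution of known concentration)."""
    return (reference_spins * double_integral(sample_spectrum, field)
            / double_integral(reference_spectrum, field))
```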
The third signal was characteristic of Fe3+ (the g-values of 4.34(5) and 3.21(4) at a lower magnetic field, marked with filled triangles in Fig. 1). The signal at g = 4.34(5) shows the characteristic peak for high-spin iron, while that at g = 3.21(4) could be low-spin iron; however, this could not be confirmed without knowing the other components of the signals (g_x and g_y). In order to further consider the relationship between Fe3+ and organic radicals, we analyzed black rice. Figure 1(C) shows the EPR spectrum of milled black rice. The Fe3+ signal was not observed in black rice at a low magnetic field. The EPR spectrum of black rice revealed two distinct signals, which corresponded to Mn2+ and organic radicals. In addition, we identified other paramagnetic species in the central region with very broad and intense signals. We also found similar signals for the husk.
In the case of white rice, we observed all three distinct signals, but the intensity of the organic radicals was lower than that seen in purple rice (Figs. 1 and 2). The signal intensity of the organic radicals was also very low in the endosperm region. Additionally, in white rice, the signal of the organic radicals in the husk was more intense than that in the endosperm. Figure 3 shows the EPR spectra of the husks of purple and white rice. The spectra show the presence of Fe3+, Mn2+, and the organic radicals, as well as strong paramagnetic species. The EPR spectra are similar to those in Fig. 1.
The signal of the organic radicals for white rice is much less intense than that seen in the purple rice spectra. In addition, when the sample weight is taken into account, the intensity of the Fe3+ signal for white rice is stronger than that for purple rice. In some cases, the iron ion is involved in iron-mediated reactions such as the Fenton reaction. The reactions produce reactive oxygen species (ROS) as follows:
Fe2+-complex + M → Fe3+-complex + ROS   (1)
ROS + antioxidants and/or others → stable organic radicals   (2)
If the iron-mediated reaction is the main source of ROS, the Fe3+ concentration increases with increasing stable antioxidant radicals (reaction 2). However, we observed a less intense Fe signal for the purple rice than for the white rice. In the case of black rice, we observed relatively strong signals from organic radicals, although the signal from iron was absent (Fig. 1(C)). We concluded that the iron-mediated reactions may not be the main reaction in this case. Moreover, the Mn2+ signals are weaker than those of the endosperm, in both types of rice. Hence, for further investigation, we focused on the organic radicals in purple rice.
2D EPRI of purple rice
To study the organic radicals present in purple rice in further detail, we performed EPRI studies, because the signal intensity of white rice was very weak. Figure 4 shows a sample image and an EPR image of the purple rice obtained using a scan width of 5 mT at the central region (g ≈ 2.00) of the spectrum in Fig. 2. The dashed line shows the approximate size of the rice. Based on the ΔH_pp value (0.63 mT), the spatial resolution of the rice EPRI was estimated to be 0.19 cm. It is noted that not all pigments are EPR-detectable radicals (at low levels), especially for EPRI.
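The spatial resolution quoted above follows from dividing the intrinsic linewidth by the applied field gradient; the tiny Python sketch below reproduces the estimate. The gradient value is inferred from the reported numbers (0.63 mT / 0.19 cm) and is an assumption on our part, not a stated instrument setting.

```python
# Spatial resolution of CW EPR imaging is roughly the linewidth divided by the gradient.
delta_h_pp_mT = 0.63       # peak-to-peak linewidth reported in the text
gradient_mT_per_cm = 3.3   # assumed field gradient (inferred, not reported)
resolution_cm = delta_h_pp_mT / gradient_mT_per_cm
print(f"estimated spatial resolution ~ {resolution_cm:.2f} cm")
```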
The signal overlap around the central region has been mentioned. The EPR baseline of the purple rice is not straight. Although we adjusted the baseline before the data processes (e.g., convolution and deconvolution), the background of the overlapped signals tends to create artifacts. Notably, the EPRI results showed that the organic radicals are mostly distributed near the embryo region of the purple rice. EPR studies of dry embryos of rice seeds (Oryza sativa L.) stored in a natural warm and humid environment showed free radical accumulation 14. We speculate that the embryo region may have higher scavenging and/or oxidant activities than other regions. While the results are very different from those in black rice 9, the EPR observation is similar to that reported for sesame seeds 5. We observed organic radicals, which were located on the outer surface region of the rice (Fig. 3(B)).
[Figure caption: The left-hand panel shows the EPR measurement set-up: (A) 5-mm EPR rod and (B) 5-mm EPR tube. The right-hand panel shows a 2D EPR image of (A) purple rice and (B) black rice. The dashed line shows the approximate size of the rice. The red signal at the center is an artifact.]
HPLC analyses of purple rice
Figure 5 shows an HPLC chromatogram of purple rice extract. The largest peak in the chromatogram was assigned to cyanidin-3-O-glucoside, as determined by comparison with the specific retention time and absorption spectra of the authentic standard, which usually corresponds to anthocyanin. The anthocyanins were quantified by determining the peak areas in the HPLC chromatograms; the concentrations of cyanidin-3-O-glucoside and peonidin-3-O-glucoside were found to be 87.5 and 32.3 mg/100 g dry weight, respectively. The anthocyanin content of Khao Gam Pah E-Kaw was higher than that reported by Yamoangmorn and Thebault, who observed that local Thai purple glutinous rice genotypes, measured at different pH values and wavelengths, had total anthocyanin contents in the range of 9.73-54.7 mg/100 g dry weight 15.
In order to establish the link between the stable radicals observed by EPR and the cyanidin-3-O-glucoside determined by HPLC, we carried out additional experiments. Figure 6 shows the EPR spectra of the reagent powder (cyanidin-3-O-glucoside, 0.0004 g) and purple rice without husk. The reagent was purchased on November 16, 2016. Both spectra are very similar to each other with respect to line shape and ΔH_pp. The featureless spectra could be attributed to the delocalization of unpaired electrons throughout the aromatic ring. The EPR signal intensity of the reagent is much lower than that of purple rice. The signal contribution of anthocyanin and other paramagnetic species is roughly 60% per gram of the purple rice, based on comparison with white rice (Fig. 2). Only a small amount of the reagent is in radical form. The result indicates the scavenging potential of the reagent. In addition, the EPR spectrum of peonidin-3-O-glucoside is the same (data not shown). This suggests that the unpaired spin of the reagent may delocalize over the anthocyanin frame. Thus, these compounds are likely to contribute significantly to the stable radicals in purple rice.
Scavenging activities may be responsible for the origin of stable radicals in the rice sample. The scavenging reaction scheme presents a possible explanation for stable radical production. We propose that intermediate stability i.e., an unreactive state plays a key role in antioxidant reactions or scavenging activity 16 . Physiological processes of plants produce reactive oxygen species ROS 16,17 , which together with nitric oxide are involved in regulating various processes 18,19 . ROS react with antioxidants such as anthocyanins e.g., cyanidin-3-O-glucoside and peonidin-3-Oglucoside to produce stable radicals, which may not easily propagate further as shown in the scavenging reaction scheme 3 . 3 Anthocyanins are polyphenol compounds that form stable radical intermediates. EPR detects such resultant radical intermediates 16,20 . EPR and EPRI provided further insight into the intermediate species of pigmented seeds. Our speculations about the possible compounds and the reaction scheme are based on the results obtained from EPR, EPRI, HPLC, and previous reports. Our consideration regarding the radicals, in relation to the antioxidant reaction scheme, is valid. However, we have modified the scheme in order to account for the possibilities of oxidants.
Previous studies on purple rice containing anthocyanin pigment in the bran showed that this pigment mainly comprises cyanidin-3-O-glucoside and peonidin-3-O-glucoside 2 . These studies also measured ABTS and superoxide radical scavenging activities of purple rice Table 1 . The scavenging activities of purple rice are approximately 3.5 times higher than those of white rice under the experimental conditions. Thus, the present results reveal that stable radicals can be produced during the scavenging activities of antioxidant compounds in purple rice. The EPR results suggest that cyanidin-3-O-glucoside of purple rice is detectable.
Our HPLC results show that the major compound in purple rice is cyanidin-3-O-glucoside (Fig. 5). This agrees with previous reports on black rice 8,9. The EPR signal intensity (g = 2.00(1)) of purple rice in our study is approximately 3.7 times stronger per gram of sample than that of white rice (Fig. 2). In addition, the scavenging activities of purple rice are approximately 3-4 times higher than those of white rice (Table 1). Our results are consistent with those of previous studies. However, we have not excluded other contributions (e.g., other paramagnetic species) to the signal.
In summary, when plant seeds have pigments, strong and stable radical signals were obtained, suggesting that the radicals are related to the pigment. X-band EPR detected at least three different paramagnetic species (Mn2+, Fe3+, and stable radicals) in both types of rice. The spatial distribution of endogenous organic radicals was imaged using noninvasive 2D EPRI, which revealed that the stable radicals are located in the pigmented embryo region of the rice and not in the rice interior. The possible stable organic radicals were inferred from the EPR, EPRI, scavenging effect, and HPLC results.
[Table 1 fragment: Cyanidin-3-O-glucoside, -, -, -, 9.08 ± 0.39. Values are expressed as mean ± SD (n = 3); statistically significant at p < 0.05.] | v2
2020-11-19T02:00:50.628Z | 2020-11-18T00:00:00.000Z | 243861129 | s2orc/train | Learning-Augmented Weighted Paging
We consider a natural semi-online model for weighted paging, where at any time the algorithm is given predictions, possibly with errors, about the next arrival of each page. The model is inspired by Belady's classic optimal offline algorithm for unweighted paging, and extends the recently studied model for learning-augmented paging (Lykouris and Vassilvitskii, 2018) to the weighted setting. For the case of perfect predictions, we provide an $\ell$-competitive deterministic and an $O(\log \ell)$-competitive randomized algorithm, where $\ell$ is the number of distinct weight classes. Both these bounds are tight, and imply an $O(\log W)$- and $O(\log \log W)$-competitive ratio, respectively, when the page weights lie between $1$ and $W$. Previously, it was not known how to use these predictions in the weighted setting and only bounds of $k$ and $O(\log k)$ were known, where $k$ is the cache size. Our results also generalize to the interleaved paging setting and to the case of imperfect predictions, with the competitive ratios degrading smoothly from $O(\ell)$ and $O(\log \ell)$ to $O(k)$ and $O(\log k)$, respectively, as the prediction error increases. Our results are based on several insights on structural properties of Belady's algorithm and the sequence of page arrival predictions, and novel potential functions that incorporate these predictions. For the case of unweighted paging, the results imply a very simple potential function based proof of the optimality of Belady's algorithm, which may be of independent interest.
Introduction
Paging is among the most classical and well-studied problems in online computation. Here, we are given a universe of pages and a cache that can hold up to k pages. At each time step, some page is requested, and if it is not in the cache (called a cache miss or page fault), it must be fetched into the cache (possibly evicting some other page), incurring a unit cost. The goal of the algorithm is to minimize the total cost incurred. The problem is well understood through the lens of competitive analysis [51], with several optimal k-competitive deterministic and O(log k)-competitive randomized algorithms known for it [1,30,47]. A remarkable property of paging is that the offline optimum can be computed with rather limited knowledge of the future: only the relative order of the next request times for pages. In particular, Belady's classic Farthest in Future (FiF) algorithm [16], which at any time greedily evicts the page whose next request is farthest in the future, gives the optimal solution.
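For the unweighted problem, Belady's rule is simple enough to state in a few lines of code; the following Python sketch is an illustrative (and deliberately unoptimized) implementation of the Farthest-in-Future policy, not the weighted algorithm developed in this paper.

```python
def belady_fif(requests, k):
    """Belady's Farthest-in-Future rule for unweighted paging: on a miss with a full
    cache, evict the page whose next request is farthest in the future."""
    cache, misses = set(), 0
    for t, page in enumerate(requests):
        if page in cache:
            continue
        misses += 1
        if len(cache) == k:
            def next_use(q):
                future = [s for s in range(t + 1, len(requests)) if requests[s] == q]
                return future[0] if future else float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return misses

print(belady_fif(["a", "b", "c", "a", "d", "b", "a", "c"], k=3))  # optimal number of faults
```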
A natural and well-studied generalization of paging is weighted paging, where each page p has an arbitrary fetching cost w_p > 0, and the goal is to minimize the total cost. Besides the practical motivation, weighted paging is very interesting theoretically as the phase-based analyses for unweighted paging do not work anymore (even if there are only two different weights), and as it is a stepping stone in the study of more general problems such as metrical task systems (MTS) [20] and the k-server problem [46]. In fact, O(log k)-competitive randomized algorithms for weighted paging were obtained relatively recently, and required new techniques such as the primal-dual method [11,13] and entropic regularization [24]. These ideas have been useful for various other problems and also for MTS and the k-server problem [10,24,23,28].
Learning-augmented setting. Motivated by advances in machine learning, Lykouris and Vassilvitskii [45] recently introduced a new semi-online model where at each step, the algorithm has access to some, possibly erroneous, machine-learned advice about future requests and studied the paging problem in this model. Here, at each time t, along with the current page request we are also given the predicted arrival time for the next request of the same page. This can be viewed as generalizing the setting for Belady's FiF algorithm to allow incorrect predictions. They design an algorithm with competitive ratio O(1) when the predictions are accurate, and which degrades smoothly as the prediction error increases, but never exceeds O(log k). These results have been subsequently refined and improved in [50,52].
In this work, we study whether Belady's algorithm and the results in the learning-augmented setting for unweighted paging can be extended to the weighted case. Suppose each page weight is one of ℓ distinct values w_1, . . . , w_ℓ; the pages are thus divided into ℓ disjoint weight classes. Then recent work by Jiang et al. [37] and Antoniadis et al. [6] shows that even with perfect predictions, any deterministic (resp., randomized) online algorithm must have competitive ratio Ω(ℓ) (resp., Ω(log ℓ)), provided ℓ ≤ k. In particular, for ℓ ≥ k, predictions do not give any advantage.
As Belady's algorithm is 1-competitive for ℓ = 1, this raises the natural question whether there are algorithms with guarantees that are only a function of ℓ, and independent of the cache size k. In typical scenarios ℓ is likely to be small and much less than k. Also if the weights range from 1 to W, then one can assume ℓ = O(log W) by rounding them to powers of 2.
Prediction model and error
We consider the following model for learning-augmented weighted paging. At each time t = 1, . . . , T, the algorithm receives a request to some page p_t as well as a prediction τ̂_t ∈ N for the next time after t when p_t will be requested again. Let τ_t ∈ N be the actual time when p_t is next requested (or τ_t = T + 1 if it is not requested again). In the unweighted setting of [45,50,52], the prediction error was defined as the ℓ1-distance between the actual and predicted next arrival times, which in the weighted case generalizes naturally to η_w := Σ_t w(p_t) · |τ_t − τ̂_t|. We remark that although the predictions are for the arrival times, we use them only to get a relative ordering of pages within the same weight class by their next predicted arrival times. We define the following more nuanced error measure that allows us to obtain tighter bounds. For any weight class i, we call a pair (s, t) of time steps an inversion if both p_s and p_t belong to weight class i and τ_s < τ_t but τ̂_s ≥ τ̂_t. Let inv_i(σ, τ̂) := |{s ∈ N | ∃ t ∈ N : (s, t) is an inversion for weight class i}|. In other words, inv_i(σ, τ̂) is the number of surprises within class i, i.e., the number of times some page arrives although some other page of the same class was expected earlier. Let η := Σ_i w_i · inv_i(σ, τ̂). We drop σ, τ̂ from the notation when it is clear from context and bound the competitive ratio of our algorithms in terms of η. Since η ≤ 2η_w [50, Lemma 4.1], our bounds hold for the error measure η_w as well. In fact, the relationship holds even if inv_i is defined as the total number of inversions within weight class i, and thus our notion of η can be significantly smaller than η_w (see [29] for an example).
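To make the error measure concrete, the following Python sketch computes the weighted number of surprises η directly from the inversion definition above. It is a quadratic-time implementation for illustration only, and the variable names are ours.

```python
def weighted_surprises(next_arrival, predicted, weight_class, class_weight):
    """Compute eta = sum_i w_i * inv_i for a request sequence.
    next_arrival[t], predicted[t]: actual and predicted next arrival of the page
    requested at time t; weight_class[t]: its class; class_weight[i]: weight of class i."""
    eta = 0.0
    T = len(next_arrival)
    for i in set(weight_class):
        times = [t for t in range(T) if weight_class[t] == i]
        # A request s is a "surprise" if some same-class request t witnesses an inversion.
        surprised = {s for s in times
                     if any(next_arrival[s] < next_arrival[t] and
                            predicted[s] >= predicted[t] for t in times)}
        eta += class_weight[i] * len(surprised)
    return eta
```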
Our results
We obtain algorithmic results for learning-augmented weighted paging, both for the case of perfect predictions and for predictions with error. Even though the latter setting generalizes the former, we describe the results separately as most of the key new ideas are already needed for perfect predictions. To the best of our knowledge, no bounds better than O(k) and O(log k) were previously known even for the case of ℓ = 2 weight classes with perfect predictions. We first consider the deterministic and the randomized settings when the predictions are perfect.
Theorem 1.1. There is an ℓ-competitive deterministic algorithm for learning-augmented weighted paging with ℓ weight classes and perfect predictions.
The competitive ratio is the best possible by the lower bound of [37] and is O(log W) if page weights lie in the range [1, W]. Also, notice that the algorithm is exactly ℓ-competitive; in particular, for ℓ = 1 we have an optimal algorithm. Since ℓ = 1 corresponds to the unweighted case, Theorem 1.1 can be viewed as generalizing Belady's FiF algorithm to the weighted case.
Our algorithm is quite natural, and is based on a water-filling (primal-dual) type approach similar to that for the deterministic k-competitive algorithm for weighted paging due to Young [54]. Roughly speaking, the algorithm evicts from each weight class at a rate inversely proportional to its weight, and the evicted page is the one whose next arrival is (predicted) farthest in the future for that weight class. While the algorithm is natural, the analysis is based on a novel potential function that is designed to capture the next predicted requests for pages. The algorithm and its analysis are described in Appendix B.
Furthermore, for ℓ = 1, this gives a new potential-function proof for the optimality of FiF. This new proof seems simpler and less subtle than the standard exchange argument and might be of independent interest; see Appendix A.
Theorem 1.2. There is an O(log ℓ)-competitive randomized algorithm for learning-augmented weighted paging with ℓ weight classes and perfect predictions.
The competitive ratio is the best possible [6,37], and is O(log log W) for page weights in the range [1, W]. This result is technically and conceptually the most interesting part of the paper and requires several new ideas.
The algorithm splits the cache space into ℓ parts, one for each weight class. Within each class, the cached pages are selected according to a ranking of pages that is induced by running several copies of Belady's FiF algorithm simultaneously for different cache sizes. The key question is how to maintain this split of the cache space over the ℓ classes dynamically over time. To get an O(log ℓ) guarantee, we need to do some kind of a multiplicative update on each weight class, however there is no natural quantity on which to do this update. The main idea is to carefully look at the structure of the predicted requests and the recent requests and use this to determine the rate of the multiplicative update for each class. We give a more detailed overview in Section 1.3.
Both our algorithm and its analysis are rather complicated and we leave the question of designing a simpler randomized algorithm as an interesting open question.
Prediction errors and robustness. The algorithms above also work for erroneous predictions, and their performance degrades smoothly as the prediction error increases. In particular, our deterministic algorithm has cost at most ℓ · OPT + 2ℓη, and our randomized algorithm has expected cost O(log ℓ · OPT + ℓη). (Recall that ℓ is the number of weight classes, and η is the weighted number of surprises.) Using standard techniques to combine online algorithms [18,32,6], together with the worst case k- and O(log k)-competitive deterministic and randomized algorithms for weighted paging, this gives the following results.
Theorem 1.3. There is an O(min{ℓ + ℓη/OPT, k})-competitive deterministic algorithm for learning-augmented weighted paging.
Theorem 1.4. There is an O(min{log ℓ + ℓη/OPT, log k})-competitive randomized algorithm for learning-augmented weighted paging.
Implications for interleaved caching. Our algorithms actually only require the relative order of pages within each weight class, and not how the requests from different classes are interleaved. Unweighted paging has also been studied in the interleaved model, where ℓ request sequences σ^(1), . . . , σ^(ℓ) are given in advance and the adversary interleaves them arbitrarily. Here, tight Θ(ℓ) deterministic and Θ(log ℓ) randomized competitive algorithms are known [15,26,41]. Our results thus extend these results to the weighted setting, where each sequence has pages of a different weight.
We now give a more detailed overview of our algorithms. We mainly focus on the case of perfect predictions, and briefly remark how to handle errors towards the end. As we aim to obtain guarantees as a function of ℓ instead of k, the algorithm must consider dynamics at the level of weight classes in addition to that for individual pages. Our algorithms have two components: a global strategy and a local strategy. The global strategy decides at each time t how many cache slots to dedicate to each different weight class i, denoted k_i(t). Since we have k cache slots in total, we maintain Σ_i k_i(t) = k with k_i(t) ≥ 0 for all i. The local strategy decides, for each weight class i, which k_i(t) pages to keep in the cache.
Suppose page p requested at time t belongs to weight class i, and is fetched as it is not in the cache. This increases k_i(t), the number of pages of class i in cache, and the global strategy must decide how to decrease k_j(t) for each class j ≠ i to maintain Σ_j k_j(t) = k. In the deterministic case, roughly, the global strategy simply decreases the k_j(t) uniformly at rate 1/w_j (some care is needed to ensure that the k_j(t) are integral, and the idea is implemented using a water-filling approach), and the local strategy is Belady's FiF algorithm.
This suffices for a competitive ratio of ℓ, but to get an O(log ℓ) bound in the randomized case, one needs more careful multiplicative updates for the k_j(t). However, it is not immediately clear how to do this and naively updating k_j(t) in proportion to, for example, k_j(t)/w_j or (k − k_j(t))/w_j (analogous to algorithms for standard weighted paging), does not work.
Update rule. A key intuition behind our update rule is the following example. Suppose that for each class i, the adversary repeatedly requests pages from some fixed set S_i, say in a cyclic order. Assuming |S_i| ≥ k_i, we claim that the right thing to do is to update each k_i multiplicatively in proportion to |S_i| − k_i (and inversely proportional to w_i). Indeed, if |S_i| is already much larger than k_i(t), the algorithm anyway has to pay a lot when the pages in S_i are requested, so it might as well evict more aggressively from class i to serve requests to pages from other classes. On the other hand, if k_i(t) is close to |S_i|, then the algorithm should reduce k_i(t) at a much slower rate, since it is already nearly correct.
The difficulty in implementing this idea is that the request sequence can be completely arbitrary, and there may be no well-defined working set of requests for class i. Moreover, even if the requests have such structure, the set S_i could vary arbitrarily over time. Our key conceptual and technical novelty is defining a suitable notion of S_i. The definition itself is somewhat intricate, but allows us to maintain a "memory" of recent requests by utilizing a subset of the real line. (A formal description appears in Section 3.) Our definition relies on a crucial notion of page ranks that we describe next.
Page ranks. Let us fix a weight class i, and consider the request sequence restricted to this class. We say that a page has rank r at time t if Belady's algorithm running on this restricted sequence with a cache of size r contains the page at time t, but an alternate version of Belady's algorithm with a cache of size r − 1 does not. The rank of pages changes over time, e.g., a requested page always moves to rank 1 in its weight class. In Section 2, we describe various properties of this ranking.
Page ranks allow us to define a certain canonical local strategy. More importantly, they allow us to view the problem in a clean geometric way, where the requests for pages correspond to points on the line. In particular, if the requested page has rank r, we think of the request arriving at point r on the line. Even though the page request sequence can be arbitrary, the resulting rank sequences in the view above are not arbitrary but have a useful "repeat property", which we crucially exploit in both designing the update rule for the k_i and analyzing the algorithm. (Prediction errors are incorporated quite directly in the above approach, and require only an accounting of how these errors affect the ranks and the repeat property.) The overall algorithm is described in Section 3 and the analysis is described in Section 4. The analysis uses several potential functions in a careful way. In particular, besides a relative-entropy type potential to handle the multiplicative update of the k_i, we use additional new potentials to handle the dynamics and evolution of the sets S_i.
Other related work
Due to its relevance in computer systems and the elegance of the model, several variants of paging have been studied [11,19,35]. An important direction has been to consider finer-grained models and analysis techniques to circumvent the sometimes overly pessimistic nature of worst-case guarantees. In particular, several semi-online models, where the algorithm has some partial knowledge of the future input, have been studied, including paging with locality of reference [21,36,31], paging with lookahead [2,22,53], Markov paging [38], and interleaved paging [15,26,41]. Paging algorithms have also been explored using alternative notions of analysis such as loose competitiveness [54], diffuse adversaries [39], bijective analysis [4], and parameterized analysis [3].
In [37], a different prediction model for weighted paging was considered, where at each time the algorithm has access to a prediction of the entire request sequence until the time when every page is requested at least once more. We note that this requires much more predicted information than our model and the analogous models for unweighted paging.
As discussed in Section 1.3, our algorithm will have two parts: a global strategy and a local strategy. At each time t, the global strategy specifies the cache space k_i(t) for each weight class i and the local strategy decides which pages of class i to keep. In this section, we define a notion of ranks for pages within each class. This will allow us to not only define a local strategy for each class that is close to optimal given any global strategy, but also view the weighted paging problem with arbitrary request sequences in a very clean way in terms of what we call rank sequences.
2.1 Page ranks
Fix a weight class i. We define a notion of time-varying ranks among pages of class i. Let σ|_i be the actual request sequence and τ̂|_i the sequence of predictions, restricted to class i. We can view this as an input for unweighted paging. For brevity, we write σ = σ|_i and τ̂ = τ̂|_i. Let BelPred(m) be the variant of Belady's algorithm for cache size m that, upon a cache miss, evicts the page with the farthest-in-future predicted next arrival time (breaking ties arbitrarily, but consistently for all m). Note that if all the predictions in τ̂ are accurate, this is simply Belady's algorithm. For class i, let B_{i,m}(t) be the set of pages in the cache of BelPred(m) at time t; we call B_{i,m}(t) a configuration or cache state. We may drop i and/or t from the notation, and assume that |B_m| = m for all m. A simple inductive argument, whose proof we defer to Appendix C, shows that the configurations of BelPred(m) differ in exactly one page for consecutive values of m: for every m and every time t, B_m(t) ⊆ B_{m+1}(t) (Lemma 2.1). Intuitively, Lemma 2.1 simply says that the set of items in the cache when running BelPred(m) with a cache of size m will be a subset of the items when running BelPred(m + 1) on a cache of size m + 1. It leads to the following well-defined notion of rank on pages at any time t: page p has rank m at time t if p ∈ B_m(t) \ B_{m−1}(t) (where B_0(t) := ∅).
We now describe how the ranks change when a page is requested. Suppose the requested page had rank m_0 just before it was requested. Then it will have new rank 1, as it lies in the cache of BelPred(m) for every m ≥ 1. The pages with ranks > m_0 do not change. Consider the set of pages with (old) ranks 1, . . . , m_0 − 1. (Note that this is precisely B_{m_0−1}.) Among those pages, the one whose next predicted request is farthest in the future will be updated to have a new rank of m_0; denote its original rank by m_1. All pages with rank between m_1 and m_0 will keep their ranks. Continuing this way, if we consider the pages of ranks 1, 2, . . . , m_1 − 1 (corresponding to B_{m_1−1}), the page among those whose predicted request appears farthest in the future will have a new rank of m_1; denote its original rank by m_2. We can recursively define m_3, m_4 and so on in a similar fashion. See Figure 1 for an illustration. More formally, we have the following lemma.
Lemma 2.2. (Rank update) For a given time t, let q_r denote the page with rank r, and let m_0 be the rank of the next requested page q_{m_0}. Starting from m_0, define the sequence m_0 > m_1 > · · · > m_s = 1 inductively as follows: given m_{j−1}, let m_j be the rank in {1, . . . , m_{j−1} − 1} whose page has the predicted next request farthest in the future. If m_j = 1, then s := j and the sequence ends. Then at time t + 1, page q_{m_0} will have rank 1, and for j = 1, . . . , s, page q_{m_j} will have rank m_{j−1}. All other ranks remain unchanged.
Proof. Clearly q_{m_0} will receive rank 1, as it must lie in the cache of BelPred(1). Moreover, as BelPred(m) for m ≥ m_0 does not incur a cache miss, ranks greater than m_0 do not change.
Next, by the definition of m_j, the page evicted by BelPred(m) for m ∈ {m_j, m_j + 1, . . . , m_{j−1} − 1} is q_{m_j}, so q_{m_j} will have new rank ≥ m_{j−1}. However, as BelPred(m) for m ≥ m_{j−1} keeps q_{m_j} in its cache (as it evicts another page), q_{m_j} will have new rank exactly m_{j−1}. Also, as no page other than q_{m_0}, . . . , q_{m_s} will be loaded or evicted by any BelPred(m), none of these other pages' ranks will change.
[Figure 1 caption (fragment): ... m_j is the rank among {1, . . . , m_{j−1} − 1} whose associated page has the latest predicted next request time, and m_s = 1.]
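The rank update of Lemma 2.2 is easy to simulate directly; the Python sketch below does so for a single weight class. Names such as pages_by_rank and predicted_next are ours, and the requested page's own prediction is assumed to be refreshed separately once the request is served.

```python
def update_ranks(pages_by_rank, predicted_next, requested_rank):
    """Apply the rank update described above for one weight class.
    pages_by_rank[r-1] is the page currently at rank r; predicted_next maps a page to
    its predicted next arrival time. Returns the new rank order (index 0 = rank 1)."""
    order = list(pages_by_rank)
    requested = order[requested_rank - 1]
    m = requested_rank
    while m > 1:
        # Among ranks 1..m-1, the page predicted farthest in the future moves up to rank m.
        j = max(range(m - 1), key=lambda idx: predicted_next[order[idx]])
        order[m - 1] = order[j]
        m = j + 1
    order[0] = requested  # the requested page takes rank 1
    return order

# Example with five pages and hypothetical predicted next arrivals.
ranks = ["A", "B", "C", "D", "E"]
pred = {"A": 40, "B": 12, "C": 55, "D": 9, "E": 30}
print(update_ranks(ranks, pred, requested_rank=5))  # ['E', 'B', 'A', 'D', 'C']
```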
Local strategy and trustful algorithms
Recall that a global strategy specifies a vector k(t) = (k_1(t), . . . , k_ℓ(t)) at each time t, describing the cache space for each weight class. For a weight class i, consider the local strategy that keeps the pages with ranks between 1 and m = k_i(t) in cache at time t. Note that this is precisely the set B_{i,m}. If k_i is fractional, let us extend the definition of B_{i,k_i} to be the cache state that fully contains the pages with ranks 1, . . . , ⌊k_i⌋, and a (k_i − ⌊k_i⌋) fraction of the page with rank ⌊k_i⌋ + 1. We call such algorithms trustful, defined next.
Next, we show that we can restrict our attention to trustful algorithms without loss of generality. The proof of Lemma 2.3 is in Appendix C and uses how ranks change over time. As we remark there, a minor modification also yields a much simpler proof of a bound of Wei [52] for learning-augmented unweighted paging, whose original proof was based on an analysis of eleven cases.
Thanks to Lemma 2.3, we can assume that the offline algorithm is trustful at the expense of misjudging its cost by an O(1 + η/OPT) factor. Note that given a global strategy that computes k(t) online, the corresponding trustful algorithm can be implemented in the learning-augmented setting; the algorithm we design will also be trustful.
Rank sequences and repeat violations
Restricting ourselves to trustful algorithms and the local strategy above is useful, as we can view the subsequence of page requests for a given weight class as a sequence of rank requests. Consider a single weight class and let p_1, p_2, . . . be the request sequence of pages within that class. Then, together with the sequence of predicted next arrivals, it induces the corresponding rank sequence h_1, h_2, . . ., where h_t is the rank of p_t just before it is requested. A trustful algorithm has a page fault on the arrival of h_t if and only if it has less than h_t pages of that weight class in its cache.
Structure of rank sequences. It turns out that rank sequences have a remarkable structural property. In particular, for the case of perfect predictions (η = 0), the possible rank sequences are exactly characterized by the following "repeat property", which we prove in Appendix C.
Lemma 2.4. (Repeat property) Let η = 0 and let i be a weight class. A rank sequence corresponds to a request sequence of pages of class i if and only if it has the following repeat property: for any h, between any two requests to the same rank h, every rank 2, . . . , h − 1 must be requested at least once.
With imperfect predictions (i.e., η ≠ 0), while there is no clean characterization, the number of times the repeat property is violated can be bounded in terms of η. We say that time t is a repeat violation if rank h_t was also requested at some earlier time (for the same weight class) and some rank from {2, . . . , h_t − 1} has not been requested since then. The following lemma bounds the number of repeat violations.
Consider the last time in { 1 , . . . , 2 − 1} when the identity of ℎ changes (this time exists as 1 is one such time). By Lemma 2.2, the new page ℎ will have the farthest predicted next arrival among the pages 2 , . . . , ℎ . It suffices to show that it remains true until time 2 − 1 that ℎ has a farther predicted next arrival time than ℎ . This could only change if ℎ changes. But as rank ℎ itself is not requested between times 1 and 2 , by Lemma 2.2, ℎ can only change due to a request to a page with larger rank, and the predicted next arrival time of the new page ℎ can only decrease due to this change.
Algorithm
We now describe and analyze an O(log ℓ)-competitive algorithm for learning-augmented weighted paging without prediction errors (Theorem 1.2), and more generally show that it is O(log ℓ + ℓη/OPT)-competitive with imperfect predictions. Combining our algorithm with any O(log k)-competitive weighted paging algorithm via the combination method of Blum and Burch [18] yields Theorem 1.4.
By Lemma 2.3, we can assume that both the online and offline algorithms are trustful, and hence are fully specified by vectors (k_1(t), . . . , k_ℓ(t)) and (k*_1(t), . . . , k*_ℓ(t)) that describe the cache space used by each weight class under the online and offline algorithm, respectively, at each time t. For the online algorithm, we allow k_i(t) to be fractional, so that the cache contains the pages with ranks 1, . . . , ⌊k_i(t)⌋ fully and a (k_i(t) − ⌊k_i(t)⌋) fraction of the page with rank ⌊k_i(t)⌋ + 1. By standard techniques [13], this can be converted into a randomized algorithm with integer k_i(t), losing only a constant factor in the competitive ratio.
By the rank sequence view in Section 2.3, the problem can be restated as follows. At each time t, some rank h in some weight class i is requested. If k_i(t) < h, there is a cache miss and the algorithm incurs cost w_i · min(1, h − k_i(t)). The goal is to design the update of the k_i(t) so as to minimize the total cost. Before giving the details, we first present an overview of the algorithm.
Overview
For convenience, we usually drop the dependence on time from the notation. Consider a request to a page with rank in weight class and suppose our algorithm has a cache miss, i.e., < . Then always increases and all other with ≠ and > 0 decrease. So the main question is at what rate to decrease the for ≠ (since = at all times, the rate of increase = − ≠ ). A key idea is to have a variable for each weight class that is roughly proportional to the eviction rate from weight class . This variable also changes over time depending on the ranks of requested pages within each weight class. We now describe the main intuition behind how the are updated. For each weight class , we maintain a more refined memory of the past in the form of a set ⊆ [ , ∞), consisting of a union of intervals. Roughly speaking, the set can be thought of as an approximate memory of ranks of weight class that were requested relatively recently. So the in Section 1.3 corresponds to (0, ] ∪ . We set = | |, the Lebesgue measure of . For weight classes ≠ , the algorithm increases multiplicatively, so that eventually we will evict faster from weight classes that are only rarely requested. For the same reason, for the requested weight class we would like to decrease (as we increased , we would not want to decrease it rapidly right away when requests arrive next in other classes). However, decreasing could be highly problematic in the case that the offline algorithm has < , because this will slow down our algorithm in decreasing in the future, making it very difficult to catch up with later. To handle this, crucially relying on the repeat property, we decrease if and only if ∈ . Informally, the reason is the following: if the last request of rank was recent, then-assuming no repeat violation-all the ranks in {2, . . . , } were also requested recently. This suggests it may be valuable to hold pages from weight class in cache, but our algorithm only holds < of them. To improve our chance of increasing towards in the future, we should reduce . This simplified overview omits several technical details and in particular how to update the sets . We now describe the algorithm in detail and then give the analysis in Section 4.
Detailed description
As stated above, we denote by ∈ [0, ] the page mass from weight class in the algorithm's cache, meaning that the pages with ranks 1, . . . , of weight class are fully in its cache and the next page is present to an − extent. We maintain a variable and a set ⊂ [ , ∞) for weight class . The , , are all functions of time , but we suppress this dependence for notational convenience. Algorithm 1 contains a summary of our algorithm, that we now explain in detail. Request arrival and continuous view. Consider a request to the page with rank of weight class . If ≤ − 1 (resp. ≥ ), the page is fully missing (resp. fully present) in the algorithm's cache. But for − 1 < < , the page is fractionally present. To obtain a view where each request is either fully present or fully missing, we break the request to rank into infinitesimally small requests to each point ∈ ( − 1, ], by moving a variable continuously along the interval ( − 1, ]. We call the pointer (to the current request) and move it from − 1 to at speed 8, so the duration of this process is 1/8 unit of time. During this process, we update , , and the sets continuously. As changes over time, these quantities also change over time.
While ≤ , the algorithm does nothing (the corresponding infinitesimal request is already present in its cache). While > , we update and , for each class , at rates Let us make a few observations. Each lies between 0 and , and hence increases while all for ≠ decrease. Moreover, we have that = 0, and so the total cache space used by our algorithm remains constant at its initial value of . Second, increases for all ≠ . For = , we decrease if and only if ∈ . Update of the sets . Each set maintained by our algorithm will be a (finite) union of intervals satisfying (i) ⊂ [ , ∞) and (ii) | | = , where | · | denotes the Lebesgue measure. We initialize arbitrarily to satisfy these properties (e.g., as := [ , + )).
The precise update rule for the sets is as follows. First consider ≠ . Here decreases, and we simply add to all the points that moves over. As ⊂ [ , ∞), these points are not in yet, so grows at rate − = / . As is also / , both | | = and ⊂ [ , ∞) remain satisfied. For the requested weight class , modifying is somewhat more involved, consisting of three simultaneous parts (see Figure 2): • We remove points from the left (i.e., minimal points) of at the rate at which is increasing (i.e., we increase the left boundary of the leftmost interval of at rate ). This ensures that the property ⊂ [ , ∞) is maintained.
• We also remove points from the right of at rate 1. We can think of these points as "expiring" from our memory of points that were recently requested.
• While ∈ , we do nothing else. Observe that | | = remains satisfied. • While ∉ , we also add points from ( − 1, ] \ to at rate 2, thereby ensuring again that | | = remains satisfied. This can be achieved as follows, which will also ensure that continues to be a finite union of intervals: Consider the leftmost interval of that overlaps with ( − 1, ]; if no such interval exists, consider the empty interval ( − 1, − 1] instead (or, if ∈ ( − 1, ], consider the empty interval ( , ]). Since ∉ , the right boundary of this interval lies in [ − 1, ]. We can add points from ( − 1, ] \ to at rate 2 by shifting this boundary to the right at rate 2. (Since moves at rate 8, the boundary will never "overtake" .) If the considered interval happens to be the rightmost interval of , then combined with the previous rule of removing points from the right at rate 1 this would mean that effectively the right boundary moves to the right only at rate 1 instead of 2. Note that the removal of points from the left and right of is always possible: If were empty, then ∉ and the addition of points at rate 2 is faster than the removal.
Boundary conditions. The description of the algorithm is almost complete, except that it does not ensure yet that the requested page is fully loaded to its cache, which requires that ≥ 1 (recall that the requested page will receive new rank 1), and that no decreases below 0. Both issues can be handled as follows. Add a dummy page to each weight class and increase the cache size by ℓ, where this extra cache space shall be used to hold the ℓ dummy pages. In the request sequence, replace any request to a page by many repetitions of the request sequence ( , 1 , . . . , ℓ ). As the number of repetitions tends to infinity, our algorithm converges to a state with ≥ 1 for all and ≥ 2 for the weight class containing . Thus, it holds and all dummy pages in its cache. The corresponding algorithm obtained by removing the dummy pages (and reducing the cache size back to ) is a valid paging algorithm for the original request sequence.
Analysis
The goal of this section is to prove the following theorem: Theorem 4.1. The algorithm is O(log ℓ + ℓη/OPT)-competitive for learning-augmented weighted paging, where η denotes the prediction error.
We employ a potential function based analysis. Our potential function Φ will consist of three potentials E, R, and S and is defined as follows. Here describes the state of a trustful offline algorithm (whose cost is within a factor (1 + /OPT) of the optimal offline algorithm by Lemma 2.3). We use ( − ) + := max{0, − }. The set , for a weight class and ≥ 0, consists of all points that were visited by the pointer since the last request at (details in Section 4.4). In the definition of the repeat and scatter potentials, | · | denotes the Lebesgue measure.
The potential E bears similarities with entropy/Bregman divergence type potentials used in other contexts [12,23,24,25,28]. The potential R is carefully designed to exploit the repeat property. In particular, just before the pointer reaches , the set contains [1, ) provided there is no repeat violation, and it becomes empty immediately afterwards. The scatter potential is mostly for technical reasons to handle that the intervals in may be non-contiguous.
To show that our algorithm has the desired competitive ratio, it suffices to show that at all times, where On, Off, and are appropriate continuous-time notions of online cost, offline cost, and the prediction error and denotes the derivative of with respect to time. We will actually take On to be a quantity we call online pseudo-cost that approximates the true online cost, as discussed in Section 4.2. For each request, we will consider the step where the offline algorithm changes its vector ( 1 , . . . , ℓ ) separately from the remaining events.
Offline cost
We will charge the offline algorithm for both the weights of pages it fetches as well as pages it evicts. 7 We may assume that for each request, the (trustful) offline algorithm proceeds in two steps: First it changes and updates its cache content with respect to the old ranks. Then it updates its cache content to reflect the new ranks of weight class . This means that it may fetch a page of weight class in the first step that it evicts again in the second step in order to load the requested page (recall Lemma 2.2/ Figure 1). Since both of these pages have the same weight, this overestimates the offline cost by only a constant factor.
Change of y. When the offline algorithm changes some at rate , it incurs cost | |. We claim that the term corresponding to weight class in each of the potentials changes at rate at most (log ℓ) | |. First, for the entropic potential, it is not hard to see that E = 1 + log 1+ = (log ℓ) . Therefore, E ≤ (log ℓ) | |.
Next, for the scatter potential, as [ , ∞) and ( − ) + can change at rate at most | |, we also have that S ≤ 2 | |. It remains to bound R . Fix a ∈ and consider the term in the integrand of R corresponding to . If increases, then ( , ] ∩ can only decrease, in which case R only decreases. If decreases, then ( , ] ∩ increases at rate at most | |. Ignoring the increase in the denominator (which only decreases R), the increase in the numerator leads to an increase of at most | | + ( + |( , ] ∩ |) ≤ | | .
To summarize, when changes, (4.3) holds as we can charge the change in the overall potential to (log ℓ) times the offline cost.
Change of ranks. We can assume that the offline vector is integral. If < , then offline pays to fetch the page. As the pointer moves in [ − 1, ] at rate 8, equivalently, we can view this as charging the offline algorithm continuously at rate { < } 8 during the movement of . This view will be useful for analyzing the online algorithm in a continuous way, as we do next.
Online pseudo-cost
Instead of working with the actual cost incurred by the online algorithm, it will be convenient to work with a simpler online pseudo-cost. This is defined as the quantity that is initially 0 and grows at rate 1/ at all times during which the online algorithm changes . Recall that is itself changing over time. Formally, we define the online pseudo-cost as The following lemma shows that the online pseudo-cost is a good proxy for the true online cost. Proof. The online cost is the weighted page mass loaded to its cache. The two events that lead to page mass being loaded to the cache are either an increase of or a change of ranks. By definition of ranks (or alternatively, Lemma 2.2/ Figure 1), a request to rank does not affect the (unordered) set of pages with ranks 1, 2, . . . , for any ≥ , and for < it affects this set only by adding and removing one page. Thus, if rank of class is requested, the change of ranks incurs cost min{1, − } + . We can overestimate the cost due to increasing by viewing the change of as an increase at rate 1 separate from a decrease at rate / . In this view, the online cost for increasing is times the duration of the increase. Since the pointer moves at speed 8 across ( − 1, ], and changes to occur only while > , the duration of the update of for this request is precisely 1 8 min{1, − } + (where denotes the value of this variable before the request arrives). Therefore, the online cost for increasing is 1 8 min{1, − } + , and hence the total online cost is 9 times the cost for increasing (in our overestimating view). Over the course of the algorithm, the overall increase of any equals the decrease, up to an additive constant. Thus, instead of charging for increasing and the change of ranks, we can charge only for decreasing each (including = ) at rate / (the associated cost being times this quantity). This underestimates the true online cost by a factor of at most 9, up to an additive constant. The cost charged in this way increases at rate which is twice the rate of increase of the pseudo-cost, and the result follows.
By Lemma 4.1 it now suffices (up to (1) factors) to assume that On = 1 · { ≠0} . Consequently, recalling that the offline algorithm suffers cost at rate { < } 8 due to the change of ranks, in order to prove the desired competitive ratio it suffices to show that We now upper bound the rate of change of each of the potentials. Note that the first term − 1 2 can be charged against the online pseudo-cost and the second term can be charged to the offline cost resulting from the change of ranks. The third term is negative, which only helps. However, the last two terms are problematic. We will handle them using the repeat potential and the scatter potential.
Proof. [of Lemma 4.2] Define
:= ( + − ) + and := ( − ) + = ( − ) + , where the equality is due to ℓ =1 = ℓ =1 = . We note that where the first inequality follows as = = ( + − ) ≤ and ≤ as ≥ 0 for all , and the second inequality follows from the triangle inequality. Letting We first bound the change of Ψ. We first consider the case < , so that the summand for = is 0 and does not contribute to Ψ . For ≠ , the sum + is unchanged by (3.1) where the first inequality uses ≤ , and the second inequality uses ≥ 0 and + − ≤ . We will actually need the following slightly more complicated bound, which can be obtained by combining (4.5) and (4.6): Thus, the contributions considered so far only lead to a decrease of Ψ. However if ≥ , then Ψ could also suffer an increase resulting from increasing at rate 1 and, if ∈ , decreasing at rate 2. In this case, the change of Ψ can exceed the preceding bound by at most In summary, while is changing, Ψ is changing at rate where the term { < } comes from the fact that the extra increase (4.7) is incurred only if ≤ and is changing only if < . We now bound the change of the part of E not involving Ψ, i.e., of the quantity E −5Ψ = [ + 2( − )] + . Using the update rules for and , and cancelling some common terms, the rate of change can be written as Thus, The lemma follows by combining (4.8) and (4.10) and noting that which can be seen by considering separately the cases ≤ 4 and > 4 .
4.4
The repeat potential R and its rate of change The purpose of the repeat potential is to cancel the term from our bound on E in case the current request is not a repeat violation. We will crucially use that if the current request is not a repeat violation, then since the last request to rank of weight class , every rank less than has been requested at least once. For a weight class and ∈ R + , denote by the set of points across which the pointer has moved during requests of weight class after the time when the pointer was last located at for a request to weight class . In other words, after any request is the set ( , ] ∪ [ − 1, ], where ranges over all ranks of weight class that have been requested (so far) after the last request to rank of weight class . If the pointer was never at during a request to weight class , define := R + as the entire positive real line. Recall that the repeat potential is defined as and | · | denotes the Lebesgue measure. When online moves, R will change due to the changes in the values of , , the sets and . We will consider the effect of each of these changes separately while analyzing R . We also consider ≠ and = separately. The case ≠ is quite simple, and we describe it next.
Change of R i for i ≠ r: Since the fraction in the definition of R is non-decreasing in , and decreases for ≠ , the change of does not cause any increase of R . Similarly, R is non-increasing in and increases for ≠ , so also the change of does not cause any increase of R . Moreover, as the request is to weight class ≠ , the sets do not change. However, R could increase due to points being added to (recall that as decreases we add the points that moves over to ).
The integrand corresponding to any ∈ can be bounded by As new points are added to at rate , the change of R is bounded by Change of R r : For = , all the relevant quantities change, and we consider them separately.
Effect of changing : Fix a point ∈ . The increase of can increase the integrand corresponding to at most at rate as the numerator rises at rate at most ≤ 1, and the denominator can only rise and reduce the integrand (which we ignore). Note that this increase occurs only if < < , (as does not increase if ≤ , and the interval ( , ] is empty if ≥ ). Thus, as | | = , the contribution of the change of to R is at most Effect of changing : For a fixed , the increase in the integrand due to the change in is where the first inequality uses that ≤ 0 if and only if ∈ and in that case − ≤ 2.
As this expression does not depend on and as | | = , the change of contributes to R Effect of changing : Removing points from can only decrease R . Any point added to comes from ( − 1, ], and since these points have only just been passed by the pointer, = ( , ] for such points; as any such point added to is also in [ , ∞), we have |( , ] ∩ | = 0 for any added to . Thus, the change of the set does not increase R any further. Effect of changing : Finally and most crucially, we consider the change of R resulting from the change of the sets . If the current request is not a repeat violation, then just before the pointer reaches position during the current request, must contain the interval (1, ). Once the pointer reaches , the set becomes empty. In particular, using that ≥ 1 and given that is changing only while < , the quantity |( , ] ∩ | then changes from ( − ) + to 0. If ∈ , this contributes to a decrease of R . Since the pointer moves at speed 8, the contribution of the change of the set to R is then However, if there is a repeat violation, then in the worst case may already be empty so this would not yield any contribution to R . In this case, by Lemma 2.5, increases by 1 due to this request. In continuous time, as the pointer moves for 1/8 unit of time, this corresponds to increasing at rate = 8. We claim that regardless of a repeat violation or not, the contribution of the change of to R is at most In case of no repeat violation this is just our statement above. If there is a repeat violation then since ≥ 0, = 1/ℓ and = 8 this quantity is ≥ 0, and since becoming empty cannot increase R , this is a valid upper bound.
Note that R can also increase as points are added to some . The only point added to any is the current pointer position . The intersection ( , ] ∩ can be increased by this only if ≤ (when is not changing). During those times, the integrand increases at most at rate 8/ and only if ∈ ( , ]. This can cause R to increase at rate at most { < } 8.
Overall, while is changing, R changes at rate and while is moving but is not changing, R changes at rate Combining this with our bound on the change of R for ≠ , we obtain the bounds on the change of R that are summarized in the following lemma.
The { < } term can be charged to the offline cost, and then note crucially that the third and fourth terms in R (when is changing) are the same but with opposite signs (and up to a factor 5/2) as the third and fourth term in our bound on E .
Proof. For ≠ , the term ( − ) + is non-increasing. The term | ∩ [ , ∞)| can increase for ≠ only if > (as points are added to at ). Thus, any possible increase of | ∩ [ , ∞)| is cancelled by a decrease of ( − ) + . So S is bounded by the rate of change for = .
A possible increase of the term ( − ) + would be cancelled by a decrease of | ∩ [ , ∞)| caused by removing points from the left of (as ⊂ [ , ∞)). The removal of points from the right of at rate 1 contributes a decrease at rate if < sup . Since + ≤ sup , this contributes at most − { < + } to the change of S. Any other change of S could only be due to adding points from ( − 1, ] to at rate 2. This can increase S only if < and hence contributes { < } 2 . Together, this gives the claimed bound.
4.6
Putting it all together Consider our overall potential Φ = 2E + 5R + 4S. Using the bounds from Lemmas 4.2-4.4, we see that while is changing, this potential is changing at rate where in the first step the term { < } (log ℓ) absorbs any other terms of the form { < } (1) . The competitive ratio. By the bounds in Section 4.1, when is changing, the increase in potential is (log ℓ) times the offline cost, and we can charge additional cost at rate < 8 to the offline algorithm while the pointer is moving. By Lemma 4.1, the online algorithm suffers pseudo-cost at rate 1/ while it is moving. Thus, inequality (4.3) follows from the bound on the increase in potential above. Integrating over time, we get where the second inequality uses Lemma 2.3 and ℓ, , (1) denotes any constants that may depend on ℓ, , the weights, but are independent of the input sequence. We conclude that our algorithm is (log ℓ)-competitive in case of perfect predictions and (log ℓ + ℓ /OPT)-competitive in general.
A A simple analysis of Belady's FiF algorithm
We present a simple potential function argument for Belady's FiF algorithm for unweighted paging, that evicts the page whose next request is the farthest in the future. We first set up some notation. At any time , let ( ) be the set of pages in the cache of FiF and * ( ) be those in the cache of some fixed offline optimum solution. At any time , let us order the pages according to their next request (this order only depends on the request sequence and not on ( ) or * ( )). This order evolves as follows. At time , the page at position 1 is requested. It is then reinserted in some position and the pages in positions 2, . . . , previously move one position forward to 1, . . . , − 1, while the pages in positions + 1, . . . , stay unchanged.
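For concreteness, a direct (quadratic-time) implementation of the farthest-in-future rule just described; this is a standard rendering of the eviction rule, not code from the paper, and it is independent of the potential-function argument below.

```python
def belady_faults(requests, k):
    """Serve `requests` (a list of hashable page ids) with a cache of size k
    using Belady's farthest-in-future eviction rule; return the fault count."""
    cache = set()
    faults = 0
    for t, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) >= k:
            # Evict the cached page whose next request is farthest in the
            # future; pages never requested again are preferred victims.
            def next_use(q):
                for s in range(t + 1, len(requests)):
                    if requests[s] == q:
                        return s
                return float('inf')
            victim = max(cache, key=next_use)
            cache.remove(victim)
        cache.add(page)
    return faults
```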
Let Proof. The optimality of FiF now follows directly from Lemma A.1 as Φ( ) ≥ 0 for all , and
B Deterministic algorithm
We now give a natural extension of the FiF algorithm to the weighted case that yields an ℓ-competitive deterministic algorithm for learning-augmented weighted paging with perfect predictions (Theorem 1.1), and an ℓ + 2ℓ /OPTcompetitive deterministic algorithm in the case of imperfect predictions. Combined with any -competitive online algorithm for weighted paging [27,54,14] using the method of [32,6] to deterministically combine several online algorithms, this yields Theorem 1.3. Let pos ( , ) denote the position of page among all pages of weight when the pages are sorted by the time of their predicted next request, just before the -th request.
Algorithm. For each weight class , we maintain a water-level ( ) that is initialized to . At any time when a page eviction is necessary, the page to evict from cache is decided as follows.
Let ( ) ⊆ [ℓ] be the set of weight classes from which the algorithm holds at least one page in its cache. Let = arg min ∈ ( ) ( ) be the weight class among them with the least level (ties broken arbitrarily). Then we evict the page from weight class with the highest position and set the levels as In other words, the level of the class from which the page is evicted is reset to , and the levels for all other classes with at least one page in the cache are decreased by ( ). Potential function analysis. For any weight class , let ( ) and * ( ) be the set of pages of weight maintained in cache by the online algorithm and some optimal offline algorithm, respectively, just before serving the request for time . Let ( , ) = |{ ∈ ( ) | pos ( , ) ≥ }| denote the number of pages of weight in the cache of the online algorithm whose position is at least . Similarly, let * ( , ) = |{ ∈ * ( ) | pos ( , ) ≥ }|.
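To make the eviction rule concrete, here is a rough sketch in Python. The data layout (dicts keyed by weight class), the name predicted_position, and the tie-breaking are our own choices, and the surrounding bookkeeping (fetching the requested page, maintaining the predicted order) is omitted.

```python
def evict_one(cache_by_class, levels, weights, predicted_position):
    """One application of the water-level eviction rule sketched above.
    cache_by_class: dict class -> set of cached pages of that class.
    levels: dict class -> current water-level (initialized to weights[c]).
    predicted_position: callable page -> position in the predicted-next-request order.
    Returns the (class, page) that is evicted; levels are updated in place."""
    nonempty = [c for c in cache_by_class if cache_by_class[c]]
    j = min(nonempty, key=lambda c: levels[c])        # class with the least level
    delta = levels[j]
    victim = max(cache_by_class[j], key=predicted_position)  # highest predicted position
    cache_by_class[j].remove(victim)
    for c in nonempty:
        if c == j:
            levels[c] = weights[c]    # reset the level of the class we evicted from
        else:
            levels[c] -= delta        # drain every other class holding cached pages
    return j, victim
```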
Proof. It suffices to show for each time step that where ΔOn and ΔOPT denote the cost incurred in this time step by the online algorithm and the optimum offline algorithm respectively, ΔΦ is the associated change in potential, and Δ is the increase of the prediction error . Consider any fixed time step where page is requested, and let denote the weight class of . Note that either the prediction is correct and pos ( , ) = 1 or otherwise Δ = . For ease of analysis, we consider the events at time in three stages and will show that (B.3) holds for each of them: (1) First the offline algorithm serves the page request, then (2) the online algorithm serves the request and a possible increase of is charged, and finally (3) the positions of pages in weight class are updated.
If
is already in the offline cache, then (B.3) holds trivially. Otherwise, let be the weight class from which the offline algorithm evicts a page. Then can increase by at most 1 and no other can increase, so ΔΦ ≤ ℓ . As ΔOPT = , (B.3) holds.
2. Now consider the actions of the online algorithm. If is already in the online cache, then (B.3) holds trivially. So suppose the requested page is not in the cache, and let = arg min ∈ ( ) ( ) be the class from which the online algorithm evicts a page. We consider the following three substeps: (a) level of each class ∈ ( ) is decreased by ( ), (b) a page is evicted from class and is reset from 0 to , and finally (c) the requested page is fetched into cache. Since ΔOn = , to get (B.3) it suffices to show that ΔΦ ≤ ℓΔ in steps (a) and (c), and + ΔΦ ≤ 0 in step (b).
(a) If there exists a class ∈ ( ) with ( ) ≥ 1: The decrease of by ( ) contributes −ℓ ( ) to ΔΦ due to the first term in Φ. The second term can increase by at most ( ) for each class, so overall ΔΦ ≤ −ℓ ( ) + ℓ ( ) = 0. Otherwise, we have ( ) = 0 for each ∈ ( ), and clearly ( ) = 0 holds also for ∉ ( ). In particular, the online and offline algorithm have the same number of pages in cache from each class. Since the offline cache contains page , this means that also the online cache must contain some page from class , so ∈ ( ). Then ΔΦ ≤ ℓ ( ) ≤ ℓ ( ) ≤ ℓ . To conclude this step, it suffices to show that Δ = . Suppose not, then pos ( , ) = 1. But then the fact that is in the offline but not the online cache and that they have the same number of pages from class in their cache would imply that ( ) ≥ (2, ) − * (2, ) = 1, a contradiction.
(b) We claim that ΔΦ = − . As is reset from 0 to , the second term in Φ contributes − to ΔΦ. If ( ) ≥ 1, then decreases by 1 upon the eviction because the evicted page has maximum position, and as changes from 0 to , stays unchanged. If ( ) = 0, does not change anyways.
(c) If pos ( , ) = 1, then fetching to the online cache does not change (as was already in the offline cache) and therefore ΔΦ = 0. Otherwise, we have Δ = and increases by at most 1, so ΔΦ ≤ ℓ = ℓΔ . Proof. We use induction on time . The base case for = 0 holds by assumption. For the induction step, let be the page requested at time + 1, and let denote the smallest index such that ∈ . 8 By the inductive hypothesis, as ⊂ +1 for all , we have that lies in for all ≥ , and none of these caches incur a page fault. So +1 = and the property +1 ⊂ +1 +1 for all ≥ is maintained. For < , each cache evicts its page with the farthest predicted re-arrival time and fetches . Let us consider this in two steps. First, adding to each for < maintains the property that +1 ⊂ +1 +1 for all , since ∈ +1 . Let us now consider the eviction step. Fix some < , and suppose +1 evicts . If ∈ , then as ⊂ +1 by the inductive hypothesis, is also the page with the farthest predicted re-arrival time in and hence evicted from . Otherwise ∉ and some other page is evicted from . In either case, +1 ⊂ +1 +1 is maintained for all < . Proof. We will use a potential function for analysis. For any weight class at any time , we order all pages of class in increasing order of the predicted arrival time of their next requests (breaking ties in the same way as BelPred). We call the position of pages in this ordering the predicted position. Note that this predicted position of pages within a weight class is very different from the rank of pages defined in Section 2.1. We observe the following property: at each time step , when a page from some weight class is requested, either has predicted position 1 in its class or the request contributes to the prediction error .
For any integer , let ( ) denote the total number of pages in the cache of algorithm with predicted position at least . Let * ( ) denote the respective quantity for algorithm * . We note that these quantities vary with time , but we suppress the dependence in the notation for brevity. Let Φ := max * ( ) − ( ) and consider the potential function We consider the setting where algorithms pay cost whenever they evict or fetch a page of weight class . As this doubles the cost of any algorithm compared to the original setting where algorithms only pay for evictions, up to an additive constant, it suffices to show that for each request, Δcost * + ΔΦ ≤ 3Δcost + 2Δ , (C. 4) where Δcost * and Δcost are the costs incurred for this request, ΔΦ is the change in potential and Δ is the increase of .
For any request to some page from weight class , we break the analysis into three steps: (1) First serves the request and * updates its cache accordingly with respect to the old ranks of each weight class. (2) Then * updates its cache content to reflect the new ranks of weight class . (3) Finally page might move to a later position in the predicted order for class . In each step, we will show that inequality (C.4) is satisfied.
In step (1), suppose evicts page from some weight class . In this case, Δcost = + and Δcost * ≤ + . Moreover, both Φ and Φ increase by at most 1 and hence ΔΦ ≤ 2( + ) and thus the inequality is maintained.
In step (2), after page is requested the ranks of pages of weight class change according to Lemma 2.2 (see Figure 1). In particular, moves to rank 1 and, if is not in cache yet, is fetched and the page in cache from class with the highest predicted position gets evicted. Let be the evicted page. When * fetches page and evicts , it incurs a cost of Δcost * = 2 . We analyze the change in potential ΔΦ and error Δ due to fetching of page and evicting separately.
Let be such that Φ = * ( ) − ( ). We observe that the predicted position of page must be at least (since otherwise, we would have * ( ) = 0 and hence Φ ≤ 0, but Φ ≥ * (1) − (1) = 0). Thus, evicting decreases Φ by 1 and we have ΔΦ = −2 and inequality (C.4) is maintained. To account for the change in potential due to fetching page , we consider separately the cases that has (old) predicted position 1 in class or not. In the former case, we have ≥ 2 and hence the potential does not change. Otherwise, Φ increases by at most 1, so ΔΦ ≤ 2 . However, in this case, the prediction error also increments and we have Δ = and inequality (C.4) is maintained.
Finally in step (3), when page is re-inserted in some position of the predicted order, the potential is not affected since now is present in the cache of both algorithms.
Remark C.1. A slight modification of the proof of this lemma yields a much simpler proof of the result from [52] that in unweighted paging, BelPred( ) has competitive ratio at most 1 + /OPT: In this case, we have a single weight class and 1 ( ) = remains fixed. Therefore, * = BelPred( ) does nothing in step (1), and we can avoid losing a factor 3 by considering the setting where algorithms are charged only for evictions (not for fetching) and omitting the factor 2 in the definition of Φ. The proof of this result in [52] uses a case analysis involving eleven cases.
Lemma C.3. (Repeat property, repeated Lemma 2.4) Let η = 0 and let i be a weight class. A rank sequence corresponds to a request sequence of pages of class i if and only if it has the following repeat property: for any ℎ, between any two requests to the same rank ℎ, every rank 2, . . . , ℎ − 1 must be requested at least once.
Proof. Lemma 2.5 shows that if η = 0, any rank sequence must satisfy the repeat property.
Conversely, we show that for any sequence ℎ 1 , ℎ 2 , . . . satisfying the repeat property, there exists a corresponding paging request sequence 1 , 2 , . . . . Let := max ℎ be the number of distinct pages. We construct the request sequence online by specifying, whenever a page is requested, the time when the same page will be requested next.
We will maintain the invariant that for each and integer = 1, . . . , , after the request at time , the page with rank has next-request time given as follows.
The invariant ensures that the next request will be to the page in position ℎ +1 , as required. It also implies that different pages have different next-request times. It remains to show that we can maintain this invariant over time.
We can satisfy the invariant initially by defining the first-request times of the pages according to the condition of the invariant for = 0. Suppose the invariant holds for some . By the invariant, the page +1 requested at time + 1 is the one with rank ℎ +1 . We define the next request time to page +1 to be + 2 if ℎ +2 = 1 and inf { > + 2 : ℎ = ℎ +2 } if ℎ +2 ≥ 2. Since +1 will receive new rank 1, we see that the condition for = 1 of the invariant is satisfied for the next time step. If ℎ +1 = 1, then the ranks remain unchanged and the invariant continues to be satisfied. So suppose ℎ +1 ≥ 2. By assumption on the sequence ℎ 1 , ℎ 2 , . . . , for each = 2, . . . , ℎ +1 − 1 we have with the inequality being strict unless both sides are ∞. Thus, by the invariant, the page previously in position 1 is the one with the farthest next-request time among the pages in positions 1, 2, . . . , ℎ +1 − 1. so the new ranks are the same as the old ones except that the pages with ranks 1 and ℎ +1 swap, and it is directly verified that the invariant is again satisfied at the next time step. | v2 |
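The repeat property itself is easy to check mechanically; a small sketch (assuming the rank sequence is given as a plain list of positive integers):

```python
def has_repeat_property(ranks):
    """Return True iff between any two requests to the same rank h,
    every rank 2, ..., h-1 is requested at least once."""
    last_seen = {}                      # rank -> index of its most recent request
    for t, h in enumerate(ranks):
        if h in last_seen:
            seen_between = set(ranks[last_seen[h] + 1:t])
            if not all(r in seen_between for r in range(2, h)):
                return False
        last_seen[h] = t
    return True
```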
The article presents the results of experimental substantiation of the method for improving the shielding properties of composite coatings based on powdered alumina (electrocorundum, alum earth), which consists in modifying the composition of such coatings by adding to it powdered iron oxide. This experimental substantiation consisted in the development of the technique for obtaining composite coatings based on powdered alumina and iron oxide, the manufacture of the experimental samples using the developed technique, measurements of electromagnetic radiation reflection and transmission coefficients values in the frequency range 0.7...17.0 GHz of the manufactured samples; implementation of the comparative analysis of the measured values with the similar values typical for the composite coatings filled with powdered alumina oxides, and composite coatings with the fillers such as powdered iron oxide. The obtained results revealed that by adding powdered iron oxide to the composite coatings based on powdered alumina oxides, it is possible to reduce by 1.0...8.0 dB their electromagnetic radiation transmission coefficient values in the frequency range 0.7...17.0 GHz. In addition, we found that the implementation of the proposed method allows one to decrease by 2.0...20.0 dB the electromagnetic radiation reflection coefficient values in the specified frequency range of the considered composite coatings, if such are applied to metal substrates. We propose to use the composite coatings, obtained on the base of the substantiated method, in order to ensure the electromagnetic compatibility of radio-electronic equipment.
Introduction
The paper [1] presents the results of the study targeted to experimental substantiation of the method for modifying the composition and thereby improving the shielding and radioabsorbing properties of non-combustible composite coatings based on powdered alumina (electrocorundum, alum earth), which consists in adding to such coatings the components characterized by conductive properties, in particular, in fixing fragments of insulated metallized polyethylene film on the surface of such coatings. The results of such study has shown that using the proposed method enables to reduce by 1.0…6.0 dB the electromagnetic radiation (EMR) reflection coefficient values in the frequency ranges 0.9…4.0 GHz and 11.0…17,0 GHz of the composite coatings based on powdered alumina and applied to metal substrates.
The study considered in this paper is developed from the study presented in paper [1]. Its aim was to substantiate experimentally the method for improving the shielding and radioabsorbing properties of composite coatings based on powdered alumina (electrocorundum, alum earth), which consists in modifying the composition of such coatings by adding the powdered materials ДОКЛАДЫ БГУИР D OKLADY BGUIR Т. 19, № 3 (2021) V. 19, NO. 3 (2021) 105 characterized by magnetic properties (in privacy, powdered iron oxide [2,3]). The choice of powdered iron oxide as the material added to the composite coatings based on powdered alumina was due to its lower cost compared to analogues. The study says the powdered iron oxide is more advantageous in comparison with other materials characterized by magnetic properties for being a natural material [4]. To achieve this aim, the following tasks have been solved: -the technique for the manufacture of composite coatings based on powdered alumina and iron oxide has been developed; -experimental samples on the basis of composite coating made in accordance with the developed technique, as well as experimental samples have been formed on the basis of the composite coatings, one filled with powdered alumina and another one with powdered iron oxide; -measurements of EMR reflection and transmission coefficients values of the formed experimental samples have been carried out; -the comparative analysis of EMR reflection and transmission characteristics, obtained on the basis of the measurements results, has been implemented; -recommendations for the practical use of the obtained research results have been made.
Experimental method
The developed technique for the manufacture of composite coatings based on powdered alumina and iron oxides includes the following stages.
Stage 1. Establishing the optimal volumetric ratio of powdered alumina, powdered iron oxide and a binder (water-based paint, aqueous alkaline sodium silicate solution or gypsum solution) in the manufactured composite coating in accordance with the method presented in [5], and taking into account that the content of powdered alumina in the composition of such coating should exceed the content of powdered iron oxide.
It was found that the optimal volumetric ratio of these three components is 3.0:2.0:5.0 parts. Stage 2. Mixing the powdered alumina with the powdered iron oxide in the established optimal volumetric ratio.
Stage 3. Adding the binder to the mixture of powdered alumina and iron oxide. Stage 4. Uniform distribution of particles of the mixture of powdered alumina and iron oxide over the volume of the binder added to it using a laboratory mixer.
Stage 5. Deposition with a spatula of a layer of the resulting mixture on a substrate surface. Stage 6. Drying a layer of the mixture applied to the substrate surface under standard conditions [6].
Stage 7. Controlling the layer thickness of the mixture using an electronic micrometer. Stage 8. If necessary, increase the thickness of the mixture layer by repeating the stages 5-7.
In accordance with the developed technique, the following experimental samples have been formed: -the composite coating based on powdered electrocorundum, iron oxide and an aqueous alkaline solution of sodium silicate, applied to a cellulose substrate with a layer 3.0 mm thick (reference designation -sample 1); -the composite coating based on powdered electrocorundum, iron oxide and an aqueous alkaline solution of sodium silicate, applied to a metal substrate with a layer 3.0 mm thick (reference designation -sample 2).
In addition, following the technique similar to the developed one, the following experimental samples have been formed: -the composite coating based on powdered electrocorundum and an aqueous alkaline sodium silicate solution, applied to a cellulose substrate with a layer 3.0 mm thick (reference designation -sample 3); -the composite coating based on powdered electrocorundum and an aqueous alkaline sodium silicate solution, applied to a metal substrate with a layer 3.0 mm thick (reference designation -sample 4); -the composite coating based on powdered iron oxide and an aqueous alkaline sodium silicate solution, applied to a cellulose substrate with a layer 3.0 mm thick (reference designation -sample 5); -the composite coating based on powdered iron oxide and an aqueous alkaline sodium silicate solution applied to a metal substrate with a layer 3.0 mm thick (reference designation -sample 6). NO 3 (2021) Measurements of EMR reflection and transmission coefficients values of the formed experimental samples have been carried out in the frequency range 0.7…17.0 GHz using a panoramic meter of reflection and transmission coefficients SNA 0.01-18 in accordance with the method presented in [7, p. 47].
Based on the results of such measurements, EMR reflection and transmission characteristics in the frequency range 0.7…17.0 GHz were obtained. A comparative analysis of the obtained characteristics has been carried out in the order presented in Table 1.
Table 1
Compared characteristics | Purpose of comparison
EMR reflection and transmission characteristics of samples 1 and 3 | Experimental substantiation of the prospects of using the proposed method to improve the EMR shielding properties of composite coatings based on powdered alumina
EMR reflection and transmission characteristics of samples 1 and 5 | Experimental substantiation of the capability of obtaining, on the basis of the proposed method, composite coatings with EMR shielding properties not worse than, or exceeding, those characteristic of the coatings filled with powdered iron oxide
EMR reflection characteristics of samples 2 and 4 | Experimental substantiation of the prospects of using the proposed method to improve the radioabsorbing properties of composite coatings based on powdered alumina
EMR reflection characteristics of samples 2 and 6 | Experimental substantiation of the capability of obtaining, on the basis of the proposed method, composite coatings with radioabsorbing properties not worse than, or exceeding, those characteristic of the coatings filled with powdered iron oxide
Results and their discussion
The frequency dependencies of EMR reflection and transmission coefficients in the range of 0.7…17.0 GHz of manufactured samples 1, 3 and 5 are presented in Fig. 1 and 2. Based on the results of comparing the characteristics shown in Fig. 1 and 2, which was performed in the order presented in Table 1, the following has been established.
1. The addition of powdered iron oxide to the composition of the composite coating filled with powdered alumina made it possible to reduce the EMR transmission coefficient values of such a coating by 1.0…8.0 dB in the frequency range 0.7…17.0 GHz, which is due to an increase of 1.0…15.0 dB in the EMR reflection coefficient values [8]. The increase in the EMR reflection coefficient values of a composite coating based on powdered alumina as a result of adding powdered iron oxide to its composition is associated with an increase in its wave impedance [9, p. 142], since the relative magnetic permeability of powdered iron oxide is greater than 1 [2,3]. (The standard relation between dB-valued coefficients and the corresponding power fractions is sketched at the end of this section.)
2. In the frequency ranges 0.7…14.0 GHz and 16.0…16.5 GHz, EMR reflection coefficient values of the composite coating filled with a mixture of powdered alumina and iron oxide, exceed by 1.0…8.0 dB the values of EMR reflection coefficient of the composite coating filled with powdered iron oxide. It could be due to a combination of the following phenomena: -the energy of electromagnetic waves scattered by particles of the mixture of powdered alumina and iron oxide exceeds the energy of electromagnetic waves scattered by particles of powdered iron oxide, since the size of particles of powdered alumina is larger than the size of particles of powdered iron oxide [10, p. 123]; -interaction of electromagnetic waves, scattered by particles of the mixture of powdered alumina and iron oxide and characterized by a phase similar to the phase of an electromagnetic wave reflected from the "air -composite coating" interface, causes an increase in the amplitude of this wave.
In the frequency ranges 14.0…16.0 GHz and 16.5…17.0 GHz, EMR reflection coefficient values of the composite coating filled with a mixture of powdered alumina and iron oxide, are lower by 1.0…8.0 dB than EMR reflection coefficient values of the composite coating filled with powdered iron oxide. It could be due to the fact that the electromagnetic waves of the specified frequency ranges, scattered by the particles of the mixture of powdered alumina and iron oxide, are characterized by a phase different from the phase of the electromagnetic wave reflected from "air -composite coating" interface. In this regard, as a result of the interaction of the reflected wave with the scattered waves, its amplitude decreases.
EMR transmission coefficient values in the frequency range 0.7…10.0 GHz of the composite coating filled with a mixture of powdered alumina and iron oxide, are practically similar to the values of a similar parameter of the composite coating filled with powdered iron oxide. This feature may be due to a combination of the following phenomena: -EMR transmission coefficient in the frequency range 0.7…10.0 GHz of the considered composite coatings is determined by the amplitude of the electromagnetic wave reflected from "air -composite coating" interface, the amplitudes of the electromagnetic waves scattered by the particles of the fillers of these coatings, as well as the energy losses of the EMR as a result of its propagation in the coating; -EMR energy losses associated with its propagation in the composite coating filled with a mixture of powdered alumina and iron oxide are less then EMR energy losses associated with its propagation in the composite coating filled with powdered iron oxide, due to the fact that the value of the relative magnetic permeability of the latter is higher than that of the specified mixture; -the difference between the magnitude of EMR energy losses associated with its propagation in the composite coating filled with powdered iron oxide, and between the magnitude of EMR energy losses associated with its propagation in the composite coating filled with a mixture of powdered alumina and iron oxide, is practically similar with the difference between the magnitude of the energy of electromagnetic waves scattered by the filler particles of the former and the latter coatings.
In the frequency range of 10.0…17.0 GHz, EMR transmission coefficient values of the composite coating filled with a mixture of powdered alumina and iron oxide, exceed, on average, by 3.0 dB EMR transmission coefficient values of the composite coating filled with powdered iron oxide. This can be associated with an increase in the difference between the amount of EMR energy losses associated with its propagation in the latter coating and the magnitude of the EMP energy losses associated with its propagation in the former coating.
The frequency dependencies of EMR reflection coefficient in the range 0.7…17.0 GHz of manufactured samples 2, 4 and 6 are presented in Fig. 3. Based on the results of comparison of the characteristics shown in Fig. 3, which was performed in the order presented in Table 1, the following has been established.
1. EMR reflection coefficient values in the frequency ranges 0.7…1.5 GHz, 1.52…2.0 GHz of the composite coating filled with powdered alumina and deposited on a metal substrate are practically similar to the values of the similar parameter of the composite coating a mixture of powdered alumina and iron oxide or powdered iron oxide and deposited on a metal substrate. This can be attributed to the fact that in the specified frequency range EMR reflection coefficient is determined to a greater extent by the amplitude of electromagnetic waves reflected from "composite coating -metal substrate" interface than by the amplitude of electromagnetic waves reflected from "air -composite coating" interface.
2. The addition of powdered iron oxide to the composite coating filled with powdered alumina enables to reduce by 2.0…20.0 dB EMR reflection coefficient values at a frequency of 1.5 GHz and in the frequency ranges 2.0…5.0 GHz, 11.0…17.0 GHz (provided that such coating is deposited on a metal substrate). The specified effect recorded at a frequency 1.51 GHz and in the frequency range 2.0…5.0 GHz, may be due to the phenomenon of natural ferromagnetic resonance associated with the magnetic properties of powdered iron oxide. In turn, the effect recorded in the frequency range 11.0…17.0 GHz may arise from the phenomenon of interaction in antiphase between electromagnetic waves reflected from "air -composite coating" interface and electromagnetic waves reflected from "composite coating -metal substrate" interface. Note that relative to EMR of the frequency range 3.0…4.0 GHz, the composite coating filled with a mixture of powdered alumina and iron oxide and deposited on a metal substrate is characterized by radioabsorbing properties, since its EMR reflection coefficient values in the specified frequency range are equal to or less than -10.0 dB.
3. In the frequency range 4.5…6.0 GHz, EMR reflection coefficient values of the composite coating filled with powdered iron oxide and deposited on a metal substrate is lower by 1.0…10.0 dB than the EMR reflection coefficient values of the composite coating filled with a mixture of powdered alumina and iron oxide and deposited on a metal substrate. This is due to the difference in the frequency value of natural ferromagnetic resonance associated with EMR interaction with each of these coatings.
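The dB comparisons used throughout this discussion can be related to fractions of incident power by the standard relations below. This is a generic helper, not taken from the article, and it assumes the measured coefficients are expressed as power ratios (10·log10).

```python
def db_to_power_fraction(coeff_db):
    """Convert a reflection or transmission coefficient given in dB
    (a non-positive value) to the corresponding fraction of incident power."""
    return 10.0 ** (coeff_db / 10.0)

def absorbed_fraction(reflection_db, transmission_db):
    """Fraction of incident power dissipated in the coating,
    assuming the power balance R + T + A = 1."""
    r = db_to_power_fraction(reflection_db)
    t = db_to_power_fraction(transmission_db)
    return max(0.0, 1.0 - r - t)

# Example: a coating with -10 dB reflection and -15 dB transmission reflects
# about 10 % and transmits about 3.2 % of the incident power, so roughly 87 %
# is absorbed.
print(absorbed_fraction(-10.0, -15.0))
```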
Conclusion
The obtained results make it possible to conclude that adding 20.0 vol. % of powdered iron oxide to the composition of a composite coating filled with powdered alumina improves its EMR shielding properties in the frequency range 0.7…17.0 GHz and its radioabsorbing properties in the frequency ranges 2.0…5.0 GHz and 11.0…17.0 GHz, while the incombustibility of such coatings is preserved. Note that the cost of 1 kg of iron oxide is comparable to the cost of 1 kg of powdered alumina (electrocorundum, alum earth), so the use of the proposed method will not increase the cost of a composite coating based on such oxides. Composite coatings filled with a mixture of powdered alumina and iron oxide can be used in the manufacture of electromagnetic shields, or for improving their technical and operational properties, in order to ensure the electromagnetic compatibility of radio-electronic equipment.
We present an algorithm for maintaining the width of a planar point set dynamically, as points are inserted or deleted. Our algorithm takes time O(kn^epsilon) per update, where k is the amount of change the update causes in the convex hull, n is the number of points in the set, and epsilon is any arbitrarily small constant. For incremental or decremental update sequences, the amortized time per update is O(n^epsilon).
Introduction
The width of a geometric object is the minimum distance between two parallel supporting hyperplanes. In the case of planar objects, it is the width of the narrowest infinite strip that completely contains the object ( Figure 1). The width of a planar point set can be found from its convex hull by a simple linear time "rotating calipers" algorithm that sweeps through all possible slopes, finding the points of tangency of the two supporting lines for each slope [9,13,16].
Despite several attempts, no satisfactory data structure is known for maintaining this fundamental geometric quantity dynamically, as the point set undergoes insertions and deletions. The methods of Janardan, Rote, Schwarz, and Snoeyink [10,14,15] maintain only an approximation to the true width. The method of Agarwal and Sharir [3] solves only the decision problem (is the width greater or less than a fixed bound?) and requires the entire update sequence to be known in advance. An algorithm of Agarwal et al. [1] can maintain the exact width, but requires superlinear time per update (however note that this algorithm allows the input points to have continuous motions as well as discrete insertion and deletion events). Finally, the author's previous paper [8] provides a fully dynamic algorithm for the exact width, but one that is efficient only in the average case, for random update sequences.
In this paper we present an algorithm for maintaining the exact width dynamically. Our algorithm takes time O(kn ǫ ) per update, where k is the amount of change the update causes in the convex hull, n is the number of points in the set, and ǫ > 0 is any arbitrarily small constant. In particular, for incremental updates (insertions only) or decremental updates (deletions only), the total change to the convex hull can be at most linear and the algorithm takes O(n ǫ ) amortized time per update. For the randomized model of our previous paper, the expected value of k is O(1) and the average case time per update of our algorithm is again O(n ǫ ). Our approach is to define a set of objects (the features of the convex hull), and a bivariate function on those objects (the distance between parallel supporting lines), such that the width is the minimum value of this function among all pairs of objects. We could then use a data structure of the author [7] for maintaining minima of bivariate functions, however in the case of the width this minimum is more easily maintained directly. To apply this approach, we need data structures for dynamic nearest neighbor querying on subsets of features; we build these data structures by combining binary search trees with a data structure of Agarwal and Matoušek for ray shooting in convex polyhedra [2].
Corners and Sides
Given a planar point set S, we define a corner of S to be an infinite wedge, having its apex at a vertex of the convex hull of S, and bounded by two rays through the hull edges incident to that vertex. We define a side of S to be an infinite halfplane, containing S, and bounded by a line through one of the hull edges. Figure 2 depicts a point set, its convex hull, a corner (at the top of the figure), and a side (at the bottom of the figure).
We say a corner and a side are compatible if they could be translated to be disjoint with one another, and incompatible otherwise. Alternatively, a side is compatible with a corner if the boundary line of the side is parallel to a different line that is tangent to the convex hull at the corner's apex. The corner and side in the figure are incompatible, because if one translates the side's boundary to pass through the corner's apex, it would penetrate the convex hull.
Given a side s and a compatible corner c, we define the distance d(s, c) to be simply the Euclidean distance between the apex of the corner and the boundary line of the side. Equivalently, this is the distance between parallel lines supporting the convex hull and tangent at the two features. However, if s and c are incompatible, we define their distance to be +∞. Let width(S) denote the width of S, sides(S) denote the set of sides of S, and corners(S) denote the set of corners of S.
Lemma 1 For any point set S in R 2 , width(S) = min {d(s, c) : s ∈ sides(S), c ∈ corners(S)}.
Proof: Clearly, any compatible pair defines an infinite strip having width equal to the distance between the pair, so the overall width can be at most the minimum distance. In the other direction, let X be the infinite strip tangent on both sides to the convex hull and defining the width; then at least one of the tangencies must be to a convex hull edge, for a strip tangent at two vertices could be rotated to become narrower. The opposite tangency includes at least one convex hull vertex, and the edge and opposite vertex form a compatible side-corner pair. 2 Lemma 2 Each side of the convex hull has at most two compatible corners. The sides compatible to a given corner of the convex hull form a contiguous sequence of the hull edges.
By Lemma 2, there are only O(n) compatible side-corner pairs. The known static algorithms for width work by listing all compatible pairs. The dynamic algorithm of our previous paper maintained a graph, the rotating caliper graph, describing all such pairs. However such an approach can not work in our worst-case dynamic setting: there exist simple incremental or decremental update sequences for which the set of compatible pairs changes by Ω(n) pairs after each update. Instead we use more sophisticated data structures to quickly identify the closest pair without keeping track of all pairs. To do so, we will need to keep track of the set of convex hull features, as the point set is updated.
Lemma 3 (Overmars and van Leeuwen [12]) We can maintain a list of the vertices of the convex hull of a dynamic point set in R 2 , and a data structure for performing logarithmic-time binary searches in the list, in linear space and time O(log 2 n) per point insertion or deletion.
Recently, Chan [5] has improved these bounds to near-logarithmic time, however this improvement does not make a difference to our overall time bound. Proof: We apply the data structure of Overmars and van Leeuwen from the previous lemma. The set of features inserted and deleted in each update can be found by a single binary search to find one such feature, after which each adjacent feature affected by the update can be found in constant time by traversing the maintained list of hull vertices. 2
Finding the Nearest Feature
In order to apply our closest pair data structure, we need to be able to determine the nearest neighbor to each feature in a dynamic subset of other features. We first describe the easier case, finding the nearest corner to a side. We next describe how to perform dynamic nearest neighbor queries in the other direction, from query corners to the nearest side. To begin with, we show how to find the nearest line to a corner, ignoring whether the line belongs to a compatible side.
Lemma 6 (Agarwal and Matoušek [2])
For any ǫ > 0, we can maintain a dynamic set of halfspaces in R 3 , and answer queries asking for the first halfspace boundary hit by a ray originating within the intersection of the halfspaces, in time O(n ǫ ) per insertion, deletion, or query. Maintain such a three-dimensional halfspace for each of the halfplanes in the set, along with the data structure of Lemma 6. A nearest halfplane query from point (x, y) can be answered by performing a vertical ray shooting query from point (x, y, 0); the first halfspace boundary hit by this ray corresponds to the nearest halfplane to the query point. 2
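The idea behind these nearest-halfplane queries can be pictured as follows (a brute-force sketch of our own; the representation of a side as a normalized inequality is an assumption). Writing each side as a·x + b·y ≥ c with (a, b) a unit vector, the quantity a·x + b·y − c is exactly the distance from an interior point (x, y) to that side's boundary. Lifting each side to the halfspace z ≤ a·x + b·y − c, the nearest side becomes the lowest lifted plane above the query point, which is what a vertical ray-shooting query from (x, y, 0) in the intersection of these halfspaces returns; the scan below stands in for that dynamic structure.

```python
import math

def nearest_side(query, sides):
    """sides: list of (a, b, c) with a*x + b*y >= c for every point of S and
    (a, b) a unit vector.  Returns (distance, side) for the side whose boundary
    line is nearest to `query`, by scanning all sides (brute force)."""
    x, y = query
    best = (math.inf, None)
    for (a, b, c) in sides:
        height = a * x + b * y - c      # height of the lifted plane above (x, y, 0)
        if 0 <= height < best[0]:       # query lies inside the halfplane
            best = (height, (a, b, c))
    return best
```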
Lemma 8
We can maintain the sides of a point set in R 2 , and handle queries asking for the nearest side to a given corner, in amortized time O(n ǫ ) per query, side insertion, or side deletion.
Proof:
We store the sides in a weight-balanced binary tree [11], according to their positions in cyclic order around the convex hull. For each node in the tree, we store the data structure of Lemma 7 for finding nearest boundaries among the sides stored at descendants of that node.
For each query, we use the binary tree to represent the contiguous group of compatible sides (as determined by Lemma 2) as the set of descendants of O(log n) tree nodes. We perform the vertical ray shooting queries of Lemma 7 in the data structures stored at each of these nodes, and take the nearest of the O(log n) returned sides as the answer to our query.
Each update causes O(log n) insertions and deletions to the data structures stored at the nodes in the tree, and may also cause certain nodes to become unbalanced, forcing the subtrees rooted at those nodes to be rebuilt. A rebuild operation on a subtree containing m sides takes time O(m^{1+ε}), and happens only after Ω(m) updates have been made in that subtree since the last rebuild, so the amortized time per update is O(n^ε). □
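To make the query decomposition in this proof concrete, the following sketch shows how a contiguous range of sides (here simply an index interval; a compatible range that wraps around the hull is split into two intervals first) is covered by O(log n) canonical nodes of a balanced search tree. The per-node nearest-boundary structure of Lemma 7 is abstracted away as a callable, and all names are illustrative rather than taken from the paper.

```python
def canonical_nodes(node, lo, hi, qlo, qhi, out):
    """Collect O(log n) tree nodes whose leaf ranges exactly tile [qlo, qhi]."""
    if qhi < lo or hi < qlo:
        return
    if qlo <= lo and hi <= qhi:
        out.append(node)          # whole subtree lies inside the query range
        return
    mid = (lo + hi) // 2
    canonical_nodes(2 * node, lo, mid, qlo, qhi, out)
    canonical_nodes(2 * node + 1, mid + 1, hi, qlo, qhi, out)

def nearest_compatible_side(corner, qlo, qhi, n, query_node_structure):
    """Nearest side among the contiguous compatible range, as in Lemma 8.

    query_node_structure(node, corner) stands in for the Lemma 7 query on the
    halfplanes stored at one canonical node; it returns (distance, side).
    """
    nodes = []
    canonical_nodes(1, 0, n - 1, qlo, qhi, nodes)
    return min(query_node_structure(v, corner) for v in nodes)
```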
Dynamic Width
We are now ready to prove our main result.
Theorem 1 We can maintain the width of a planar point set in R^2, as points are inserted and deleted, in amortized time O(kn^ε) per insertion or deletion, where k denotes the number of convex hull sides and corners changed by an update.
Proof: We store the data structures described in the previous lemmas, together with a pointer from each corner of the point set to the nearest side (this pointer may be null if there is no side compatible to the corner). Finally, we store a priority queue of the corner-side pairs represented by these pointers, prioritized by distance. By Lemma 1, the minimum distance in this priority queue must equal the overall width.
When an update causes a corner to be added to the set of features, we can find its nearest side in time O(n^ε) by Lemma 8, and add the pair to the priority queue in time O(log n).
When an update causes a corner to be removed from the set of features, we need only remove the corresponding priority queue entry, in time O(log n) per update.
When an update causes a side to be added to the set of features, at most two corners can be compatible with it (Lemma 2). We can find these compatible corners by binary search in the dynamic convex hull data structure used to maintain the set of features, in time O(log n). For each corner, we compare the distances to the new side and the side previously stored in the pointer for that corner, and if the new distance is smaller we change the pointer and update the priority queue.
Finally, when an update causes a side to be removed, that side can be pointed to by at most the two corners compatible with it. We use the dynamic convex hull data structure to find the compatible corners, and if they point to the removed side, we recompute their nearest side in time O(n^ε) by Lemma 8. □

Corollary 1 For insertion-only or deletion-only update sequences, the total number of changes to the set of convex hull sides and corners is O(n), so the algorithm of Theorem 1 runs in amortized time O(n^ε) per update.

Proof: For the incremental version of the problem, each insertion creates at most two new sides and three new corners, along with deleting h + 2 corners and h + 1 sides where h is the number of input points that become hidden in the interior of the convex hull as a consequence of the insertion. Each point can only be hidden once, so the total number of changes to the set of sides and corners over the course of the algorithm is at most 10n. The argument for deletions is equivalent under time-reversal symmetry to that for insertions. □

We note that in the average case model of our previous paper on dynamic width [8], the expected value of k per update is O(1), and therefore our algorithm takes expected time O(n^ε) per update. This is not an improvement on that paper's O(log n) bound, but it is interesting that our algorithm here is versatile enough to perform well simultaneously in the incremental, decremental, and average cases.
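The bookkeeping used in the proof of Theorem 1 can be summarized in the following schematic, which treats the dynamic hull maintenance (Lemma 4) and the nearest-side oracle (Lemma 8) as black boxes; corners are assumed to be comparable coordinate tuples, and all names here are illustrative rather than taken from the paper.

```python
import heapq

class DynamicWidth:
    """Maintains corner -> nearest compatible side pointers and their distances."""

    def __init__(self, nearest_side_oracle):
        self.nearest_side = nearest_side_oracle   # stands in for Lemma 8
        self.pointer = {}                         # corner -> (distance, side) or None
        self.heap = []                            # lazy-deletion priority queue

    def _push(self, corner):
        entry = self.nearest_side(corner)         # (distance, side) or None
        self.pointer[corner] = entry
        if entry is not None:
            heapq.heappush(self.heap, (entry[0], corner, entry[1]))

    def corner_added(self, corner):
        self._push(corner)

    def corner_removed(self, corner):
        self.pointer.pop(corner, None)            # stale heap entries are skipped later

    def side_added(self, side, compatible_corners, dist):
        for c in compatible_corners:              # at most two corners, by Lemma 2
            old = self.pointer.get(c)
            d = dist(c, side)
            if old is None or d < old[0]:
                self.pointer[c] = (d, side)
                heapq.heappush(self.heap, (d, c, side))

    def side_removed(self, side, compatible_corners):
        for c in compatible_corners:
            if self.pointer.get(c) and self.pointer[c][1] == side:
                self._push(c)                     # recompute via the oracle

    def width(self):
        while self.heap:
            d, c, s = self.heap[0]
            if self.pointer.get(c) == (d, s):
                return d                          # Lemma 1: minimum pair distance = width
            heapq.heappop(self.heap)              # discard outdated entry
        return None
```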
Conclusions and Open Problems
We have presented an algorithm for maintaining the width of a dynamic planar point set. The algorithm can handle arbitrary sequences of both insertions and deletions, and our analysis shows it to be efficient for sequences of a single type of operation, whether insertions or deletions. Are there interesting classes of update sequences other than the ones we have studied for which the total amortized convex hull change is linear? Does there exist an efficient fully dynamic algorithm for planar width?
Another question is to what extent our algorithm can be generalized to higher dimensions. The same idea of maintaining pairwise distances between hull features seems to apply, but becomes more complicated. In three dimensions, it is no longer the case that incremental or decremental update sequences lead to linear bounds on the total change to the convex hull, but it is still true that random update sequences have constant expected change. In order to apply our approach to the three-dimensional width problem, we would need dynamic closest pair data structures for finding the face nearest a given corner, the corner nearest a given face, and the opposite edge nearest a given edge. The overall expected time per update would then be O(log^2 n) times the time per operation in these closest pair data structures. Can this approach be made to give an average-case dynamic algorithm for three-dimensional width that is as good as the best known static algorithms [4,6]? | v2
2007-11-10T07:00:45.000Z | 2007-09-05T00:00:00.000Z | 14195189 | s2orc/train | A Note on Gravitational Baryogenesis
The coupling between Ricci scalar curvature and the baryon number current dynamically breaks CPT in an expanding universe and leads to baryon asymmetry. We study the effect of time dependence of equation of state parameter of the FRW universe on this asymmetry.
Introduction
The origin of the difference between the number density of baryons and anti-baryons is still an open problem in particle physics and cosmology. The measurements of cosmic microwave background [1], the absence of γ ray emission from matter-antimatter annihilation [2] and the theory of Big Bang nucleosynthesis [3] indicate that there is more matter than antimatter in the universe. Observational results yield that the ratio of the baryon number to entropy density is approximately n b /s ∼ 10 −10 . In [4], it was pointed out that a baryon-generating interaction must satisfy three conditions in order to produce baryons and antibaryons at different rates: (1) baryon number violation; (2) C and CP violation; (3) departure from thermal equilibrium.
In [5], by introducing a scalar field coupled to baryon number current it was suggested that the baryon asymmetry may be generated in thermal equilibrium while the CPT invariance is dynamically violated. Similarly , in [6], by introducing an interaction between Ricci scalar curvature and baryon number current which dynamically violates CPT symmetry in expanding Friedman Robertson Walker (FRW) universe, a mechanism for baryon asymmetry was proposed. The proposed interaction shifts the energy of a baryon relative to that of an antibaryon, giving rise to a non-zero baryon number density in thermal equilibrium. The model suggested in [5,6] was the subject of several studies on gravitational baryogenesis and leptogenesis in different models of cosmology [7]. But, in [6], the problem was restricted to the cases that the equation of state parameter of the universe, ω, is a constant, and the role of time dependence of ω in baryogenesis was neglected. As a consequence, in this scenario, the baryon number asymmetry cannot be directly generated in radiation dominated epoch. But in [8], in the framework of modified theories of gravity, following the method of [6], it was shown that the baryon asymmetry may be generated even in the radiation dominated era.
In this paper, like [6], we assume that the universe is filled with perfect fluids such as the scalar inflaton field and radiation. The time dependence of ω is due to the fact that: (1) these components have different equation of state parameters, (2) they interact with each other and, (3) they may have time dependent equation of state parameters. We will study the effect of each of these subjects on time derivative of the Ricci scalar and consequently on baryogenesis. We will elucidate our discussion through some examples.
Natural units ℏ = c = k_B = 1 are used throughout the paper.
So, in thermal equilibrium there will be a nonzero baryon number density n_b, where g_b ∼ O(1) is the number of internal degrees of freedom of baryons. The entropy density of the universe is given by s = (2π²/45) g_s T³, where g_s ≃ 106 indicates the total degrees of freedom for relativistic particles contributing to the entropy of the universe [9]. In the expanding universe the baryon number violation decouples at a temperature denoted by T_D and a net baryon asymmetry remains. The ratio n_b/s in the limit T ≫ m_b (m_b indicates the baryon mass) and T ≫ µ_b is then given by Eq. (4). Note that in different models we may have Ṙ < 0 as well as Ṙ > 0, therefore the introduction of ε gives us the possibility to choose the appropriate sign for n_b. The geometry of the universe is described by the spatially flat FRW metric, where a(t) is the scale factor. The Hubble parameter, H = ȧ/a, satisfies the Friedmann equations, where P and ρ are the pressure and energy density. We assume that the universe, filled with perfect fluids, satisfies the effective equation of state P = ωρ. The equation of state parameter, ω, can be expressed in terms of the Hubble parameter, and the Ricci scalar curvature follows from the metric; from Eq. (6), it then follows that Ṙ takes the form of Eq. (8), where M_p ≃ 1.22 × 10^19 GeV is the Planck mass. If ω̇ = 0, Eq. (8) reduces to the result obtained in [6].
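Since the displayed equations of this section are not reproduced here, the following standard flat-FRW relations are written out for reference. They are a sketch in one common convention (reduced Planck mass M_*, with R = 6(Ḣ + 2H²)), so overall signs and numerical factors may differ from the paper's Eqs. (5)–(8).

```latex
% Flat FRW relations in a common convention (assumed, not necessarily the paper's):
\begin{align}
  ds^2 &= dt^2 - a^2(t)\,\delta_{ij}\,dx^i dx^j, & H &= \frac{\dot a}{a},\\
  H^2 &= \frac{\rho}{3M_*^2}, & \dot H &= -\frac{(1+\omega)\,\rho}{2M_*^2},\\
  \omega &= -1-\frac{2\dot H}{3H^2}, & R &= 6\bigl(\dot H + 2H^2\bigr)=\frac{(1-3\omega)\,\rho}{M_*^2},\\
  \dot R &= -9(1+\omega)(1-3\omega)H^3
          = -\sqrt{3}\,(1+\omega)(1-3\omega)\,\frac{\rho^{3/2}}{M_*^3} && (\dot\omega=0).
\end{align}
```

In particular, for constant ω the last expression vanishes at ω = 1/3 and at ω = −1, which matches the discussion of Eq. (8) below.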
In the following we continue our study with a universe dominated by two perfect fluids with equations of state P_d = γ_d ρ_d and P_m = γ_m ρ_m, interacting with each other through the source term Γ_1 ρ_d + Γ_2 ρ_m (Eq. (9)). Although these components do not separately satisfy the conservation equation, the total energy density does: ρ̇ + 3H(1 + ω)ρ = 0.
Note that ρ ≃ ρ_m + ρ_d and P ≃ P_m + P_d. ω can be written in terms of the ratio of energy densities, r = ρ_m/ρ_d. From the time derivative of r and Eq. (9) one can determine the behavior of r with respect to the comoving time (Eq. (11)); alternatively, by suppressing γ_d, ρ̇ and r may be related through Eq. (12). Combining Eqs. (11) and (12) results in Eq. (16) for ω̇. The first and second terms show the effects of the time dependence of the γ's and of the interaction of the fluid components on ω̇, respectively. Note that even for constant γ's and in the absence of interactions, as the third term of (16) indicates, ω varies with time. This is due to the fact that the universe is assumed to be composed of components with different equation of state parameters. Putting Eq. (16) into Eq. (8) gives Eq. (17). This equation can be rewritten in terms of ω. If we neglect ω̇, as mentioned before, only the first term remains, which is zero at ω = 1/3 and at ω = −1. But by taking ω̇ into account we may have baryon asymmetry at ω = 1/3 and ω = −1, although the asymmetry generated during inflation will be diluted away.
It is worthwhile to study what happens when one of the fluid components corresponds to radiation (e.g. produced after the inflation epoch). To do so, we take λ_m = 1/3. In this case Eq. (17) reduces to Eq. (19). In general the Γ's may also be time dependent [10]; e.g., one can consider Γ_1 = λ_1 H and Γ_2 = λ_2 H, where λ_1, λ_2 ∈ ℝ [11]. Depending on the model under consideration, the third term of (19), including the time derivative of γ_d, may be simplified in terms of other parameters of the model. For example, consider a massive scalar field of mass m, with a time dependent equation of state parameter, interacting with radiation. The energy density and the pressure of the scalar field are ρ_d = φ̇²/2 + m²φ²/2 and P_d = φ̇²/2 − m²φ²/2, so the time dependent equation of state parameter of the scalar field is γ_d = P_d/ρ_d = (φ̇² − m²φ²)/(φ̇² + m²φ²). The scalar field interacts with another component (radiation) characterized by the equation of state parameter γ_m = 1/3; the subscript R denotes the component with which the scalar field interacts. Using the result verified in [12], one is led to an expression for γ̇_d, which simplifies further if the potential is negligible with respect to the kinetic energy, φ̇² ≫ m²φ². When the scalar field dominates, i.e., in the limit r → 0, Ṙ reduces to a simple expression. So, even in the radiation dominated era, we can have Ṙ ≠ 0 if λ_R ≠ 0. For the radiation component, the energy density is related to the equilibrium temperature via ρ_R = ǫ_R T⁴ [9], where ǫ_R = (π²/30) g_⋆ and g_⋆ counts the total number of effectively massless degrees of freedom. Its magnitude is g_⋆ ≃ g_s. Therefore the baryon asymmetry in terms of temperature can be determined from (4), (28), and (29). By defining M_⋆ = αM_p, we find the resulting order of magnitude of n_b/s. When the γ's are time dependent, in general, derivation of the exact solutions of Eq. (22) is not possible. Thus explaining the precise behavior of the fluid components in terms of temperature, except in special situations like (30), is not straightforward. But when the γ's are constant, it may be possible to determine baryogenesis in terms of the decoupling temperature. We will study this situation through two examples.
In this case, Ṙ involves ρ_R, the energy density in the relativistic decay products. If M⁴ is the vacuum energy of the scalar field at the beginning of the oscillation, from t ≃ | v2
2020-07-30T02:07:27.212Z | 2020-01-01T00:00:00.000Z | 226572169 | s2orc/train | Energy Efficiency Analysis for Machining Magnesium Metal Matrix Composites Using In-House Developed Hybrid Machining Facilities
Adoption of sustainable machining techniques shall offer the local industry a cost-effective route to improve its environmental, economic and social footprint when it comes to machine difficult-to-cut materials. This experimental study investigates the behavior of sustainable cutting fluid approaches on active cutting energy (ACE), active energy consumed by machine tool (AECM) and energy efficiency (EE) for machining PMMCs (particulate metal matrix composites) of magnesium at different combinations of rotational speed and feed. Minimum Quantity Lubrication (MQL), cryogenic and CryoMQL machining are performed on in-house developed MQL and cryogenic experimental setups and the results obtained from them are compared with dry machining. The L36 orthogonal array is employed to design the experiments. It is observed that cryogenic machining consumes comparatively lower ACE and AECM among the four cutting fluid approaches. It is found that dry machining provides comparatively lower EE among four cutting fluid approaches. From the main effects plot, it is observed that cryogenic assistance further improves the machining performance of the MQL technique and offers better EE. The results of Analysis of Variance (ANOVA) suggest that rotational speed, cutting fluid approach and feed are the significant parameters that affect the EE in descending order respectively.
Introduction
The International Energy Agency has forecast that the demand for electrical energy will increase by 1.7% per year up to 2030. Manufacturing processes consume about 30% of the total electrical energy produced, and almost all machining processes, as a subset of manufacturing processes, consume electrical energy to perform work. Considering this, energy consumption should be treated as an important machinability indicator along with tool wear, cutting force and surface roughness for machining processes (Bilga et al. 2016; Li and Kara 2011). Due to their light weight and exceptional mechanical properties at elevated temperature, the PMMCs of Mg are widely used in the aerospace and automotive industries (Khanna et al. 2019). The machinability of PMMCs is poor due to the existence of hard particles within a softer metal matrix. To combat this, cutting fluid is used, which reduces the cutting zone temperature, cutting force, surface roughness and tool wear. MQL and cryogenic machining have a low impact on the environment due to minimal and no usage of cutting oil, respectively, during machining. These processes not only decrease the hazards to the operator but also eliminate the chip recyclability process (Khanna and Agrawal 2020; Adler et al. 2006). Though dry machining does not consume any type of cutting fluid, it is not sustainable because it generates higher surface roughness and tool wear and hence results in lower product quality and productivity (Canter 2009). So, it is interesting to compare the above-mentioned sustainable processes with dry machining. In this context, the EE is measured for dry, MQL, cryogenic and CryoMQL machining processes at different combinations of rotational speed and feed. EE is defined as the ratio of ACE to AECM (Bilga et al. 2016). ACE is the net energy consumed during the cutting process, while the AECM includes the ACE and the energy losses that occur due to mechanical transmission, electrical motors and electrical networks. The difference between the active power consumed by the machine tool (APCM) with material removal and without material removal (i.e., when the spindle is on but there is no contact between the cutting tool and the workpiece) is the active cutting power (ACP), and it converts into ACE when multiplied by the cutting time (Bilga et al. 2016). Pu et al. (2012) compared the surface integrity of the machined surface of the AZ31B Mg alloy for dry and cryogenic machining. A combination of a large tool radius and cryogenic machining provided higher values of compressive stresses as compared to dry machining. Kara and Li (2011) built an empirical model of specific energy consumption (SCE) for turning and milling operations for dry and wet machining. Lower values of SCE were observed in dry machining as compared to wet machining for the same material removal rate. Madanchi et al. (2019) developed a model to identify the effect of cutting fluid strategies, with changes in process parameters (cutting speed, feed and depth of cut), on energy consumption and cost. This model considered the correlation of the elements of the machining system with the change in the cutting fluid strategies.
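As a small illustration of the energy bookkeeping defined above, the following sketch computes ACP, ACE, AECM and EE from power readings. The variable names, the example numbers and the assumption that AECM is obtained from the total active power over the machining cycle are ours, not measurement details from the study.

```python
def energy_efficiency(p_cutting_kw, p_idle_kw, p_total_kw, cutting_time_s, cycle_time_s):
    """EE = ACE / AECM, following the definitions in the introduction.

    p_cutting_kw : active power of the machine tool while removing material
    p_idle_kw    : active power with the spindle on but no tool-workpiece contact
    p_total_kw   : average active power drawn by the machine tool over the cycle
    """
    acp_kw = p_cutting_kw - p_idle_kw          # active cutting power (ACP)
    ace_kj = acp_kw * cutting_time_s           # active cutting energy (ACE), in kJ
    aecm_kj = p_total_kw * cycle_time_s        # active energy consumed by machine tool (AECM)
    return ace_kj, aecm_kj, ace_kj / aecm_kj

# Hypothetical readings, for illustration only:
ace, aecm, ee = energy_efficiency(1.9, 1.2, 2.1, cutting_time_s=30, cycle_time_s=45)
print(f"ACE = {ace:.1f} kJ, AECM = {aecm:.1f} kJ, EE = {ee:.2%}")
```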
From the above literature, it can be inferred that the selection of cutting process parameters and cutting fluid influence the machining performance and eventually energy consumption and EE during the machining process.
The industry is facing a decisive challenge to make energy-efficient machine tools for machining difficult-to-machine materials. Research work focused on this theme is required to increase pertinent understanding in order to develop an energyefficient hybrid machining facility for the local industry. The adoption of sustainable production techniques shall allow the local industry a cost-effective way to fulfill its socio-economic and environmental challenges. In relation to this, the present work compares the effect of process parameters on EE for dry, MQL, cryogenic and CryoMQL machining. It is envisioned that the findings of this work will help in the development of optimized in-house retro-fitted hybrid machining facility.
Experimental Setup and Design of Experiments
AZ91/5SiC PMMC is used in the form of a 20 mm diameter and 190 mm length rod for turning tests on a conventional lathe. In the final composition of AZ91/5SiC PMMC, 5% SiC is reinforced with 67 µm particle size in the metal matrix of AZ91.
For MQL machining, in-house developed mist generator is used. LRT30 cutting oil is used as a lubricant with 14 ml/h flow rate. For performing cryogenic machining, LN 2 was stored in Dewar at 6 bar pressure. To convey the N 2 in liquid form from Dewar to the cutting zone, a vacuum insulated hose pipe is used. In MQL and cryogenic machining, 2 mm diameter nozzle is used. For CryoMQL machining, the above two setups are merged in such a way that MQL and LN 2 stroke on flank and rake face of the cutting tool respectively. Figure 13.1 describes the experimental setup of CryoMQL machining.
Here, a CNMG120404AH DLC (Diamond Like Coating) insert is used with an MCLNR2020K12 tool holder. For every experiment, a fresh cutting edge is used to ensure the same experimental treatment. A Fluke 435 (series-II) three-phase energy and power quality analyzer is used to measure the APCM. To limit the experimental design, the rotational speed, feed and cutting fluid approach are considered as factors.
Results and Discussion
The results of ACE, AECM, and EE are shown in Fig. 13.2. It is observed from Fig. 13.2 that dry machining gives higher values of ACE and AECM and lower values of EE in most of the tests among the four cutting fluid approaches. For cryogenic machining, ACE and AECM are found to be the lowest in most of the tests among the four cutting fluid approaches, except at lower rotational speed and higher feed. The reason for the lower energy consumption of cryogenic machining may be the noteworthy grain refinement of the Mg alloy on the surface of the machined parts at low temperature, which reduces the required cutting force and hence energy (Pu et al. 2012). The main effects and interaction plots provide qualitative information regarding the impact of the factors on the response, with direction. Figures 13.3 and 13.4 show the main effects and interaction plots for EE, respectively. From Fig. 13.3, it is observed that rotational speed, cutting fluid approach and feed are the significant parameters that affect the EE, in descending order respectively. It is evident that as the rotational speed increases the EE increases rapidly. Though there is a marginal difference in EE between the MQL, cryogenic and CryoMQL machining, they have significantly higher values of EE as compared to dry machining. From the interaction plot (Fig. 13.4), it is clear that rotational speed with feed and cutting fluid approach with rotational speed strongly affect the EE if the interaction effect of factors is considered. It is also clear from Fig. 13.4 that for all cutting fluid approaches, as the value of rotational speed increases the value of EE increases rapidly.
ANOVA is considered a valuable tool to predict the impact of factors on the response quantitatively, including interaction effects (Khanna and Agrawal 2020). Here the p-test is performed with ANOVA to analyze the significance of the parameters for the response. Any parameter having a p-value less than 0.05 is considered a significant parameter with 95% confidence. From Table 13.1, it is reconfirmed that rotational speed, cutting fluid approach and feed are the significant parameters, which affect the EE by 43.21%, 12.23%, and 5.01% respectively.
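The percentage contributions quoted above are the kind of quantity reported in an ANOVA table as the ratio of a factor's sum of squares to the total sum of squares. The sketch below computes such main-effect contributions under the assumption of a balanced orthogonal design such as the L36 array used here; the field names are illustrative and this is not the study's actual analysis workflow.

```python
from statistics import mean

def contribution_percentages(runs, factors, response):
    """Main-effect percentage contributions from a balanced orthogonal design.

    runs     : list of dicts, one per experimental run
    factors  : factor names, e.g. ["speed_rpm", "feed", "coolant"]
    response : response key, e.g. "EE"
    """
    y = [r[response] for r in runs]
    grand = mean(y)
    ss_total = sum((v - grand) ** 2 for v in y)
    result = {}
    for f in factors:
        ss_f = 0.0
        for lvl in set(r[f] for r in runs):
            group = [r[response] for r in runs if r[f] == lvl]
            ss_f += len(group) * (mean(group) - grand) ** 2   # between-level sum of squares
        result[f] = 100.0 * ss_f / ss_total
    return result
```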
In this study, only the spindle network is considered as machining system and the rest of the elements are not included (e.g. air-compressor used for the MQL and CryoMQL machining). If the energy consumed by air-compressor is considered then even lower values of ACE and AECM must be observed in cryogenic machining as compared to MQL and CryoMQL machining. This clearly establishes cryogenic machining as eco-efficient machining.
Conclusions
This study presents cryogenic machining as an eco-efficient machining technique suitable to reduce energy consumption in modern-day manufacturing processes. Trials are carried out using in-house developed retro-fitted hybrid machining facilities.
The following conclusions are drawn from the study.
• The lowest values of ACE and AECM at most of the turning tests have been obtained for cryogenic machining among four cutting fluid approaches. Cryogenic assistance further improves the machining performance of the MQL technique and offers higher values of the EE. • From the results of the main effects plot, rotational speed, cutting fluid approach and feed are the significant parameters affecting the EE in descending order respectively. With the interaction effect of parameters, it is clear that for all four cutting fluid approaches higher values of EE are found at higher rotational speed (835 rpm). It is also clear that the interaction effect of rotational speed with feed and cutting fluid approach with rotational speed affect the EE strongly. • From the results of ANOVA, it is observed that rotational speed, cutting fluid approach and feed affect the EE by 43.21%, 12.23%, and 5.01% respectively.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | v2 |
2020-04-10T08:00:40.818Z | 2021-03-08T00:00:00.000Z | 215716099 | s2orc/train | Orbital Conflict: Cutting Planes for Symmetric Integer Programs
Cutting planes have been an important factor in the impressive progress made by integer programming (IP) solvers in the past two decades. However, cutting planes have had little impact on improving performance for symmetric IPs. Rather, the main breakthroughs for solving symmetric IPs have been achieved by cleverly exploiting symmetry in the enumeration phase of branch and bound. In this work, we introduce a hierarchy of cutting planes that arise from a reinterpretation of symmetry-exploiting branching methods. There are too many inequalities in the hierarchy to be used efficiently in a direct manner. However, the lowest levels of this cutting-plane hierarchy can be implicitly exploited by enhancing the conflict graph of the integer programming instance and by generating inequalities such as clique cuts valid for the stable-set relaxation of the instance. We provide computational evidence that the resulting symmetry-powered clique cuts can improve state-of-the-art symmetry exploiting methods. The inequalities are then employed in a two-phase approach with high-throughput computations to solve heretofore unsolved symmetric integer programs arising from covering designs, establishing for the first time the covering radii of two binary-ternary codes.
Introduction
We focus on binary integer programs(BIP)s of the form where A ∈ R m×n , c ∈ R n and b ∈ R m . We are interested in BIPs containing a great deal of symmetry-instances where permuting some of its variables yields an equivalent problem. The presence of symmetry significantly reduces the effectiveness of standard integer programming (IP) solution techniques, and problems of relatively limited size can be very challenging to solve. Several techniques have been investigated in order to overcome this drawback, ranging from problem to zero or one are sequentially considered. This can be computationally demanding in practice. In this paper we investigate a mechanism called orbital conflict to populate conflict graphs based on the symmetries present in the problem. The major contributions of the work are the following: • a hierarchy of cutting planes which spring out of a re-interpretation of symmetry-exploiting branching methods; • an implicit description of the lowest levels of this hierarchy through new edges of the CG; • a computational assessment of the implication information implied by branching through symmetry-powered clique cuts; • a computational evidence that these can improve state-of-the-art symmetry-exploiting methods; • a two-phase solution methodology that combines our symmetry-exploiting methods with parallel processing to solve heretofore unsolved instances of symmetric BIPs.
In Section 2 we review common notions in symmetry-enhanced branching methods and recall the concept of constraint orbital branching. In Section 3 we apply constraint orbital branching to the standard 0-1 variable branching disjunction to derive a family of symmetry-induced cutting planes for BIP. In Section 4 we describe how we employ these cutting planes implicitly through their addition to the conflict graph of the BIP instance, and we provide an example to demonstrate their potential. Section 5 describes our computational setting, and Section 6 consists of a set of experiments aimed at demonstrating the impact of employing orbital conflict. By using orbital conflict, in conjunction with high-throughput computing, we are able to solve for the first time three symmetric BIP arising from covering designs.
Notation: We define [n] = {1, 2, . . . , n}. For any node a of the branch-and-bound tree we let F_1^a and F_0^a denote the indices of variables fixed to one and zero, respectively. We represent the conflict graph for (1) at node a as G(V, E^a). Given a set T ⊆ [n] and an element i ∈ [n], we often abuse notation and write T ∪ i rather than the correct T ∪ {i}. Given x ∈ R^n and T ⊆ [n], we often use the shorthand notation x(T) := Σ_{i∈T} x_i.
Symmetry-exploiting branching
Describing symmetry in IP requires some notions from group theory. We briefly introduce these notions here in order to make the presentation self-contained, but refer the reader to standard references such as (Rotman 1994) for an exhaustive treatment. Let us denote by Π_n the set of all permutations of [n] = {1, . . . , n}; that is, the symmetric group of [n]. Permutations π ∈ Π_n are represented by n-vectors, where π(i) represents the image of i under π. The notation is extended to the case of permutations acting on sets S ⊆ [n]; that is, π(S) = {π(i), i ∈ S} ⊆ [n]. A permutation π ∈ Π_n is said to be a symmetry of the IP instance if it maps any feasible solution into another feasible solution with the same value. The set of symmetries of an instance forms the symmetry group G. For a point z ∈ R^n, the orbit of z under G is the set of all points to which z can be sent by permutations in G, orb(z, G) = {π(z) : π ∈ G}. Finally, the stabilizer of a set S in G is the set of permutations in G that send S to itself, stab(S, G) = {π ∈ G : π(S) = S}. Note that stab(S, G) is a subgroup of G.
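The two objects just defined are easy to compute explicitly when the group is small enough to enumerate. The sketch below does exactly that with plain tuples as permutations; it is an illustrative helper of ours, not the PERMLIB-based machinery used later in the paper.

```python
def apply(perm, i):
    """Image of index i (0-based) under a permutation given as a tuple."""
    return perm[i]

def orbit(i, generators):
    """Orbit of index i under the group generated by `generators` (BFS closure)."""
    seen, frontier = {i}, [i]
    while frontier:
        j = frontier.pop()
        for g in generators:
            k = apply(g, j)
            if k not in seen:
                seen.add(k)
                frontier.append(k)
    return seen

def stabilizer_of_set(S, group_elements):
    """stab(S, G) = {pi in G : pi(S) = S}; brute force over an explicit group listing."""
    S = set(S)
    return [g for g in group_elements if {apply(g, s) for s in S} == S]
```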
Our work builds on and reinterprets the symmetry-exploiting branching methods known as Orbital Branching (OB) (Ostrowski et al. 2011a) and Constraint Orbital Branching (COB) (Ostrowski et al. 2008). Suppose that we plan to apply the standard 0-1 branching dichotomy at subproblem a of the branch-and-bound tree. Let O = orb(stab(F_1^a, G), x_i) = {h_1, h_2, . . . , h_|O|} be the orbit of the variable x_i in the stabilizer of F_1^a in G. Orbital branching prescribes to apply a strengthened right branch where all the variables in the orbit of x_i can be fixed to zero. Specifically, the branching disjunction (x_i = 1) ∨ (x_{h_1} = x_{h_2} = · · · = x_{h_|O|} = 0) can be enforced, and at least one optimal solution is contained in one of the two created child subproblems.
Constraint orbital branching extends the rationale of orbital branching to general disjunctions.
Given an integer vector (λ, λ_0) ∈ Z^{n+1} and a base disjunction of the form (λ^T x ≤ λ_0) ∨ (λ^T x ≥ λ_0 + 1), constraint orbital branching exploits all symmetrically equivalent forms of λ^T x ≤ λ_0 so as to obtain the binary branching disjunction (3), in which one branch enforces λ^T x ≥ λ_0 + 1 while the other enforces (π(λ))^T x ≤ λ_0 for every π ∈ G. Note that orbital branching is a special case of constraint orbital branching using the integer vector (λ, λ_0) = (e_i, 0).
Constraint orbital branching can be very powerful, as demonstrated by its successful application to solve instances of the highly-symmetric Steiner triple covering problems (Ostrowski et al. 2011b).
However, exploiting its potential in general purpose solvers requires problem-specific knowledge of good branching vectors (λ, λ 0 ) and clever mechanisms for using and managing the potentially huge number of symmetric constraints on the right-branching child node in the disjunction (3). We will introduce the concept of orbital conflict in Section 4 as a mechanism for implicitly managing the large collection of inequalities coming from one particular application of constraint orbital branching.
Cutting planes from branching disjunctions
In this section, we employ the COB branching disjunction (3) on an augmented form of the standard variable branching disjunction (2), and we suggest a computationally useful mechanism for categorizing the family of resulting symmetric branching inequalities. We assume that we are branching on variable x_i at subproblem a. Since the variables in F_1^a are fixed to one at node a, the standard 0-1 branching disjunction (2) can be reinterpreted as being obtained from the following base disjunction: (x(F_1^a ∪ i) ≥ |F_1^a ∪ i|) ∨ (x(F_1^a ∪ i) ≤ |F_1^a ∪ i| − 1). (4) By applying COB (3) to the base disjunction (4) we obtain (x(F_1^a ∪ i) ≥ |F_1^a ∪ i|) ∨ (x(π(F_1^a ∪ i)) ≤ |F_1^a ∪ i| − 1 for all π ∈ G). (5) If |G| is large, then adding all of the symmetric inequalities to the child node on the right-branch of the disjunction (5) is not practical. Instead, we will focus on identifying a subset of permutations likely to produce useful inequalities. To that end, we make the following definition.
Definition 1. Let G ⊆ Π_n be a permutation group, T ⊆ [n], and i ∈ T. The permutation π ∈ G is a level-k permutation for (T, i) if |π(T ∪ i) ∩ T| = |T| − k. We define L_k(T, i) ⊆ G to be the set of all level-k permutations for (T, i) in G and note that G is partitioned into these sets: G = ⋃_{k=0}^{|T|} L_k(T, i). Let π ∈ L_k(F_1^a, i) be a level-k permutation for the set of variables fixed to one at node a and branching index i. By definition, |π(F_1^a ∪ i) ∩ F_1^a| = |F_1^a| − k, and the branching inequality associated with π in the disjunction (5), x(π(F_1^a ∪ i)) ≤ |F_1^a ∪ i| − 1 (6), is equivalent to x(π(F_1^a ∪ i) \ F_1^a) ≤ k. (7) We refer to (7) as a level-k branching inequality for node a and branching index i. The branching disjunction (5) is equivalent to applying the level-k branching inequalities for all possible values of k, i.e., to (x(F_1^a ∪ i) ≥ |F_1^a ∪ i|) ∨ (x(π(F_1^a ∪ i) \ F_1^a) ≤ k for all π ∈ L_k(F_1^a, i) and all k). (8) For π ∈ L_k(F_1^a, i), the set π(F_1^a ∪ i) \ F_1^a has cardinality k + 1 and there are k + 1 elements on the left-hand side of (7). Therefore, branching inequalities (6) will be strongest for permutations π ∈ L_k(F_1^a, i) with small values of k. To mitigate the impact of dealing with the potentially large number of inequalities in (5), we propose to employ level-k branching inequalities for only small values of k = 1, 2, . . . , K < |F_1^a|; the resulting branching disjunction, with only the levels k ≤ K enforced on the right branch, is denoted (9).
Given T ⊂ [n] and i ∈ T, characterizing exactly the permutations in L_k(T, i) appears to be difficult.
However, we can calculate a subset of these permutations with the following theorem.
Theorem 1. Let G ⊆ Π_n be a permutation group, T ⊆ [n], i ∈ T, and L_k(T, i) be the set of all level-k permutations for (T, i) in G. Then, ⋃_{S ⊆ T : |T \ S| = k} stab(S, G) ⊆ ⋃_{j=0}^{k} L_j(T, i). (10)

Proof of Theorem 1. Let S ⊆ T with |T \ S| = k and π in stab(S, G). We will show that π is at most a level-k permutation for (T, i). First we note that π(T ∪ i) = π(S) ∪ π(T \ S) ∪ π({i}). Since π is a permutation, this is a union of three disjoint sets, and we have |π(T ∪ i) ∩ T| = |π(S) ∩ T| + |π(T \ S) ∩ T| + |π({i}) ∩ T| ≥ |π(S) ∩ T|. (11) As π ∈ stab(S, G), we have that π(S) = S, so |π(S) ∩ T| = |S ∩ T| = |T| − k. Plugging this into (11) yields |π(T ∪ i) ∩ T| ≥ |T| − k, so π is at most a level-k permutation for (T, i). Q.E.D.
For k = 0 and T = F a 1 , the permutations in the subgroup on the left-hand-side of (10) are precisely those used to define the orbits of the variables in orbital branching, so an immediate consequence of Theorem 1 is that the branching disjunction used by orbital branching is dominated by branching disjunction (9), even for K = 0.
Using Theorem 1, we can for a fixed k enumerate the appropriate subsets of T and perform the stabilizer computations in (10). Note that as one moves deeper in the tree, the number of stabilizer computations used to approximate L_k can increase considerably, since it is dependent on |F_1^a|. Thus, the size of L_k is expected to increase dramatically as k increases. In Section 6.1, we show a small experiment that documents the number of level-k inequalities we obtain for a symmetric IP. In our work, we do not choose to directly employ the level-k branching inequalities, or even use the inequalities to separate a given fractional solution x̄. Rather, in the next section, we show how to use level-1 branching inequalities to augment the conflict graph of the problem, and we use well-known techniques to separate fractional solutions from its stable-set relaxation.
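Concretely, the recipe suggested by Theorem 1 — enumerate the subsets S ⊆ T with |T \ S| = k and collect the permutations stabilizing each S — can be sketched as follows for groups small enough to list explicitly. The helper names are ours, and in practice PERMLIB-style group operations would replace the brute force.

```python
from itertools import combinations

def apply_set(perm, S):
    return frozenset(perm[i] for i in S)

def level_of(perm, T, i):
    """k such that |perm(T ∪ {i}) ∩ T| = |T| - k (Definition 1)."""
    image = apply_set(perm, set(T) | {i})
    return len(T) - len(image & set(T))

def approx_level_k(group_elements, T, i, k):
    """Subset of L_k(T, i) obtained via Theorem 1: union of stab(S, G) over the
    subsets S of T with |T \\ S| = k, filtered down to the permutations of exact level k."""
    T = frozenset(T)
    found = set()
    for S in combinations(T, len(T) - k):
        S = frozenset(S)
        for g in group_elements:
            if apply_set(g, S) == S and level_of(g, T, i) == k:
                found.add(g)
    return found
```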
Orbital Conflict
The level-1 inequalities (7) are of the form x i + x j ≤ 1 for some i ∈ [n], j ∈ [n]. Thus, information from these inequalities need not be added directly to the formulation, but can be handled implicitly by adding the edges (i, j) to the (local) conflict graph of the problem at a node. Algorithm (1) describes the generation of level-1 inequalities at a node a of the enumeration tree. Note that the level-1 inequalities generated at ancestors of a remain valid for the entire sub-tree rooted at a.
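A minimal sketch of this level-1 edge generation is given below. It assumes the group is available as an explicit list of permutations, so it illustrates the logic of Algorithm 1 rather than the actual implementation; the helper names are ours.

```python
def level1_conflict_edges(group_elements, F1, i):
    """Edges {u, v} coming from level-1 inequalities x_u + x_v <= 1 at a node with
    variables F1 fixed to one and branching index i (valid on the right branch)."""
    T = frozenset(F1)
    edges = set()
    for g in group_elements:
        image = frozenset(g[j] for j in T | {i})
        outside = image - T                  # = pi(F1 ∪ i) \ F1
        if len(outside) == 2:                # level-1 permutation: gives x_u + x_v <= 1
            edges.add(frozenset(outside))
        # len(outside) == 1 would be a level-0 fixing (x_u = 0), handled by orbital fixing
    return edges

def right_child_conflict_graph(parent_edges, group_elements, F1, i):
    """Conflict graph of the right child: parent edges plus the new level-1 edges."""
    return set(parent_edges) | level1_conflict_edges(group_elements, F1, i)
```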
Included in the output of Algorithm 1 is a symmetry-enhanced conflict graph G(V, E c ) on the right branch of the branching disjunction. When node c is to be explored, the clique cut separation heuristic of Marzi et al. (2019) is run to search for clique inequalities violated by the LP relaxation of node c. In the following example we compare a branch-and-cut algorithm based on OB with one enhanced by OC.
Algorithm 1: Orbital Conflict (OC). Input: Subproblem a = (F_1^a, F_0^a, G(V, E^a)), branching variable index i. Output: Two child subproblems b and c.

As an example, consider the problem of maximizing Σ_{i∈V} x_i over a graph G subject to x_i + x_j ≤ 1 for every edge (i, j), which corresponds to computing the stability number of G. The optimal value is 8, while the root LP relaxation x̄ has objective value 12, with x̄_i = 1/2, i ∈ V. If a branch-and-bound algorithm is executed based on OB, the search tree of Figure 2 is explored.
Let us focus on subproblem 2 at depth 1, with F 2 1 = {9} and having the conflict graph shown in Figure 3. Suppose we choose to branch on variable x 11 . The resulting COB disjunction (5) is Enumerating all the constraints on the right-hand-side of (12) yields the following inequalities: As |F 2 1 | = 1, these inequalities are either level-0 inequalities or level-1 inequalities. To distinguish the two, the level-0 constraints are written in bold font style and level-1 constraints are in black.
Note that the level-0 constraints reduce to fixing x 11 , x 12 , x 23 , and x 24 to zero when x 9 is fixed to one. Also note that all of the level-0 constraints are generated by π∈stab({9}, G) x π(9) + x π(11) ≤ 1, demonstrating that stab({9}, G) is a good approximate for L 0 in this instance.
The graphs of Figure 4 represent the child subproblems that result from the above disjunction.
Note the abundance of 4-cliques in subproblem 5 that are formed by adding the level-1 inequalities (the newly added constraints are represented by the dashed edges), whereas the parent subproblem only had 3-cliques. We can use these cliques to strengthen the level-1 inequalities generated by ,12,15,16, 19,20,23,24}
Figure 2
Search tree by orbital branching the branching disjunction. In fact, all the generated inequalities will be dominated by the four 4-clique inequalities, reducing the size of the resulting formulation while also tightening it. Adding the clique inequalities from the 4-cliques to the LP formulation improves the bound from 9.5 to 8, allowing it to be pruned by bound. Without these clique inequalities, we would have to solve 4 more nodes (6,7, 20, and 21). Let us now apply the branching disjunction (5) at subproblem 4, with F 4 1 = {9, 11} using the branching variable x 15 . The resulting disjunction is The right side of the disjunction (13) has 128 non-redundant inequalities that will not be enumerated in this example. However, we can enumerate all level-0 and level-1 inequalities. Note that stab ({9, 11}), G) ⊂ L 0 contains the symmetry that reflects the graph across its vertical axis as well as the permutations that permute the pairs of adjacent vertices on the graph's outer ring (excepting vertices 9-12). Thus, the permutations in stab({9, 11}) generate the following level-0 constraints: x 9 + x 11 + x 15 ≤ 2 x 9 + x 11 + x 16 ≤ 2 x 9 + x 11 + x 21 ≤ 2 x 9 + x 11 + x 22 ≤ 2, which results in fixing x 15 , x 16 , x 21 , and x 22 to zero (which would have occurred via orbital branching). π ∈ stab({9}, G): π ∈ stab({11}, G): Note, of course, that because x 9 and x 11 are fixed to one at node 9, the above constraints reduce to something in the form of x i + x j ≤ 1, and thus can be added to the conflict graph of the child subproblem.
The graphs of Figure 5 represent the subproblems formed by our branching disjunction. Note again that the additional constraints added to the right branch, represented by the dashed edges, form 4-cliques, again allowing us to strengthen the symmetry-exploiting constraints and replace them with clique-based constraints. Enforcing the two corresponding clique inequalities in the linear description the LP relaxation yields an integer optimal solution of value 4 and the subproblem is pruned. Notice that adding these constraints avoids generating subproblems 18 and 19 in the OB tree. The overall saving produced by OC w.r.t. OB corresponds to the gray nodes of Figure 2.
This example demonstrates the potential positive impact of employing the symmetry-induced branching inequalities we propose. We next perform a suite of computational experiments to further test their utility.
Experimental setting
In order to produce a fair evaluation of the orbital conflict algorithm, we also implemented other symmetry-exploiting techniques, such as isomorphism pruning (Margot 2002) and orbital fixing (Margot 2003). Isomorphism pruning is an approach that prunes the branching tree in such a way that it only contains one element per each orbit of optimal solutions. Let O be the set of orbits defined by stab(F a 1 , G) at a node a. Orbital fixing acts on each orbit O ∈ O as follows: When O ∩ (F a 0 ∪ F a 1 ) = ∅, then O is called a free orbit. The OB scheme branches on one of the free orbits at a feasible node a. Let O 1 , O 2 , . . . , O p be the free orbits at node a. For our computational experiments, the branching rule was set to choose a variable belonging to the largest orbit, this is We implement isomorphism pruning in an equivalent way that we call isomorphism fixing. For each free orbit O i , let j i ∈ O i be its variable with minimum index. If the set F a 1 ∪ j i is not the lexicographically minimal element in its orbit, we set all variables in O i to be zero. The argument is that if we did create a branching node with F a 1 ∪ j i as the set of variables fixed to one, then a symmetric set of variables fixed to one will be used at another node, so we could safely prune the node.
In the original paper on isomorphism pruning of Margot (2002), the branching scheme was rigid, as it was only possible to branch on the variable with a minimum index across all free orbits. The thesis of Ostrowski (2009) proved that other branching rules were also valid when performing an additional conjugation of the group before implementing group operations. See also Pfetsch and Rehn (2019) for a nice description of the method. We implement the flexible version of isomorphism pruning in our implementation so that it can be combined with orbital branching, branching on a variable that appears in a largest orbit.
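The lexicographic-minimality test at the heart of isomorphism fixing can be sketched as follows for explicitly listed groups; this brute-force check is illustrative only, since the actual implementation relies on PERMLIB group operations.

```python
def is_lex_min_in_orbit(F1_plus_j, group_elements):
    """True if the sorted index set F1_plus_j is lexicographically minimal among
    its images under the group; otherwise the corresponding orbit can be fixed to zero."""
    base = tuple(sorted(F1_plus_j))
    for g in group_elements:
        image = tuple(sorted(g[v] for v in F1_plus_j))
        if image < base:
            return False
    return True

def isomorphism_fixing(free_orbits, F1, group_elements):
    """Indices that can be fixed to zero at the current node (one check per free orbit)."""
    fixed_to_zero = set()
    for O in free_orbits:
        j = min(O)                       # variable of minimum index in the free orbit
        if not is_lex_min_in_orbit(set(F1) | {j}, group_elements):
            fixed_to_zero |= set(O)
    return fixed_to_zero
```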
Set of instances
To test the performance of orbital conflict, we prepared a collection of instances where symmetry is known to be present. These instances belong to the following problem types: Steiner triple systems (sts), covering designs (cov), binary coding (cod), binary-ternary covering codes (codbt), infeasible instances from Margot website (flosn, jgt, mered, ofsub9), two stable set instances (keller), and minimally aliased response surface designs (omars).
A n-sts instance is based on a Steiner triple system of order n, which is a collection of triples from n elements such that every pair of distinct elements appears together in only one of the triples. A n-sts instance consists on finding the smallest set of elements that covers all the triples of a given Steiner triple system of order n. This is known as the incidence width of the system.
Optimization Online
These instances were introduced in (Fulkerson et al. 1974, Feo and Resende 1989, Karmarkar et al. 1991. For these problems, we considered the complementary formulation, where each variable is substituted by its complement (x → 1 − x).
A (v, k, t)-cov instance is a collection of k-subsets of v elements such that every t-element subset is contained in at least one of the k-subsets in the collection. See (Schönheim 1964) for an analysis of the properties of these designs. The problems that we considered are minimization problems on the number of k-subsets. The repository at https://www.ccrwest.org/cover.html points to references for different instances and indicates which problems are not yet solved.
A (n, R)-cod instance is a collection of codewords in a binary code of length n such that any element of the binary code of length n is at a distance of at most R of a codeword of the collection.
R is called the covering radius. The paper of Graham and Sloane (1985) is recommended for further information. The link http://old.sztaki.hu/~keri/codes/index.htm contains an updated repository of the current solvability status of different cod instances.
A (b, t)-codbt instance is a binary-ternary covering code, that is, a collection of vectors of length b + t with b binary coordinates and t ternary coordinates such that, for every possible binaryternary vector defined by b and t there is a vector in the collection at a distance of at most 1.
When b = 0, this problem is also known as the football pool problem (Linderoth et al. 2009). The same link quoted for the cod instances has results for mixed binary-ternary codes. In Section 6.3, we will use our symmetry-enhanced integer programming methods to solve for the first time three of these instances.
KELLER integer programs correspond to computing the stability number of Keller graphs.
These arise in the reformulation of Keller's cube-tiling conjecture (Debroni et al. 2011). Vertices of a d-dimensional Keller graph correspond to the 4 d d-digit numbers (d-tuples) on the alphabet {0, 1, 2, 3}. Two vertices are adjacent if their labels differ in at least two positions, and in at least one position the difference in the labels is two modulo four. Keller graphs were introduced in the second DIMACS max-clique challenge and represent meaningful benchmark instances for the max-clique/stable set community. The instance are available at ftp://dimacs.rutgers.edu/pub/ challenge/graph/benchmarks/clique. Note that since we are interested in computing stable sets, we consider complemented versions of the clique problems. We test three formulations for graph Keller 4, based on clique and nodal inequalities. Details can be found in (Letchford et al. 2018).
A (m, n, α, β)-omars instance is an orthogonal minimally-aliased response surface design (see (Núñez-Ares and Goos 2019)), which is an experimental design with m factors and n runs with sparsity properties α and β that have desirable statistical estimation properties. These instances are enumeration instances, so we will enumerate all non-isomorphic feasible solutions.
All but the omars instances are optimization problems. However, in some cases, we consider optimization instances as enumeration instances by finding all non-isomorphic solutions with a given objective value. Table 1 shows some characteristics of the instances. In addition to the instance size, we list the type of inequalities present in the problem and its symmetry group size. For optimization instances, we list the optimal solution value as well as the number of non-isomorphic optimal solutions for each instance. For unsolved instances codbt52,codbt71,codbt43 we report the best known lower and upper bounds.
Type Instance #cols #rows L/E/G
Implementation
Our approach was implemented using the user application functions of MINTO v3.1 (Nemhauser et al. 1994), while the clique separation algorithm used the LEMON library (Dezső et al. 2011), and the necessary symmetry calculations (group operations) used the PERMLIB library (Rehn and Schürmann 2010). To compute the generators of the symmetry group of the problem instance, we relied on the usual approach of building a graph such that the formulation symmetries correspond Optimization Online one-to-one to the graph automorphisms. The generators were calculated using the NAUTY software (McKay and Piperno 2014). For a more complete description of computing the symmetry group, the reader is invited to turn to (Puget 2005, Salvagnin 2005, Margot 2010, Pfetsch and Rehn 2019. To minimize the performance variability induced by the timing at which feasible solutions are found by branch and bound, we input the optimal value of the instance to the solver to use as a cutoff. We also employ reduced-cost fixing.
Computational experiments
Our computational results are divided into three parts. First, we demonstrate the prevalence of the level-k inequalities on one symmetric instance. Next, we measure the computational impact of adding level-1 inequalities to the conflict graph, as described in Algorithm 1. We finish by using the symmetry-enhanced conflict graph in combination with a two-phase approach and parallel computing to solve unsolved symmetric IP instances arising from covering designs.
Frequency of Level-k Inequalities
Our first experiment demonstrates the prevalence of (unique) level-k inequalities for small values of k and the relative computational time of implementing the different features of our algorithm.
For the instance codbt52, Table 2 displays the average number of level-{1, 2, 3} cuts found at nodes a for different values of |F a 1 | as well as the average group size of stab(F a 1 , G). Our first observation is the large number of level-k inequalities generated. In our computation, we do not add the level-k inequalities to the linear programming relaxation at the child nodes, since adding that many inequalities would quickly overwhelm the linear programming solver. Another interesting observation from Table 2 is that level-k inequalities can be generated at nodes that do not contain any apparent symmetry, as evident by the last row of the table. In some sense, level-k inequalities "look back" in the tree and determine cuts that would have been generated if the branching order had been different. Table 3 shows the breakdown of computation time for different parts of the algorithm, when implemented on the instance codbt52. The table demonstrates the significant computational effort required to general level-k inequalities and how that effort increases with k. The algorithm behavior on codbt52 is quite typical of all of our computational results, both in terms of the number of level-k inequalities generated and the CPU time required to generate them. Thus, in subsequent experiments, we focus on assessing the impact of implicitly employing the level-1 inequalities as edges of a local conflict graph as described in Algorithm 1. Table 3 Timing of the different parts of the implementation for the instance codbt52
Results for small and medium size instances
For our next experiment, we solved the optimization instances in Table 1 both with and without adding the level-1 inequalities to the local conflict graph. Computations were completed on a DellR810 machine with 256G of RAM and E7-4850 Xeon processor, and Table 4 shows the results of this experiment. The table contains the time, number of nodes, and total number of clique inequalities found for both cases for each instance. Table 4 shows that some significant decrease in branch-and-bound tree size can be obtained when using the orbital conflict procedure. In fact, for each of the 18 test instances, the tree size was smaller when using orbital conflict, resulting in a tree that was on average only 58% of the size of the tree without the orbital conflict-induced clique inequalities. However, the results in the table also indicate that the computational effort required for the reduced tree size is significant. Specifically, even though the number of nodes is significantly reduced, the CPU time on average is increased by 7.7% when doing orbital conflict. In six of the Table 5 Results on enumeration instances, with and without OC
Results for large instances
Despite the fact that the CPU times in general increased when using orbital conflict for the smallto-medium sized instances, the significant reduction in tree size led us to believe that performing the computationally costly orbital conflict procedure at the top of the branch and bound tree could help us to solve larger symmetric instances.
With that in mind, we developed the following two-phase approach to tackling difficult, symmetric instances. In Phase I, we do the orbital conflict procedure on the instance as usual, but we prune a feasible node a if the size of stab(F a 1 , G) falls below a certain threshold. These pruned nodes (which we call leaves) are saved to MPS files together with the edges of the conflict graph valid at that node. In Phase II, each one of these MPS files can be solved in parallel using a commercial, state-of-the art MIP solver. The hope is that through the combination of branching, symmetryinduced variable fixing, and enhanced conflict-graph information, solving each of the active leaf node instances will be significantly easier than solving the original problem directly. In addition, each of the active leaf node instances can be solved in parallel. The jobs on Phase II are executed on a High Throughput Computing (HTC) Grid managed by the scheduling software HTCondor (Thain et al. 2005).
To demonstrate the promise of this two-phase approach, we employ it to solve the instance codbt52. The optimization software CPLEX 12.7, tuned to aggressively tackle symmetry, can solve this instance in 14,087 seconds (roughly 4 hours) and 28,817,719 nodes. Note that this in itself is an achievement, as this instance has not been reported solved in the literature. However, the twophase approach we propose can solve this instance much more effectively. In Table 6, we show the computational behavior of a two-phase approach both with and without employing orbital conflict, fathoming all nodes and writing to MPS files once the group group size fell below | stab(F a 1 , G)| ≤ 128. In the table, we shows the CPU time in Phase I, the number of clique inequalities obtained, the number of active leaves that need to be solved at the end of Phase I, the total CPU time for CPLEX to solve all leaf nodes in Phase II, the total wall clock time in Phase II, and the total Wall Time (combining Phase I CPU/Wall Time with Phase II Wall Time). In Phase II, we limit the number of threads that CPLEX can use to 2, so as not to overwhelm the shared resources provided by HTCondor. From the table, we see that significant speedup is possible with this two-phase approach, solving the same instance in only 634 seconds, so we employed it in an effort to solve some heretofore unsolved instances of binary-ternary covering. Table 6 Performance of two-phase method on solving codbt52. Time is measured in seconds. Table 7 shows the results on two other open binary-ternary covering design problems. The optimal solution of codbt43 is known to lie in the interval [44 − 48]. We run our two-phase algorithm with orbital conflict, pruning and saving all nodes a such that stab(F a 1 , G) < 16. In total, we generated 1, 208 such nodes, which were then solved with CPLEX, setting an upper value of 47.1. All of the active leaf node instances were infeasible, so the optimal solution to codbt43 is 48. For the instance codbt71, we employed our two-phase approach with a fathoming group size of 32, resulting in a branch-and-bound tree with 111, 318 nodes left to be evaluated. All of these instances were solved in parallel by CPLEX, using an upper bound value of 47.1 as a cutoff value.
Since all 111, 318 instances were infeasible, our computations have verified that the optimal solution for the instance codbt71 has value 48. To our knowledge the optimal solutions for instances Table 7 Results on two large codbt instances. Time is measured in seconds.
Conclusions
In this work, we introduce a hierarchy of cutting planes for symmetric integer programs that arise from a reinterpretation of symmetry-exploiting branching methods. We show how to implicitly and effectively utilize the cutting planes from level-1 of this hierarchy to populate the conflict graph in the presence of symmetry. Our orbital conflict method outperforms the state-of-the-art solver CPLEX 12.7 for large symmetric instances, proving that augmenting the instance conflict graph via symmetry considerations can be beneficial. For small and medium sized instances, the time spent in the additional group operations required to make use of the deeper symmetry operations appears to outweigh its benefit, at least in our implementation.
Future work consists of refining the orbital conflict algorithm to make it more computationally efficient. For example, we could consider doing fast, partial group operations to compute the symmetries in (10), or not performing the calculations at all nodes of the enumeration tree. Another extension of the work is to make better computational use of the level-k inequalities for k ≥ 2. By complementing the variables in the Equation (7), we obtain a set-covering inequalitȳ x(π(F a 1 ∪ i) \ F a 1 ) ≥ 1.
So these inequalities could be used in a set-covering relaxation for the instance. As seen in Table 2, our procedure can generate a considerable number of k-level cuts for k ≥ 2, so there may be some benefit to this approach. | v2 |
2011-03-31T19:36:53.000Z | 2011-03-31T00:00:00.000Z | 117927669 | s2orc/train | The nerve of a crossed module
We give an explicit description of the nerve of a crossed module of categories.
Introduction
Let T be a topological space. We say that T has type k if all the homotopy groups π_n(T) are zero for n > k. It is known that the categories of groups and of 1-types are equivalent. In [EM45] Eilenberg and Maclane constructed for every group G a simplicial set BG such that the topological realization |BG| of BG is the corresponding 1-type. In fact they gave three different descriptions of BG, called the homogeneous, non-homogeneous and matrix descriptions. They used these descriptions to obtain the explicit chain complex that computes the cohomology groups of |BG|. This was the birth of homology theory for algebraic objects.
It turns out that the non-homogeneous description of BG is the most useful one. This description was used by Hochschild in [Hoc46] to define the Hochschild complex for an arbitrary associative algebra A, which coincides with the complex constructed by Eilenberg and Maclane when A is a group algebra. It also inspired the definition of the nerve of a small category and the definition of Barr cohomology. In fact, it is difficult to imagine modern mathematics without the non-homogeneous description of BG.
In [Whi49] Whitehead showed that 2-types can be described by crossed modules of groups. Blakers constructed in [Bla48] for every crossed module of groups (A, G) the complex NB_*(A, G) whose geometric realization is the 2-type corresponding to (A, G). In fact he did this for arbitrary crossed complexes of groups, which describe k-types for any k ∈ N. In the case k = 1 his description coincides with the matrix description of Eilenberg-Maclane for BG.
In this article we give an explicit description of a simplicial set N(A, C) for a crossed monoid (A, C) in terms of certain matrices. This simplicial set is isomorphic to the one constructed by Blakers in case (A, C) is a crossed module of groups (A, G). The difference is that the elements of N_k(A, C) are described as collections of elements of A and C without any relations between them, whereas the elements of NB_k(A, G) are described as collections of elements of A and G that must satisfy certain conditions.
The paper is organized as follows. In Section 2 we recall the definition of simplicial sets and their elementary properties. Section 3 contains the main result of the paper: we describe the simplicial set N(A, C) for an arbitrary crossed monoid (A, C). In Theorem 3.2 we prove that N(A, C) is indeed a simplicial set.
In Section 4 we prove that N (A, C) is 4-coskeletal. Moreover, in case (A, C) is a crossed module of groups it turns out that N (A, C) is 3-coskeletal.
In Section 5 we check that N (A, C) is a Kan simplicial set if (A, C) is a crossed module of groups. We also check that the homotopy groups of (A, C) and N (A, C) are isomorphic in this case.
In the next version of this paper we shall give a comparison between our construction and the construction of Blakers [Bla48] and the construction of Moerdijk and Svensson [MS93].
Simplicial set
For the purposes of this paper a simplicial set is a sequence of sets X_n, n ≥ 0, with maps d_j : X_n → X_{n−1} and s_j : X_n → X_{n+1}, 0 ≤ j ≤ n, satisfying the usual simplicial identities; for i < j these read

d_i d_j = d_{j−1} d_i,   d_i s_j = s_{j−1} d_i,   s_i s_j = s_{j+1} s_i,

together with d_j s_j = d_{j+1} s_j = id and d_i s_j = s_j d_{i−1} for i > j + 1.

An n-truncated simplicial set is defined as a sequence of sets X_0, . . . , X_n with maps d_j : X_k → X_{k−1} and s_j : X_k → X_{k+1} for all k and j for which they make sense, satisfying the same identities as the same-named maps of a simplicial set.
We denote the category of simplicial sets by Δ^op Sets and the category of n-truncated simplicial sets by Δ^op_n Sets. Then we have an obvious forgetful functor tr_n : Δ^op Sets → Δ^op_n Sets. This functor has a right adjoint cosk_n : Δ^op_n Sets → Δ^op Sets. The composite functor cosk_n tr_n will be denoted by Cosk_n. Thus Cosk_n is a monad on the category of simplicial sets. We say that X is n-coskeletal if the unit map η_X : X → Cosk_n X is an isomorphism.
For every simplicial set X_• we define Δ_n X as the simplicial kernel of the maps d_j : X_{n−1} → X_{n−2}, 0 ≤ j ≤ n − 1. In other words, Δ_n X is the collection of sequences (x_0, . . . , x_n), x_j ∈ X_{n−1}, such that d_j x_k = d_{k−1} x_j for all j < k. We have the natural boundary map b_n : X_n → Δ_n X defined by b_n(x) = (d_0 x, d_1 x, . . . , d_n x).

Proposition 2.1. Let X be a simplicial set. Then X is n-coskeletal if and only if for every N > n the map b_N is a bijection.
Proof. Note that for every N > n the canonical map is an isomorphism. Thus if X is n-coskeletal it is also (N − 1)-coskeletal. Therefore the maps X_N → (Cosk_{N−1} X)_N are isomorphisms for all N > n. In Section 2.1 of [Dus02] it is shown that these maps coincide with b_N. This shows that the maps b_N are isomorphisms for all N > n. Now suppose that all the maps b_N are isomorphisms. The map η_X : X → Cosk_n X is an isomorphism in all degrees up to n by definition of the functor Cosk_n. We proceed further by induction on degree. Suppose we know that η_X : X → Cosk_n X is an isomorphism in all degrees up to N ≥ n. Therefore the map τ induced by the N-th component of η_X is an isomorphism. But now the set on the right-hand side of (7) is (Cosk_N Cosk_n X)_{N+1} ≅ (Cosk_n X)_{N+1}. As the (N + 1)-st component of η_X decomposes as the composite of τ and b_{N+1}, we get that it is an isomorphism.
Define the set Λ^n_l X of l-horns in dimension n to be the collection of n-tuples (x_0, . . . , x_{l−1}, ∅, x_{l+1}, . . . , x_n) of elements of X_{n−1} such that d_j x_k = d_{k−1} x_j for all j < k different from l. There are natural maps b^n_l : X_n → Λ^n_l X. A simplicial set X is said to be a Kan complex if the maps b^n_l are surjective for all 0 ≤ l ≤ n. We now define the based homotopy groups π_n(X, x) for a Kan complex X. We follow the exposition of [Smi01], pages 27-28. Let x ∈ X_0. Then all the degeneracies s_{i_n} · · · s_{i_1}(x) of x in degree n are mutually equal and will be denoted by the same letter x. We define π_n(X, x) to be the set { y ∈ X_n | b_n(y) = (x, . . . , x) } factorized by the equivalence relation ∼ defined there. That ∼ is indeed an equivalence relation for a Kan set is shown at the end of page 27 of [Smi01]. Now we define a multiplication on π_n(X, x) as follows. Let [y], [z] ∈ π_n(X, x) be equivalence classes containing y and z, respectively. Then the tuple (x, . . . , x, y, ∅, z) is an element of Λ^{n+1}_n X. Therefore there is an element w ∈ X_{n+1} such that b^{n+1}_n(w) = (x, . . . , x, y, ∅, z). We define [y][z] = [d_n(w)]. Again it is shown in [Smi01] that this product is well defined and associative, that [x] is the neutral element, and that for n ≥ 2 the product is commutative.
There is a connection between the coskeletal and Kan conditions for a simplicial set. To see this we start with Proposition 2.2. Let (x_0, . . . , ∅, . . . , x_n) ∈ Λ^n_l X. Then for 0 ≤ j ≤ l − 1 < k ≤ n − 1, and for l − 1 < j < k ≤ n − 1, the required face identities hold. Thus we have a well-defined map β^n_l : Λ^n_l X → Δ_{n−1} X given by (8). As a simple corollary of Proposition 2.2 we get the following. Since b_{n−1} is surjective there is x_l ∈ X_{n−1} such that d_j x_l = d_{l−1} x_j for 0 ≤ j ≤ l − 1 and d_j x_l = d_l x_{j+1} for l ≤ j ≤ n − 1. Therefore (x_0, . . . , x_n) ∈ Δ_n X and since b_n is surjective there is z ∈ X_n such that d_j z = x_j, 0 ≤ j ≤ n.
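To make the kernel and horn conditions above concrete, the following sketch checks them for tuples of abstract simplices. The representation of simplices and the face callable d(j, x) are placeholders supplied by the reader, not part of the construction in this paper.

```python
def in_simplicial_kernel(faces, d):
    """Check d_j x_k == d_{k-1} x_j for all j < k.

    `faces` is the candidate tuple (x_0, ..., x_n) of (n-1)-simplices and
    `d(j, x)` returns the j-th face of x; both stand in for a concrete
    simplicial set.
    """
    n = len(faces) - 1
    return all(d(j, faces[k]) == d(k - 1, faces[j])
               for k in range(1, n + 1) for j in range(k))


def in_horn(faces, l, d):
    """Check the same compatibilities while ignoring the missing l-th slot,
    i.e. membership in the l-horn; faces[l] is expected to be None."""
    n = len(faces) - 1
    return all(d(j, faces[k]) == d(k - 1, faces[j])
               for k in range(1, n + 1) for j in range(k)
               if j != l and k != l)
```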
Category crossed monoids
Let C be a small category. We denote by C_0 the set of objects and by C_1 the set of morphisms of C. We write s(α) for the source and t(α) for the target of a morphism α ∈ C_1. If F : C → Mon is a contravariant functor from C to the category of monoids, then for α ∈ C(s, t) and m ∈ F(t) we write m^α for the result of applying F(α) to m.
A crossed monoid over C is a contravariant functor A : C → Mon together with a collection of functions ∂_t, subject to conditions stated for all s, t ∈ C_0, α ∈ C(s, t), a, b ∈ A(t). We write e_x for the unit of A(x), x ∈ C_0. A morphism from a crossed monoid (A, C) to a crossed monoid (B, C̄) is a pair (f, F), where F : C → C̄ is a functor and f is a collection of homomorphisms, compatible for all s, t ∈ C_0, α ∈ C(s, t), a ∈ A(t). We denote the category of crossed monoids over small categories by XMon. Note that XMon contains a full subcategory XMod of crossed modules, whose objects (A, C) are such that C is a groupoid and A(t) is a group for every t ∈ C_0. Now we describe the nerve functor N : XMon → Δ^op Sets into the category of simplicial sets. Define N_0(A, C) = C_0. For n ≥ 1 we define N_n(A, C) to be the set of n × k upper triangular matrices M = (m_ij)_{i≤j} for which there is a sequence x(M) = (x_0(M), . . . , x_n(M)) of objects of C compatible with the entries of M. We identify N_1(A, C) with C_1. We extend the function x to N_0(A, C) = C_0 by x(p) := (p).
3. shift all elements below the (j + 1)-st row one position to the right.
Note that in the case j = 0 the first step is skipped, and in the case j = n the last step is skipped.

3. If 1 ≤ j ≤ n − 1:
(a) at every row above the j-th row we multiply the elements at the j-th and (j + 1)-st places;
(b) shift all the elements at the j-th row and below one position to the left;
(c) replace the j-th and (j + 1)-st rows with a single combined row.

Theorem 3.2. N(A, C) with the maps s_j, d_j defined above is a simplicial set.
Proof. We have to check that the maps d_j and s_j satisfy the simplicial identities. For convenience we divide them into two groups. Let M ∈ N_n(A, C). In the first group we put the identities involving adjacent indices; the rest of the identities, where j < k − 1, form the second group.
Note that the effect of action of all above maps on the i-th row of the matrix M for i < j is the same as the effect of action of the same named maps on the nerve of A (x i (M )). Therefore the equality of the matrices above the j-th row follows from the standard description of the nerve of monoid. Now the matrices s j+1 s j (M ) = s 2 j (M ) are equal strictly under the (j + 1)st as this part is obtained by shifting the part of M under the (j − 1)-st row two positions in the south-east direction in both of them. Let x = x j (M ). The j-th row of s j+1 s j (M ) is obtained from the sequence (∅, . . . , ∅, 1 x , e x , . . . , e x ) by inserting e x after 1 x and thus coincides with the j-th row of s 2 j (M ). Since of the appropriate length. The (j + 1)-st row of s j s j (M ) is equal to the j-th row of s j (M ) and thus is the same sequence. This shows that s j+1 s j = s 2 j (M ). Now for the rest of matrices in the first group the part strictly bellow the j-th row is obtained by shifting the elements of M back and forth. It is not difficult to see that these shifts bring the same-named elements to the same positions in all four pairs of matrices.
Similarly, the parts strictly below the j-th row in the matrices of the second group are obtained by applying the map with the greater index and moving elements around. Again the same elements end up in the same places.
Thus we have only to check that the j-th rows are equal in every pair of matrices.
We start with the matrices of the second group. Thus from now on k −1 > j.
Now the j-th row of d k−1 d j (M ) is obtained from the j-th row of d j (M ) by multiplying elements in the (k − 1)-st and k-th columns: Thus the j-th rows of d j d k (M ) and d k−1 d j (M ) are the same outside the (k − 1)th column, where the most complicated looking elements are. By (11) we get where e's are in the (k + 1)-st column. Since ∂ e xj+1 = 1 xj+1 it is immediate that the corresponding sequence of η's has the form η j+1,j+1 , . . . , η j+1,k−1 , η j+1,k , η j+1,k , η j+1,k+1 , . . . , η j+1,n , that is it is obtained from the sequence of η's for M by duplicating η j+1,k . Since These two elements are equal since by (10). Now let l > j. We will compute the element at the place (j, l) in To compute the corresponding element in d j d j+1 (M ) we have to find Since by (10) we get The map is a bijection and we will denote the inverse of λ n by µ n . The following picture explains how to construct µ n M 0 , M n , m ∈ N n (A, C) for where m 2 1,n−1 is the element of M 2 at the upper-right corner. We get a commutative triangle If 2 ≤ j ≤ n − 2, then m ′ = m 2 1,n−1 . If j = 2 then Note that in the first step we used n−1 ≥ 4 which is equivalent to our assumption n ≥ 5. Combining with (20) we get We have to show that this product is equal to m 1 1,n−1 . We have Theorem 4.2. Let (A, C) be a crossed monoid such that • for every object t ∈ C the monoid A (t) has left and right cancellation properties; • for every morphism γ ∈ C the map a → a γ from A (t (γ)) to A (s (γ)) is injecive. Proof. For every 0 ≤ j ≤ 3 and M * ∈ 3 j N (A, C) we will construct M j ∈ N 2 (A, C) that extends M * to M 0 , M 1 , M 2 , M 3 ∈ Im b 3 . The diagonal elements of M j are determined from the equalities The element at the right upper corner of M j is uniquely determined from (30). The care should be taken for j = 3: in this case we replace m 3 22 in (30) by m 0 11 . We automatically get that Thus we have only to check that Below is the required computation. For j = 0 we have: or in other terms For j = 2 we have to check that w 21 w −1 20 = w −1 24 w 23 m22∂(m23)m33 . We have Note that this time we can not use m 23 = w 04 , instead we will use m 23 = w 40 . We get Now we can compute homotopy groups of N (A, C) for a crossed module (A, C). Let t ∈ C 0 . Then π (N (A, C) , t) is given by the classes [g] of elements g ∈ C 1 such that s (g) = t (g) = t, that is g ∈ C (t, t). Two elements g 1 , g 2 ∈ C (t, t) belong to the same class if and only if there is an element M ∈ N 2 (A, C) such that b 2 (M ) = (1 t , g 1 , g 2 ). This implies m 11 = 1 t and m 22 = g 2 . Therefore g 1 = ∂ (m 12 ) m 22 = ∂ (m 12 ) g 2 . As the element m 12 ∈ A (t) can be chosen arbitrary we see that g 1 and g 2 are in the same class if and only if g 1 Im∂ t = g 2 Im∂ t . Note that Im∂ t is a normal subgroup of C (t, t) as for all g ∈ C (t, t) and a ∈ A (t) we have g −1 ∂ (a) g = ∂ (a g ). Thus π 1 can be identified with the quotient group C (t, t) ∂ (A (t)) as a set. Now we show that the composition law on π 1 (N (A, C) , t) coincides with the composition law of C (t, t) ∂ (A (t)) . Let [g 1 ], [g 2 ] ∈ π 1 (N (A, C) , t). Then M := g 1 e t g 2 ∈ N 2 (A, C) is the preimage of (g 1 , ∅, g 2 ) ∈ As Ker (∂ t ) is a commutative group we see that π 2 and Ker (∂ t ) are isomorphic as groups.
Since N (A, C) is a 3-coskeletal set all other homotopy groups of N (A, C) are trivial. Thus N (A, C) is a 2-type. Proof. We have to show that C is a groupoid and that for every t ∈ C 0 the monoid A (t) is a group. Let g ∈ C (s, t). Then (g, 1 s , ∅) and (∅, 1 t , g) are elements of From the explicit form for d 1 M and d 3 M we see that m 15 is the inverse element to a in A (t). | v2 |
2016-06-17T08:03:42.307Z | 2014-06-26T00:00:00.000Z | 4593979 | s2orc/train | Slow gait speed – an indicator of lower cerebral vasoreactivity in type 2 diabetes mellitus
Objective: Gait speed is an important predictor of health that is negatively affected by aging and type 2 diabetes. Diabetes has been linked to reduced vasoreactivity, i.e., the capacity to regulate cerebral blood flow in response to CO2 challenges. This study aimed to determine the relationship between cerebral vasoreactivity and gait speed in older adults with and without diabetes. Research design and methods: We studied 61 adults with diabetes (65 ± 8 years) and 67 without diabetes (67 ± 9 years) but with similar distribution of cardiovascular risk factors. Preferred gait speed was calculated from a 75 m walk. Global and regional perfusion, vasoreactivity and vasodilation reserve were measured using 3-D continuous arterial spin labeling MRI at 3 Tesla during normo-, hyper- and hypocapnia and normalized for end-tidal CO2. Results: Diabetic participants had slower gait speed as compared to non-diabetic participants (1.05 ± 0.15 m/s vs. 1.14 ± 0.14 m/s, p < 0.001). Lower global vasoreactivity (r2adj = 0.13, p = 0.007), or lower global vasodilation reserve (r2adj = 0.33, p < 0.001), was associated with slower walking in the diabetic group independently of age, BMI and hematocrit concentration. For every 1 mL/100 g/min/mmHg less vasodilation reserve, for example, gait speed was 0.05 m/s slower. Similar relationships between vasodilation reserve and gait speed were also observed regionally within the cerebellum, frontal, temporal, parietal, and occipital lobes (r2adj = 0.27–0.33, p < 0.0001). In contrast, vasoreactivity outcomes were not associated with walking speed in non-diabetic participants, despite similar vasoreactivity ranges across groups. Conclusion: In the diabetic group only, lower global vasoreactivity was associated with slower walking speed. Slower walking in older diabetic adults may thus hallmark reduced vasomotor reserve and thus the inability to increase perfusion in response to greater metabolic demands during walking.
INTRODUCTION
Gait speed is predictive of mobility, morbidity, and mortality in older adults (Guralnik et al., 1995;Studenski et al., 2011). Vasoreactivity is an important cerebrovascular control mechanism used to maintain brain perfusion during increased metabolic demands (Bullock et al., 1985;Schroeder, 1988) such as walking, and can be clinically quantified by the vasodilation responses to hypercapnia (Low et al., 1999;Lavi et al., 2006). In healthy older adults, blood flow velocities in the middle cerebral artery territory, which supplies numerous brain regions involved in locomotor control, increased proportionally to walking speed (Novak et al., 2007). In a population-based study comprising community-dwelling older adults both with and without risk factors for falls (e.g., diabetes, stroke, use of walking aids, etc.), slower walkers exhibited lower vasoreactivity within the middle cerebral artery territory as measured by Transcranial Doppler ultrasound (Sorond et al., 2010).
Slowing of gait may thus reflect an early manifestation of underlying abnormalities in vasoreactivity and perfusion adaptation to the metabolic demands of walking. However, the relationship between brain vascular health and walking has not yet been established.
Type 2 diabetes accelerates brain aging (Biessels et al., 2002;Last et al., 2007) and has also been linked with microvascular disease and altered cerebral blood flow regulation (Allet et al., 2008;Várkuti et al., 2011) and vasoreactivity (Novak et al., 2011). Diabetes is associated with reduced gait speed and related functional decline (Volpato et al., 2010). In older adults, gait characteristics have been linked to gray matter atrophy and white matter hyperintensities (Rosano et al., 2007a,b;Callisaya et al., 2013). Moreover, gray matter atrophy appears to have a stronger effect on locomotor control in those with type 2 diabetes as compared those without, suggesting that the control of walking may be more dependent upon supraspinal control within this population (Manor et al., 2012). This study therefore aimed to determine the relationship between vasoreactivity and gait speed in older adults with and without type 2 diabetes. We hypothesized that lower global and regional vasoreactivity would be associated with slower gait speed in older adults, particularly in those with type 2 diabetes.
PARTICIPANTS
This secondary analysis was completed on prospectively collected data from community-dwelling older adults originally recruited via local advertisement. We analyzed records from three completed projects spanning March 2003-July 2012: Cerebral vasoregulation in the elderly with stroke (March 2003-April 2005); Cerebral perfusion and cognitive decline in type 2 diabetes (January 2006-December 2009); and Cerebromicrovascular disease in elderly with diabetes (August 2009-July 2012). Grant numbers are provided in the study funding section.
For the present analysis, we excluded an additional 43 stroke records that met the exclusion criteria for the current analyses, 34 records that did not have complete datasets, and 29 records from subjects who completed more than one of the above-mentioned studies. In each of the latter cases, the most recent record was kept. Thus, records from a total of 128 subjects were included in the present analysis.
Participants were originally screened by medical history and physical, neurological, and laboratory examinations. Research protocols were conducted in accordance with the ethical standards of the Beth Israel Deaconess Medical Center (BIDMC) Clinical Research Center and all participants signed an informed consent, as approved by the Institutional Review board at BIDMC.
The diabetic group included men and women aged 50-85 years with a physician diagnosis and treatment of type 2 diabetes mellitus with oral agents and/or combinations with insulin for at least one year. Diabetes treatments included insulin, oral glucose-control agents (sulfonylurea, second generation agents), their combinations and diet. Non-diabetic participants had no history of metabolic disorder and were recruited to match the age and gender characteristics of the diabetic group (Table 1).
Exclusion criteria for the current analysis were history of stroke, myocardial infarction, clinically significant arrhythmia or other cardiac disease, nephropathy, severe hypertension (i.e., systolic BP > 200, diastolic BP > 110 mm Hg or the use of three or more antihypertensive medications), seizure disorder, kidney or liver transplant, renal disease, any other neurological or systemic disorder (aside from peripheral neuropathy), and current recreational drug or alcohol abuse. MRI exclusion criteria were incompatible metal implants, pacemakers, arterial stents, claustrophobia and morbid obesity (i.e., BMI > 40).
PROTOCOL
Participants completed medical history, autonomic symptoms, and physical activity questionnaires. A study physician completed physical, neurological, and ophthalmologic examinations. None of the study participants had active foot ulcers during the study. A study nurse completed a fasting blood draw and recorded vital signs, anthropometric and adiposity measures. Participants also completed a comprehensive cognitive exam, autonomic testing, perfusion MRI of the brain and a gait assessment. For this study, we focused analyses on gait and MRI-based measures of cerebral perfusion and vasoreactivity.
Walking test
A 12-min walk was completed along a 75 m course on an 80 m × 4 m indoor hallway. Participants were instructed to walk at preferred speed (i.e., a pace they deemed as comfortable or normal), which has excellent test-retest reliability, even in those with severe diabetic complications (Steffen et al., 2002;Manor et al., 2008). The time taken to complete each 75 m length and total distance were recorded. For the present analysis, we only examined data from the first hallway length (i.e., the first 75 m of the trial) in order to minimize potential confounders of turning and fatigue. Assistive devices were not used for ambulation. A rating of perceived exertion was asked of the participant before the start of the walk and once the walk was completed. Rating of perceived exertion ranged from 0 (no exertion) to 10 (very, very strong exertion).
Magnetic resonance imaging (MRI)
Brain imaging was completed in a 3T GE HDx MRI scanner (GE Medical Systems, Milwaukee, WI, USA) within the Center for Advanced MR Imaging at the BIDMC. 3D spiral continuous arterial spin labeling (CASL) MRI was used to quantify cerebral perfusion (Detre et al., 1998; Floyd et al., 2003) during normocapnia, hypocapnia, and hypercapnia. Vasoreactivity was assessed as perfusion responses to vasodilation during hypercapnia and vasoconstriction to hypocapnia (Kety and Schmidt, 1948), as a noninvasive reliable method of assessing the integrity of cerebral vasculature (Fujishima et al., 1971; Yen et al., 2002). Specifically, two-minute scans were acquired during normal breathing (i.e., baseline normocapnia; end tidal CO2 concentration 33-38 mmHg), hyperventilation (i.e., hypocapnia; participants hyperventilated to reduce CO2 to a target of 25 mmHg), and rebreathing (i.e., hypercapnia; participants breathed a mixture of 5% CO2 and 95% air to increase CO2 to a target of 45 mmHg).
Respiratory rate, tidal volume and end-tidal CO 2 values were measured during each scan using an infrared end-tidal volume gas monitor (Capnomac Ultima, General Electric, Fairfield, CT, USA) attached to a face-mask. Blood pressure and heart rate were also recorded at one-minute intervals using an upper-arm automatic blood pressure cuff and finger photoplethysmogram.
Perfusion images were acquired using a custom 3D CASL sequence (TR/TE = 10.476/2.46 ms, label duration = 1.45 s, post-label delay = 1.525 s, with 64 × 64 matrix in the axial plane and 40 slices with thickness = 4.5 mm, seven spiral interleaves and the bandwidth = 125 kHz). Images were averaged over each condition to maximize signal-to-noise ratio.
Gait speed
Average gait speed (m/s) was computed from the first 75 m of walking by dividing distance by time. This valid and reliable outcome predicts future health status and functional decline in numerous older adult populations (Quach et al., 2011;Studenski et al., 2011).
Image analysis
A rigid-body model (Collignon et al., 1995;Wells et al., 1996) was used for registration of the MP-RAGE image on CASL images using the Statistical Parametric Mapping software package (SPM, Wellcome Department of Imaging Neuroscience, University College, London, UK). This "normalization" module was employed to stereotactically normalize structural images to a standard space defined by ideal template image(s). The registered perfusion image was then overlaid on the segmented anatomical regions to obtain regional perfusion measurements. Generated maps of gray matter and white matter were segmented based upon the LONI Probabilistic Brain Atlas (Shattuck et al., 2008) and was used to calculate global volumes. All image segmentations were completed using Interactive Data Language (IDL, Research Systems, Boulder, CO, USA) and MATLAB (MathWorks, Natick, MA, USA) software.
Perfusion analyses
Perfusion and vasoreactivity were calculated in five regions-of-interest: the cerebellum, frontal, temporal, parietal, and occipital lobe. Within each region, perfusion was normalized for tissue volume and thus expressed in mL/100 g/min. Four perfusion measures were calculated for each region: baseline perfusion during normal breathing, cerebral vasoreactivity, vasodilation reserve, and vasoconstriction reserve. Each outcome was computed globally and within each brain region-of-interest. Perfusion values were normalized to each subject's average CO2 level during this condition. Vasoreactivity measures were calculated as previously described (Last et al., 2007; Hajjar et al., 2010; Novak et al., 2011). Briefly, vasoreactivity was defined as the slope of the best-fit line produced by linear regression of perfusion and CO2 values across the three conditions (i.e., normal breathing, CO2 rebreathing, and hyperventilation). Vasodilation reserve was defined as the increase in perfusion from baseline to the rebreathing condition, normalized to the change in CO2 between these two conditions. Vasoconstriction reserve was defined as the decrease in perfusion from baseline to the hyperventilation condition, normalized to the change in CO2 between these two conditions.
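As a rough sketch of how these three measures can be computed from the per-condition values just described, the snippet below uses NumPy; the condition names, dictionary layout, and units are assumptions of this example, not the study's actual processing pipeline.

```python
import numpy as np


def vasoreactivity_measures(perf, co2):
    """Compute vasoreactivity, vasodilation reserve and vasoconstriction reserve.

    `perf` and `co2` are dicts keyed by 'baseline', 'rebreathing' and
    'hyperventilation', holding regional perfusion (mL/100 g/min) and
    end-tidal CO2 (mmHg).  Keys and units are placeholder conventions.
    """
    conditions = ["baseline", "rebreathing", "hyperventilation"]
    x = np.array([co2[c] for c in conditions], dtype=float)
    y = np.array([perf[c] for c in conditions], dtype=float)

    # Vasoreactivity: slope of the best-fit line of perfusion on CO2
    # across the three conditions.
    slope, _intercept = np.polyfit(x, y, 1)

    # Vasodilation reserve: perfusion increase from baseline to rebreathing,
    # normalized to the CO2 change between the two conditions.
    dilation = (perf["rebreathing"] - perf["baseline"]) / (co2["rebreathing"] - co2["baseline"])

    # Vasoconstriction reserve: perfusion decrease from baseline to
    # hyperventilation, normalized to the corresponding CO2 change.
    constriction = (perf["baseline"] - perf["hyperventilation"]) / (co2["baseline"] - co2["hyperventilation"])

    return {"vasoreactivity": slope,
            "vasodilation_reserve": dilation,
            "vasoconstriction_reserve": constriction}
```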
STATISTICAL ANALYSIS
All analyses were performed using JMP software (SAS Institute, Cary, NC, USA). Descriptive statistics were used to summarize all variables. Outcomes have been expressed as either the mean ± SD or categorical (yes/no) for each group. Student's t, Fisher's Exact and Chi-squared tests were used to compare group demographics.
We examined the effects of diabetes on both perfusion measures and gait speed using ANCOVA. For perfusion measures, the model effect was group and covariates included age, hematocrit (Hct) concentration and hypertension. Hct was included because it is inversely correlated with blood viscosity and is higher in men than women (Wells and Merrill, 1962;Kameneva et al., 1999;Zeng et al., 2000). Hypertension was included as a covariate because it affects small blood vessels of the body and may therefore alter cerebral blood flow regulation (Alexander, 1995;Hajjar et al., 2010). For gait speed, the model effect was group and covariates included age, gender and BMI.
Linear least-square regression analyses were used to test the hypotheses that (1) those with lower vasoreactivity demonstrate slower preferred gait speed, and (2) this association between vasoreactivity and gait speed is stronger (as reflected in the correlation coefficient, r 2 adj ) in older adults with diabetes as compared to those without diabetes. The dependent variable was gait speed. Model effects included perfusion outcome, group (non-diabetic, diabetic), and their interaction. Separate models were performed for each global and regional perfusion and vasoreactivity outcome. Age, BMI, and Hct concentration were included as covariates. Significance level was set to p = 0.05 for each global perfusion and vasoreactivity outcome. The Bonferroni-adjusted significance level for multiple comparisons (p = 0.01) was used to determine significance of models examining outcomes within each of the five brain regions-of-interest.
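A minimal Python analogue of one such model (the study itself used JMP) might look as follows; the data frame, file name, and column names are hypothetical stand-ins for the variables described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant; columns mirror the described variables.
df = pd.read_csv("gait_perfusion.csv")  # gait_speed, vasoreactivity, group, age, bmi, hct

# Gait speed regressed on a perfusion outcome, group, and their interaction,
# adjusting for age, BMI and hematocrit (one model per perfusion outcome).
model = smf.ols("gait_speed ~ vasoreactivity * group + age + bmi + hct", data=df).fit()
print(model.summary())
```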
PARTICIPANTS
Groups were matched by age and gender and had a similar cardiovascular risk factors (e.g., blood pressure, triglycerides, cardiovascular disease history), yet the diabetic group had higher BMI (p < 0.0001). The prevalence of hypertension and peripheral neuropathy was also higher in the diabetic group as compared to the non-diabetic group (62% vs. 30%, p < 0.001 and 51% vs. 18%, p < 0.001, respectively). Participants with diabetes had greater HbA1c and serum glucose levels, but lower total cholesterol as compared to the non-diabetic group. Blood Hct concentration was similar between groups, but overall, higher in males as compared to females (42% vs. 38%, p < 0.001). Groups did not differ in global gray matter, white matter or white matter hyperintensity volumes (see Table 1).
Baseline perfusion and cerebral vasoreactivity
The diabetic and non-diabetic groups had similar global and regional perfusion at baseline after normalizing for baseline CO 2 levels and adjusting for age, Hct concentration and the presence of hypertension. Global and regional vasoreactivity, as well as vasodilation and vasoconstriction reserve, were also similar between groups ( Table 1).
THE EFFECTS OF DIABETES ON GAIT SPEED
The diabetic group had slower preferred gait speed as compared to the non-diabetic group (1.05 ± 0.15 m/s vs. 1.14 ± 0.14 m/s, p < 0.001; Table 1). This group difference remained significant (p = 0.007) after adjusting for age, gender, and BMI.
Across all participants, those with higher BMI had slower gait speed (r 2 adj = 0.04, p = 0.01). Specifically, within the diabetic group, those with higher fasting glucose had slower gait speed (r 2 adj = 0.13, p = 0.003). Gait speed was not correlated with the participant's rating of perceived exertion, HbA1c levels or diabetes diagnosis duration. The diabetic group had a higher change in rating of perceived exertion (i.e., difference from the start of walk from the end of the walk) compared to the non-diabetic group (2.17 ± 2.13 vs. 1.49 ± 1.43, p = 0.039).
Cerebral vasoreactivity
Least square models revealed that global vasoreactivity was related to gait speed, but that this relationship was dependent upon group (F 1,96 = 5.48, p = 0.024). This group by vasoreactivity interaction was independent of age, BMI, and Hct levels. Post hoc testing indicated that within the diabetic group, those with lower global vasoreactivity walked more slowly (r 2 adj = 0.13, p = 0.007; Figures 1A,B). In the non-diabetic group, however, global vasoreactivity was not correlated with gait speed (Figure 1C). A trend towards a similar interaction was also observed between frontal lobe vasoreactivity and group (F 1,95 = 4.32, p = 0.04); that is, in the diabetic group only, those with lower frontal lobe vasoreactivity tended to walk slower (r 2 adj = 0.13, p = 0.007). Yet, this interaction was not significant based upon the Bonferroni-adjusted significance level (p = 0.01).
Vasodilation reserve
Least square models revealed a significant relationship between global vasodilation reserve and gait speed, but that this relationship was also dependent upon group (F 1,97 = 12, p < 0.001). This significant interaction between group and vasodilation reserve was independent of age, BMI, and Hct levels. Post-hoc testing revealed that within the diabetic group only, those with lower global vasodilation reserve walked more slowly (r 2 adj = 0.33, p < 0.0001; Figure 2A).
Vasoconstriction reserve
Global and regional vasoconstriction was not related to gait speed in either group.
Baseline perfusion
Global or regional baseline perfusion was not related to gait speed within either group.
Additional covariates
Secondary analyses were performed to determine if within the diabetic group, the observed relationships between cerebral blood flow regulation outcomes and gait speed were influenced by the participant's height, weight, rating of perceived exertion, the burden of white matter hyperintensities, or the prevalence of hypertension or peripheral neuropathy. In each case, relationships between cerebral blood flow regulation and gait speed remained significant after adjusting for potential covariance associated with these factors.
DISCUSSION
This study has shown that within the diabetic group, those with lower global vasoreactivity walked more slowly. Our results further indicate that within this group, vasodilation reserve, or the capacity to increase cerebral perfusion specifically in response to hypercapnia, was linked to gait speed, which is an overall measure of health in older adults. This relationship was observed both globally and within each brain region-of-interest (i.e., cerebellum, frontal lobe, temporal lobe, parietal lobe, and occipital lobe). Specifically, for every 1 mL/100 g/min/mmHg less global vasodilation reserve, gait speed was 0.05 m/s slower in the diabetic group. These relationships were independent of age, BMI, Hct, and additional covariates (i.e., height, weight, rating of perceived exertion, white matter hyperintensities, and the prevalence of hypertension or peripheral neuropathy). Both groups presented with average walking speeds that were slower than published norms; i.e., 1.2-1.4 m/s for healthy adults over 50 years of age (Bohannon, 1997). Diabetic participants walked 0.09 ± 0.15 m/s more slowly than those without diabetes, which reflects a clinically significant difference between groups (Kwon et al., 2009). In the diabetic group, walking speed was correlated with fasting glucose levels, but not with diabetes duration or HbA1c. Furthermore, as can be observed in Figure 2A, several participants with diabetes that walked the slowest appeared to have abnormal responses to the hypercapnia condition (i.e., no change or decreased perfusion). For these individuals, this response may function as a compensatory response to ensure adequate perfusion even during resting conditions (Novak et al., 2006).
Previous research in older adults has linked slow gait speed to impaired "neurovascular coupling," or the change in cerebral blood flow in response to the performance of a cognitive task (Girouard and Iadecola, 2006;Iadecola and Nedergaard, 2007;Sorond et al., 2011). For example, Sorond et al. (2011) investigated the association between gait speed and neurovascular coupling as quantified by the change in blood flow velocity within the middle cerebral artery (using Transcranial Doppler Ultrasonography) in response to performance of the n-back cognitive task. Those with impaired neurovascular coupling walked more slowly. They also reported an interaction between neurovascular coupling and white matter hyperintensity burden, such that the presence of white matter hyperintensities was associated with reduced gait speed, except in those individuals with relatively strong neurovascular coupling. Previous work by Novak et al. (2007Novak et al. ( , 2011 further demonstrated that lower vasoreactivity is linked to reduced gait speed independently of white matter hyperintensities specifically within older adults with type 2 diabetes. Therefore, neurovascular coupling appears to one mechanism that links vascular changes to neuronal activity, and is therefore essential for the preservation of functional outcomes. This notion is in line with the "brain reserve" hypothesis (Bullock et al., 1985;Stern, 2002) and may help explain the results of the current study. In other words, while diabetes was associated with reduced gait speed overall, those diabetic participants with greater vasoreactivity (or vasodilation reserve) tended to walk at similar speeds as non-diabetic controls.
Walking is a complex act that requires the coordination of locomotor, cardiovascular, and autonomic systems. The lack of relationship between cerebral vasoreactivity and gait speed in those without diabetes is supported by the notion that gait is largely autonomous and governed primarily by supraspinal elements of the motor control system under normal or healthy conditions (Stoffregen et al., 2000; Manor et al., 2010; Kloter et al., 2011). In those with diabetes, however, the capacity to modulate cerebral perfusion between conditions of hyper- and hypocapnia (i.e., vasoreactivity, a widely used prognosis of metabolic cerebral blood flow regulation) was associated with gait speed. These results suggest that in diabetic patients, the regulation of walking speed is dependent upon cerebral elements related to the locomotor control system. This notion is supported by research demonstrating that walking requires adjustments of the cardiovascular and cerebrovascular systems that are coordinated to increase blood pressure and cerebral blood flow velocities in order to meet metabolic demands (Novak et al., 2007; Perrey, 2013). Therefore, those diabetic participants with reduced vasoreactivity may have a diminished ability to increase perfusion in response to the metabolic demand associated with walking.
The relationship between vasoreactivity and gait speed that was observed in the diabetic group, but not in the non-diabetic group might also be explained by the complex effects of diabetes on cerebral vasculature and metabolism. Diabetes accelerates aging in the brain (Launer, 2006) and alters vascular reactivity through the combined effects of central insulin resistance on microvasculature, brain metabolism, glucose utilization, and neuronal survival. Central insulin plays an important role as a neuromodulator in key processes such as cognition (Shemesh et al., 2012;Freiherr et al., 2013), energy homeostasis, and glucose utilization during activity (e.g., walking). Cerebral insulin may directly modulate neuron-astrocyte signaling through neurovascular coupling and autonomic control of vascular tone and thus enable better regulation of local and regional perfusion (Lok et al., 2007) and neuronal activity in response to various stimuli (Amir and Shechter, 1987;Cranston et al., 1998;Kim et al., 2006;Muniyappa et al., 2007) including walking. Type 2 diabetes decreases insulin sensitivity in the brain, insulin transport through the blood-brain barrier, and insulin receptor's sensitivity, and it alters glucose metabolism and energy utilization (Plum et al., 2005(Plum et al., , 2006Hallschmid et al., 2007;Freiherr et al., 2013). Glucotoxicity and endothelial dysfunction associated with chronic hyperglycemia further affect perfusion, vasoreactivity, and metabolism (Makimattila and Yki-Jarvinen, 2002;Brownlee, 2005;Kilpatrick et al., 2010) and contribute to neuronal loss (Manschot et al., 2006(Manschot et al., , 2007Last et al., 2007). Therefore, inadequate insulin delivery to brain tissue combined with altered energy metabolism may affect neuronal activity in multiple regions, but in particular the motor and cognitive networks that have high demands on energy (Gunning-Dixon and Raz, 2000). Diabetes may therefore especially alter neuronal activity and energy utilization during complex tasks like walking which require coordination of neuronal activity in numerous brain regions. As such, even if the same amount of blood flow is delivered to the neurons, energy utilization may be reduced in diabetic as compared to non-diabetic brain, leading to reduced neuronal activity and function, such as walking speed.
While our study controlled for numerous variables associated with gait speed, it did not control for other associated variables, such as muscular strength or fear of falling (Bendall et al., 1989;Chamberlin et al., 2005). The current study has the advantage of investigating regional perfusion in response to CO 2 challenges using 3-D CASL MRI; however, the measures were recorded while participants were lying supine and not during walking. Although these regional perfusion measures may be lost, future studies are warranted to utilize wireless cerebral blood flow measurement tools (e.g., portable TCD or functional near-infrared spectroscopy) to examine the effects of diabetes on cerebral perfusion when walking at different speeds. Moreover, this is a cross-sectional study and thus, observed relationships between low vasoreactivity and slow gait speed does not necessarily imply a causal link between the two. As such, prospective studies are needed to determine potential mechanisms underlying the observed relationship between vasoreactivity and gait speed in those with diabetes, the predictive value of vasoreactivity as a clinical tool, and the potential for therapies targeting cerebral blood flow regulation to improve functional outcome in this vulnerable population.
AUTHOR CONTRIBUTIONS
Azizah J. Jor'dan analyzed the data, performed statistical analyses and wrote the manuscript. Brad Manor oversaw statistical analyses, data interpretation and contributed to manuscript preparation. Vera Novak designed the study, conducted experiments, and oversaw all aspects of the study, data interpretation and manuscript preparation.
ACKNOWLEDGMENTS
This work was conducted with support from a National Institute on Aging (NIA) T32 (5T32AG023480) fellowship awarded to Azizah J. Jor'dan, a KL2 Medical Research Investigator Training (MeRIT) award (1KL2RR025757-04) and NIA career development grant (1-K01-AG044543-01A1) awarded to Brad Manor, the Harvard Clinical and Translational Science Center (NIH Award KL2 RR 025757), and grants from the National Institute of Diabetes and Digestive and Kidney Diseases (5R21-DK-084463-02) and the NIA (1R01-AG-0287601-A2) awarded to Vera Novak. The content is solely the responsibility of the authors and does not necessarily represent the official views of Harvard Catalyst, Harvard University and its affiliated academic health care centers, the National Center for Research Resources, or the NIH. | v2 |
2021-01-07T09:06:14.140Z | 2020-12-15T00:00:00.000Z | 234593599 | s2orc/train | Spaces between Words as a Visual Cue when Reading Chinese: An Eye-Tracking Study*
Atıf/Citation: Ay, Sila, Canturk, Ismıgul, Akgur, Tugba. “Spaces between Words as a Visual Cue when Reading Chinese: An Eye-Tracking Study”. Şarkiyat Mecmuası Journal of Oriental Studies 37 (2020), 51-64. https://doi.org/10.26650/jos.2020.007 ABSTRACT In texts written in languages that use the Latin alphabet, the spaces left between words serve as visual cues to understand the text. In the written Chinese language, there are no spaces between words. Chinese differs from alphabetic languages in many respects, including its synonymous and multi-meaning symbolic language elements. The absence of a visual clue to indicate word boundaries creates ambiguity in Chinese sentences or may lead to the emergence of different meanings. A large number of Chinese characters, spelling features, and lack of boundaries between words cause various difficulties for foreign students. Turkish students who study Chinese as a foreign language may find its unfamiliar orthographic features, such as a spelling system without spaces between words, difficult to understand as Turkish is written using an alphabet. According to some studies on the early stages of Chinese language learning, artificially-triggered spaces in writing can have a positive effect on the reading process. Yet, this finding that the spaces between Chinese words facilitate reading comprehension is controversial. In this study, an eye movement tracking technique is used to investigate whether orthographically-triggered spelling differences or adding spaces between words affect the reading process of foreign language students. The results are discussed through Chinese grammatical features.
Literature Review
Reading process in Chinese is different from reading alphabetic languages. Compared to alphabetic languages such as English, the notion of word in Chinese is not clear enough, and this makes it difficult to define Chinese word boundaries. The abundance of two-syllable characters and the fact that the syllables and semantic values of the lines and parts that make up these characters do not follow a certain rule, are amongst reasons that renders Chinese writing system a difficult one (Yen, Tsai, Tzheng and Huang, 2008). In teaching Chinese as a foreign language, the Chinese characters' structure and the lack of spaces between words in the sentence are seen as obstacles in developing the learners' reading comprehension skills, as these features are not compatible with the characteristics of their mother tongues. The ability to recognize characters, to distinguish them from other characters in the sentence and to understand the whole sentence involves a much more complex process. Therefore, the absence of space between words in Chinese sentences can cause ambiguity. The ambiguity of meaning in the written language is divided into different categories at word, phrase and sentence levels (see examples 1, 2 and 3 accordingly).
As can be seen from the examples, the lack of clear word boundaries in Chinese and the absence of spaces between words appear as significant difficulties in teaching Chinese both as a native language and as a foreign language. In order to understand the learning process of learners and to examine the use of word boundaries as a visual cue for reading Chinese texts, studies that measure eye movements and comprehension were conducted, and their results differed. For example, Zang et al. (2013) examined sixteen Chinese native children's and sixteen adults' eye movements when reading word-spaced and unspaced Chinese text. Child participants were in the third grade of primary school and adults were undergraduate students at university. On the basis of the early local measures, they found that the word spacing manipulation had a greater beneficial effect for children than adults. They computed the total sentence reading times, and there was no reliable difference in terms of reading time for all participants.
On the other hand, Bai et al. (2008), in their study with Chinese native speakers, found that the presence of space between the words in the sentence or marking as a clue of separating the words neither prevented nor facilitated the reading process.
In the study of Hsu and Huang (2000), Chinese native speakers' reading time and comprehension rate of Chinese sentences with and without spaces between words were measured. It was understood that spaces between words accelerated reading but did not have any effect on reading accuracy.
On the other hand, in another study conducted on Chinese native speakers, it was understood that adding spaces between words in Chinese texts had no facilitating effect on native speakers (Li et al., 2010, 1381). The researchers think that this may be due to the fact that native Chinese readers are familiar with the absence of spaces between words, or that the graded reading method restrained their reading process. Although the studies on this topic are highly diversified, the ones related to teaching Chinese as a foreign language are rather inconclusive. Bai et al. (2010), in their study consisting of two different eye-tracking tests conducted with Chinese language learners who are native English speakers, found that adding spaces between words had a positive effect on reading comprehension rate.
Similarly, Shen et al. (2012) in his study conducted with students from four different countries concluded that the presence of spaces in Chinese texts facilitated the reading of second language learners, regardless of alphabetic status and word spacing in their native language.
Participants
Twenty-seven undergraduate students participated in the experiment. They were all learning Chinese as a foreign language who were grouped in two levels (elementary and advanced) by the researchers. Ten students were at elementary level and seventeen students were classified as advanced level students. Elementary level students were in their second year at the Sinology department of a university and had never been to China. Advanced level students were seniors at the same department and had studied in China for at least 1 year. Participants were told that they were going to read and understand sentences presented under two different spacing conditions. They were assured that there was not any reading aloud task (as most of them were reluctant to participate if there were any).
Materials and Design
A total of 30 sentences were constructed. All the sentences consisted of 7 areas of interest (AOI) [subject +adverb of time + function word (preposition of place/在) + place name + verb + function word (tense suffix/了) + object] and had five free morphemes and two bound morphemes (as shown in figure 1). Each sentence was presented to the participants twice, once with spaces between the words and once in the conventional form of Chinese, in a random order. The sentences were ranged randomly by a computer so the participant could not have an educated guess of what kind of stimuli he/she should see next. Two practice sentences, one for each spacing condition, were included at the beginning of the first session. After the first 30 sentences there was a 5-minute break. In total each participant read 60 sentences. After each of these sentences, a wh-question (who, where, when and what) was presented to test the comprehension. Participants were asked to choose the right answer from the two options by using the mouse as a pointer.
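As a toy illustration of the two presentation conditions, the snippet below renders one sentence of the described structure with and without spaces between words; the example sentence and its segmentation are invented for this sketch and are not the study's actual stimuli.

```python
# Subject + time adverb + 在 + place name + verb + 了 + object (7 AOIs).
words = ["他", "昨天", "在", "展会", "看", "了", "产品"]

spaced = " ".join(words)    # word-spaced condition
unspaced = "".join(words)   # conventional unspaced Chinese

print(spaced)    # 他 昨天 在 展会 看 了 产品
print(unspaced)  # 他昨天在展会看了产品
```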
Apparatus
Participants' eye movements were recorded using the SMI RED 500 eye tracking system. The stimuli were presented on a 22-in. (55.8-cm) DELL monitor with a 1689 × 1050 pixel resolution. Participants were seated at a distance of 70 centimetres from the computer screen where the stimuli were presented, and their eyes were fixed with the jaw stabilizer. The stimuli were presented with Song font type in 36 font size. Prior to the start of each session, a five points calibration was completed.
Procedure
The eye tracking procedure was explained to the participants and it was emphasized that they should keep their heads still. The participants were tested individually. Participants were informed that they would read sentences under different spacing conditions. They were told to read the sentences silently and press the button to see the following comprehension question. In total the experiment took approximately 30 minutes. The eye tracker was placed in a sound proof room of the linguistics laboratory which was dimly lit.
Results
The comprehension rate was 93%, making it possible to conclude that participants read and understood the sentences. The obtained data were analysed both globally and locally with linear mixed-effects (LME) models. We used the lme4 package in R (R Core Team, 2013), fitting lmer models for all fixation duration measures and glmer models (Bates, Maechler & Bolker, 2013) for the number-of-fixations measure (Baayen, 2008), with Condition (spaced, unspaced) and Language Level (elementary, advanced) as fixed factors. In addition to the fixed factors considered in simple linear regressions, LME models account for random variation induced by items and participants. All data points more than two standard deviations above or below the mean were excluded from the fixation duration data.
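For readers working in Python rather than R, a rough analogue of the described lmer model can be fitted with statsmodels; the data frame and column names here are assumptions, and the original analysis additionally included item-level random effects, which this simplified sketch omits.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x sentence, with a
# fixation-duration measure, the spacing condition and the language level.
df = pd.read_csv("fixation_data.csv")

# Fixed effects for Condition, Level and their interaction, with a random
# intercept per participant (the lme4 model also had item random effects).
model = smf.mixedlm("total_reading_time ~ condition * level",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```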
For global eye tracking measurements, total reading time, average fixation duration and total number of fixations were computed.
Total reading time: Reading times were shorter for the spaced writing condition (M = 3760 ms, SD = 560) than for the unspaced writing condition (M = 3833 ms, SD = 463), but this difference was not reliable (ps > .05).
Average fixation duration: There was no reliable effect of either the writing condition (spaced/unspaced) or the language level (elementary/advanced) (see Table 1). That is to say, participants found neither of the conditions easier to read.
Total number of fixation:
Although the total reading time and the number of fixations are expected to be highly correlated, in this case the number of fixations showed a reliable difference for both the condition and language level variables (see Table 2), whereas there was none for total reading time. The number of fixations was higher in the spaced writing condition and at the elementary language level (Gelman & Hill, 2007).
In addition to the global analyses, local analyses were also done. For these analyses, first-pass duration, first fixation duration, second-pass duration and dwell time for each area of interest were computed.
First-pass Duration: First-pass duration time was statistically different only in AOI-3 and AOI-6 which are both bound morphemes. Figure 1 shows the first-pass duration and language level relation of spaced and unspaced sentences. In AOI-3 there is a statistically reliable difference concerning the level variable. Elementary level participants tend to have a longer first-pass duration. Concerning the first-pass duration, in AOI-6 there is a statistically reliable difference in the condition (spaced/unspaced) variable. In unspaced sentences this area of interest tends to have a longer first-pass duration (see figure 2).
Second-pass Duration:
In late processing (second-pass duration), as was observed for the first-pass duration, there is again a statistically reliable difference in AOI-3 concerning the level variable. Elementary level participants tend to have a longer second-pass duration too. Concerning the second-pass duration, in AOI-4 there is a statistically reliable difference in the condition (spaced/unspaced) variable. In unspaced sentences this area of interest tends to have a longer second-pass duration (see figure 6).
Conclusion and Discussion
Our findings show that the total reading time for spaced writing and unspaced writing condition was not reliably different. This result is similar to Inhoff et al.'s (1997) study where they examined how inserting spaces between words in Chinese influenced reading. Their study showed no reliable differences in total reading times for any of the presentation conditions. In another study, Bai et al. (2008) investigated the influence of spacing information on eye movement behaviour during Chinese reading where total reading times for text presented under nonword and single character spacing conditions were not reliably different too. But in contrast, in Shen et al.'s (2012) study, reading times were shortest for word-spaced text and reliably longer for normal unspaced and character-spaced text and reliably longer again for nonword-spaced text. Their study indicated that word-spaced text was easiest for non-native Chinese readers to process and even easier to process than normal unspaced text.
As for the average fixation duration, there was also no reliable effect of either the writing condition (spaced/unspaced) or the language level (elementary/advanced). In Bai et al.'s (2008, 6) study, there was a significant effect of presentation condition on average fixation duration. Average fixation durations were longer under normal spacing conditions than under single character, word spacing, and nonword spacing conditions. Also, average fixation durations were longer under word and non-word spacing conditions than under single spacing conditions. Finally, average fixation durations did not differ between word and nonword spacing conditions.
As mentioned in the results, the total number of fixations showed a reliable difference in both the condition and the language level variables. The number of fixations was higher in the spaced writing condition and for elementary level participants. Presumably, this is because the participants need more time to process what they read during the early stages of learning Chinese.
Concerning the local analyses, there is a statistically significant difference between spaced and unspaced words in the early processing (first-pass duration and first fixation duration) of AOI-3 (在 zài, function word) and AOI-6 (了 le, function word). These two areas of interest are both function words (dependent form units). On the other hand, in late processing (second-pass duration), only in AOI-4 (展会 zhǎnhuì, place word) is there a significant difference.
The difference in the early processing of AOI-3 (在) and in the late processing of AOI-4 (展会) can be evaluated together. In Chinese, the function word '在' can add different meanings to a sentence depending on the word it is used with. The reason why the participants focused on this preposition in the first fixation duration may be that they were trying to understand the correct function of '在' in the sentence. Because the spaced or unspaced writing of '在' can change the meaning of the sentence, the place word and '在' may have attracted the attention of the students in both early and late processing. If '在' has a locative meaning, it is read together with the place word (see 1).
(1) 她在学校。 Tā zài xuéxiào. 'She is at school.' '在' can also be used as an auxiliary verb to express that an action is ongoing or in progress (see 2). This is the equivalent of the present continuous in English.
(2) 我在看书。 Wǒ zài kànshū. 'I am reading a book.' Both early processing (first-pass duration and first fixation duration) and dwell time calculations showed that there is a significant difference between the spaced and unspaced writing conditions for the function word '了'. The meaning of '了' depends on whether it is written spaced or unspaced in a sentence, which causes uncertainty in meaning. As shown in figure 2, the function word '了' has several different meanings and usages. As can be seen from the examples above, function words in Chinese can change the meaning of a sentence. For this reason, students whose native language is an alphabetic language can have difficulty understanding and using the function words '了' and '在' because of their complex usage and meaning features. According to our results, we can say that the students focused on function words in order to understand the meaning of the sentence, because the spaced or unspaced writing of function words is very important for determining the meaning of the sentence. Participants may find it difficult to understand the function of the particle '了' as they try to understand the sentences correctly. The particle '了' typically follows a verb to indicate various additional meanings. In Chinese, the aspect and the time of an action are not entirely expressed in one grammatical form; the aspectual particle '了' is used to indicate the completion of an action, but it does not necessarily show that the action took place in the past (Li, 2009).
Concerning the second-pass duration, in unspaced sentences AOI-4 tends to have a longer duration. As mentioned before, participants were instructed to answer a wh-question to test comprehension, and one of these questions was "where", so it may be assumed that they tried to answer this question correctly by dwelling longer on this area of interest, which represents the place where the action takes place. The statistically significant difference in dwell time in AOI-3, AOI-4 and AOI-6 for the condition (spaced/unspaced) variable, and in AOI-6 for the language level (elementary/advanced) variable, was expected, as there were differences in all other measurements in these areas of interest.
To conclude, this study shows a minor difference in the processing of spaced and unspaced sentences by participants who are learning Chinese as a foreign language. The only difference occurs when it comes to processing the bound morphemes, which tend to take more time and effort. | v2
2021-02-03T02:16:12.102Z | 2021-02-02T00:00:00.000Z | 231749899 | s2orc/train | Stochastic kinetic treatment of protein aggregation and the effects of macromolecular crowding
Investigation of protein self-assembly processes is important for the understanding of the growth processes of functional proteins as well as disease-causing amyloids. Inside cells, intrinsic molecular fluctuations are so high that they cast doubt on the validity of the deterministic rate equation approach. Furthermore, the protein environments inside cells are often crowded with other macromolecules, with volume fractions of the crowders as high as 40%. We study protein self-aggregation at the cellular level using Gillespie's stochastic algorithm and investigate the effects of macromolecular crowding using models built on scaled-particle and transition-state theories. The stochastic kinetic method can be formulated to provide information on the dominating aggregation mechanisms in a method called reaction frequency (or propensity) analysis. This method reveals that the change of scaling laws related to the lag time can be directly related to the change in the frequencies of reaction mechanisms. Further examination of the time evolution of the fibril mass and length quantities unveils that maximal fluctuations occur in the periods of rapid fibril growth and the fluctuations of both quantities can be sensitive functions of rate constants. The presence of crowders often amplifies the roles of primary and secondary nucleation and causes shifting in the relative importance of elongation, shrinking, fragmentation and coagulation of linear aggregates. Comparison of the results of stochastic simulations with those of rate equations gives us information on the convergence relation between them and how the roles of reaction mechanisms change as the system volume is varied.
The aggregation mechanisms considered in these studies often include primary nucleation, monomer addition and subtraction, fibril fragmentation, merging of oligomers, heterogeneous (or surface-catalyzed) nucleation, etc.
The reaction rates associated with the reaction steps considered can be independent of or dependent on the oligomer/fibril size. 6,7,12 Almost all of our current knowledge comes from studies of systems in vitro, and little has been done toward understanding processes in vivo. In the latter case, the volume of the compartment or of the space inside confining boundaries is usually small and the numbers of copies of certain protein species can be low, resulting in large number fluctuations. [13][14][15][16][17][18][19] Furthermore, experiments carried out on protein aggregation often start out with monomers of amyloid peptides or proteins. This may be what happens in the brain also, starting with monomers.
Above the critical concentration, monomers then aggregate into dimers, trimers, tetramers, and so on, by steps. The system necessarily goes through stages in which the numbers of dimers, trimers, and larger oligomers are very small. On the other hand, deterministic rate equations based on mass-action laws are often used to simulate the growth of the oligomer species. Rigorously speaking, rate equations, that is, concentrations, are well defined only when large numbers of the relevant molecules exist, for the fluctuations are inversely proportional to the square root of the number of molecules. Therefore, application of the rate equations to cases involving small numbers of oligomers cannot fully be justified. For these cases, one should use stochastic dynamics methods, such as the Gillespie algorithm. [20][21][22][23][24][25] From another point of view, it would be important to evaluate the accuracy of the rate equation results, especially in the early stage of aggregation, or at any time when the numbers of any of the chemical species involved are small, by comparing them to those obtained using a stochastic kinetic method.
Gillespie's method of carrying out a stochastic chemical kinetics study is to solve the master equations defined on the probabilities P(c_1, c_2, c_3, . . . , c_i, . . . , c_N), where c_i is the number of i-mers, an i-mer denoting an oligomer containing i monomers. We can estimate the dimension, D, of the vector space of P. Consider an experiment for which only monomers exist initially at t = 0, that is, c_i(0) = 0 for i ≥ 2. Since c_1 is bounded above by N, c_2 by N/2, c_i by N/i, and c_N by 1, the dimension of the vector P has an upper bound given by N^N/(N!). Thus, for a large N, D is bounded above by e^N, using the Stirling approximation.
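Spelling out the counting argument behind this bound (a sketch; the product over the occupation limits is taken here as the assumed intermediate step):

```latex
D \;\le\; \prod_{i=1}^{N} \frac{N}{i} \;=\; \frac{N^{N}}{N!}
\;\approx\; \frac{N^{N}}{\sqrt{2\pi N}\,(N/e)^{N}} \;=\; \frac{e^{N}}{\sqrt{2\pi N}},
\qquad\text{so } D \lesssim e^{N}\ \text{for large } N .
```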
This implies that the dimension of the state space grows very fast as N increases. We cannot realistically integrate the master equations numerically for an N larger than a few hundred.
In reality, for amyloid fibrils, N can go as high as several thousand. Thus, currently the standard stochastic kinetic method is restricted to investigating the early stage of the protein self-assembly processes or, alternatively, to investigating systems of small volume, in which the total number of proteins in the system is relatively small.
The fact that the stochastic kinetic methods can be used to study chemical reactions in small volumes was pointed out early on by Gillespie. [20][21][22][23] Applications of the stochastic kinetic methods to study protein self-assembly processes in small volumes have been carried out recently by several groups. Szavits-Nossan et al. have derived an analytic expression for the lag-time distribution based on a simple stochastic model, including in it primary nucleation, monomer addition, and fragmentation as possible reaction mechanisms. 18 Tiwari and van der Schoot have carried out an extensive investigation of nucleated reversible protein self-aggregation using the kinetic Monte Carlo method. 19 They focused on the stochastic contribution to the lag time before polymerization sets in and found that in the leading order the lag time is inversely proportional to system volume for all nine different reaction pathways considered. Michaels et al., on the other hand, have carried out a study of protein filament formation under spatial confinement using stochastic calculus, focusing on statistical properties of stochastic aggregation curves and the distribution of reaction lag time. 26 At the cellular level, besides the reaction being contained in a small volume, the environments of proteins are crowded with other biomolecules, such as DNA, lipids, other proteins, etc. The fraction of volume occupied by these "crowders" can be as high as 30-40%, which can affect the reaction rates of proteins as well as other biopolymers in the cell in significant ways. [27][28][29][30][31][32][33] In this article, we extend the earlier stochastic works on protein self-assembly by investigating beyond the early stage of aggregation to include the polymerization phase and present a general study of fluctuations for systems in small volumes (i.e., low total numbers of proteins). Furthermore, we examine the role of macromolecular crowding in changing the microscopic behavior of an aggregating system, and present a new method for extracting the dominant aggregation mechanisms that is more explicit than scaling law analysis. 17,34 We also compare the stochastic and rate-equation approaches for a model case to gain insight into when and why the methods begin to diverge.
The organization of the present article is as follows: In Section II we describe the kinetic models used in the study. In Section III we introduce the master equation, present Gillespie's stochastic simulation algorithm (SSA), discuss how it connects to a rate equation treatment, and finally introduce our reaction frequency method of analysis. In Section IV we give a brief overview of how the effects of macromolecular crowding are included in the models through scaled-particle and transition-state theories. Our results are presented in Section V. Finally, we conclude with discussion and comments in Section VI.
Kinetic Models Studied
The Oosawa Model
In the Oosawa model, aggregates form via primary nucleation, shown in Fig. 1(i), and grow in one possible way, by simple monomer addition and subtraction, shown in Fig. 1(iii). It is assumed, based on classical nucleation theory, that the concentrations of aggregates smaller than the nucleus size n_c are zero. The model is described by kinetic equations involving the monomer addition, monomer subtraction and primary nucleation rate constants; when the model is extended with coagulation and fragmentation (Fig. 1(iv)), the corresponding rate constants enter together with the Kronecker delta function, and the factors of 1/2 and (i - 3) take into account double counting and the fact that aggregates have multiple points at which they can break. The last term on the RHS of the monomer concentration equation is due to the assumption that any polymers smaller than the nucleus size which form via fragmentation immediately dissolve into monomers.
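For reference, a sketch of rate equations of the Oosawa (nucleation-elongation) type is given below. The notation (c_i for the i-mer concentration; k_+, k_- and k_n for addition, subtraction and nucleation) and the precise form are standard choices and are assumptions here, since the equations originally displayed at this point are not reproduced; the coagulation and fragmentation terms of the extended model are omitted from this sketch.

```latex
\frac{dc_i}{dt} = k_+\, c_1\,(c_{i-1}-c_i) \;-\; k_-\,(c_i - c_{i+1})
                \;+\; k_n\, c_1^{\,n_c}\,\delta_{i,n_c}, \qquad i \ge n_c ,
\qquad
\frac{dc_1}{dt} = -\,n_c k_n c_1^{\,n_c} \;-\; k_+ c_1 \sum_{i\ge n_c} c_i \;+\; k_- \sum_{i\ge n_c} c_i .
```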
Secondary Nucleation
In some cases, an additional mechanism is needed to accurately describe aggregation: secondary or heterogeneous nucleation. Secondary nucleation is the process of existing polymers catalyzing the formation of new aggregates on their surface and has been shown 13,38,39 to play an important role in many aggregating systems. We investigate a simple model of secondary nucleation, where the process is modelled as a one-step process, shown in Fig. 1(ii). In reality, secondary nucleation can be more generally represented as a two-step process, which reduces to a one-step process in the low monomer concentration limit. 32,40 But for the present purposes, we will focus on the one-step secondary nucleation process.
For one-step secondary nucleation, the rate equations in Eqs. 1 and 2 are modified by an additional secondary nucleation term, where n_2 is the secondary nucleus size and M is the total mass of the polymers which contain n_2 or more monomers.
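A sketch of the standard one-step (monomer-dependent) secondary nucleation flux that such a modification adds to the aggregate number equation is shown below; the rate-constant symbol k_2 is an assumption rather than necessarily the paper's notation:

```latex
\left.\frac{dP}{dt}\right|_{\text{secondary nucleation}} = k_2\, c_1^{\,n_2}\, M(t),
```

where P denotes the number concentration of aggregates.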
Stochastic Chemical Kinetics
In classical chemical kinetics, it is assumed that the concentrations of chemical species vary continuously over time, the so-called mean-field approach. 6 In the stochastic description, the system instead evolves according to the chemical master equation (CME),
∂P(x, t | x_0, t_0)/∂t = Σ_j [ a_j(x − v_j) P(x − v_j, t | x_0, t_0) − a_j(x) P(x, t | x_0, t_0) ],   (4)
where x is the state of the system, v_j is the change in the system state due to a single reaction j, P(x, t | x_0, t_0) is the probability of the system being in state x at time t given an initial state x_0 at time t_0, and a_j(x) is the propensity function, or transition rate, of a given reaction j, defined by
a_j(x) dt = the probability, given x, that one reaction j occurs in the time interval [t, t + dt).   (5)
We will show later that the propensity function can be directly related to the reaction rates from classical chemical kinetics.
In principle, the probability distribution P(x, t | x_0, t_0) is entirely described by eq. 4. In practice, however, analytical solutions are often impossible due to the CME being, in general, a very large system of coupled ODEs. Other complications with a direct analysis of the CME are described by Gillespie. 22,23 Thus, it is necessary to use computational methods to solve for the evolution of the probability distribution function. Several methods of simulating exact and approximate solutions to the CME have been proposed. 23,41 In our study we use the Gillespie stochastic simulation algorithm (SSA), which allows for a highly detailed, albeit somewhat computationally expensive, look at how a stochastic system evolves over time.
Gillespie Simulation Algorithm
The Gillespie stochastic simulation algorithm is a method for generating statistically accurate reaction pathways of stochastic equations, and thus statistically correct solutions to the CME (eq. 4). The algorithm proceeds as follows: 1. Set the species populations to their initial values and set t = 0.
2. Calculate the transition rate, a_j(x), for each of the possible reactions j.
3. Set the total transition rate a_0(x) = Σ_j a_j(x).
4. Generate two uniform random numbers, r_1 and r_2.
5. Set the time to the next reaction, ∆t = (1/a_0) ln(1/r_1). 6. Select the reaction j for which Σ_{k=1}^{j-1} a_k(x) < r_2 a_0(x) ≤ Σ_{k=1}^{j} a_k(x).
7. Set t = t + ∆t and update the species populations according to reaction j.
8. Return to step 2 and repeat until an end condition is met.
This process generates a single reaction pathway and may be repeated and averaged to compare with bulk behavior and experimental results. Modifications to this method exist to decrease computation time, such as the next-reaction 24 and tau-leaping 41 methods, but where they improve efficiency they sacrifice accuracy.
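To make the steps above concrete, a minimal Python sketch of the direct SSA for a nucleation-elongation model is given below. It is an illustration only: the reaction set, the combinatorial factor for nucleation (written for n_c = 2) and all function and rate-constant names are assumptions for this example, not the authors' popsim implementation.

```python
import math
import random

def gillespie_aggregation(N0, k_n, k_plus, k_minus, n_c=2, t_end=1.0, seed=1):
    """Direct-method SSA for a minimal nucleation-elongation model.

    State: counts[i] = number of i-mers (index 1 = monomer).
    Reactions: primary nucleation (n_c monomers -> nucleus),
    monomer addition (A_i + A_1 -> A_{i+1}) and monomer subtraction.
    """
    rng = random.Random(seed)
    counts = {1: N0}
    t = 0.0
    while t < t_end:
        m = counts.get(1, 0)
        events = []  # (propensity, update) pairs for the current state
        if m >= n_c:
            # combinatorial factor m*(m-1)/2 is written for n_c = 2
            events.append((k_n * m * (m - 1) / 2.0, ('nucleate',)))
        for size, n in list(counts.items()):
            if size < n_c or n == 0:
                continue
            events.append((k_plus * n * m, ('grow', size)))   # elongation
            events.append((k_minus * n, ('shrink', size)))    # monomer loss
        a0 = sum(a for a, _ in events)
        if a0 == 0.0:
            break
        r1, r2 = rng.random(), rng.random()
        t += math.log(1.0 / r1) / a0          # step 5: time to next reaction
        threshold, acc = r2 * a0, 0.0         # step 6: pick reaction j
        for a, update in events:
            acc += a
            if acc >= threshold:
                break
        if update[0] == 'nucleate':
            counts[1] -= n_c
            counts[n_c] = counts.get(n_c, 0) + 1
        elif update[0] == 'grow':
            size = update[1]
            counts[size] -= 1
            counts[size + 1] = counts.get(size + 1, 0) + 1
            counts[1] -= 1
        else:  # shrink
            size = update[1]
            counts[size] -= 1
            new = size - 1
            if new >= n_c:
                counts[new] = counts.get(new, 0) + 1
            else:
                counts[1] += new  # below nucleus size: dissolve to monomers
            counts[1] += 1
    return t, counts
```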
Relation to Chemical Kinetics
In classical chemical kinetics, differential rate equations are solved to study the bulk behavior of a continuous system. In order to compare with these studies, as well as with experimental studies, it is necessary to relate bulk rates, and rate constants, to the stochastic propensity functions and the equivalent stochastic rate constants. We give two examples of how this is done. For a coagulation process A_i + A_j → A_{i+j}, we start from the corresponding bulk rate equation. Making use of the relationship between species population and species concentration, n_i = c_i V, where n_i is the species population and V is the system volume, we obtain the rate at which an i-mer and a j-mer transition into an (i+j)-mer. The RHS is the sum of the propensity functions from eq. 5 for all possible A_i + A_j → A_{i+j} coagulation reactions. Finally, the stochastic coagulation rate constant can be written as the bulk rate constant divided by the system volume. For protein aggregation, we can again make use of the definition of concentration, expressing the volume in terms of the total number and the initial concentration of monomers, to obtain an equivalent form. For a primary nucleation process, n_c A_1 → A_{n_c}, we start from the corresponding bulk rate equation; following the same procedure, the stochastic nucleation rate constant is obtained. The other stochastic rate constants are found similarly. It is worth noting that all stochastic rate constants have units of frequency (s^-1), and thus the stochastic rate constants involved in shrinking or breaking processes (monomer subtraction, fragmentation, etc.) are identical in value to their bulk counterparts.
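As a concrete illustration of the volume scaling described above (a sketch with assumed function and variable names; unit bookkeeping such as an Avogadro factor for molar concentrations, and combinatorial prefactors, are left out):

```python
def stochastic_rate_constants(k_plus_bulk, k_n_bulk, n_c, N_total, c0):
    """Convert bulk rate constants to stochastic (per-event) rate constants.

    k_plus_bulk : bimolecular rate constant, units 1/(concentration*time)
    k_n_bulk    : primary nucleation rate constant for n_c A_1 -> A_{n_c}
    N_total, c0 : total monomer number and initial monomer concentration,
                  which fix the system volume V = N_total / c0
    """
    V = N_total / c0
    k_plus_stoch = k_plus_bulk / V          # a bimolecular step scales as 1/V
    k_n_stoch = k_n_bulk / V ** (n_c - 1)   # an order-n_c step scales as 1/V^(n_c-1)
    return k_plus_stoch, k_n_stoch
```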
Reaction Frequency Method
In numerical simulations using Gillespie's stochastic algorithm, we can obtain information on what reactions are occurring in certain intervals of time. In particular, we investigate the frequencies or propensities of particular reaction types as they evolve over time. This approach gives unique insight into the behavior of a system and offers direct confirmation of which mechanisms of reaction are dominating at various phases of the aggregation process.
For instance, a common scenario (meaning, for a set of parameters showing nontrivial dynamic behaviors) is that primary nucleation dominates at the very beginning; then monomer addition and oligomer coagulation become important, balanced by monomer subtraction and fragmentation. These are followed by secondary nucleation, which becomes active before monomers are depleted. At longer time scales, oligomer coagulation and fragmentation persist as the system approaches an equilibrium or steady state. For some aggregation reactions, such as that involving actin, secondary nucleation never plays a significant role, so it can be neglected. But for other reactions, especially under the influence of molecular crowders, secondary nucleation dominates until monomers are depleted. In the results section, we normalize the reaction frequencies by the total number of reactions occurring at that time. Doing this allows us to investigate the relative importance of any particular reaction as it evolves over time.
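A minimal sketch of this bookkeeping, assuming the SSA run is logged as (time, reaction label) events; the binning scheme and all names are illustrative rather than the paper's implementation:

```python
from collections import Counter, defaultdict

def relative_reaction_frequencies(events, bin_width):
    """events: iterable of (time, reaction_label) pairs from an SSA run.
    Returns {bin_start: {label: relative frequency}}, with frequencies
    normalized by the total number of reactions in that time bin."""
    counts = defaultdict(Counter)
    for t, label in events:
        counts[int(t // bin_width) * bin_width][label] += 1
    freqs = {}
    for start, c in counts.items():
        total = sum(c.values())
        freqs[start] = {label: n / total for label, n in c.items()}
    return freqs
```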
Macromolecular Crowding
In the presence of molecular crowders, the rate constants of reaction steps may be affected and the degree of influence varies widely among the different types of reaction steps involved.
The effects of crowders on the rate constants have been worked out in our previous study 27,32 using the transition-state theory 42 (TST) and the scaled-particle theory 28,31,43 (SPT). In this section, we outline the relevant formulas that we have used in the present simulations. We start with the forward step of the reversible coagulation reaction. In TST one assumes that quasi-equilibrium is established between the reactants and the transition state, which allows us to express the rate constant in terms of the free energy difference between the reactants and the transition state. Further, expressing the chemical potential of a chemical species in terms of the product of its activity coefficient, γ, and concentration, we can write the forward rate constant in the crowded system as its dilute-solution value multiplied by the ratio of the reactant activity coefficients to that of the transition state. The corresponding expression for the reverse step shows no crowder dependence; in other words, breaking reactions are unaffected by crowders in this model. This approach can be applied to the other mechanisms in our model, where Γ is a factor related to the change in shape of an aggregate as monomers attach to the surface in a secondary nucleation process. For our study, we assume Γ ≡ 1 39 in a one-step secondary nucleation model. The activity coefficients, as well as Γ, may be calculated using SPT by treating crowders and monomers as hard spheres and aggregates as hard sphero-cylinders. 27
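A sketch of the standard TST/activity-coefficient form implied by this argument is given below; the subscript convention and the placement of the Γ factor for secondary nucleation are assumptions, and the paper's own expressions may differ in detail:

```latex
k_{+}^{\text{crowded}} \;=\; k_{+}^{\text{dilute}}\;
\frac{\gamma_i\,\gamma_j}{\gamma_{\ddagger}},
\qquad
k_{-}^{\text{crowded}} \;=\; k_{-}^{\text{dilute}} .
```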
Scaling Laws
A powerful tool for extracting the dominant mechanisms of aggregation is to look at how the half-time of the aggregate mass (t_1/2) scales with increasing initial concentration of monomers. 34,45 We test this method using stochastic simulations by directly examining which reactions are dominating during the growth phase. In a scaling law analysis, the slope of a log-log plot of t_1/2 versus the initial monomer concentration gives the scaling exponent, γ, which can be related to specific aggregation mechanisms. Fig. 3 shows the relative reaction frequency of each mechanism of growth as it evolves over time. For low initial monomer concentration, it is clear that monomer addition following an initial phase of primary nucleation is the dominant mechanism of growth. As the initial monomer concentration is increased, the relative frequencies of both monomer addition and secondary nucleation increase (competition) before secondary nucleation eventually begins to suppress even monomer addition during the growth phase. This comparison both supports the validity of the scaling laws and justifies further use of this approach, as it is clear that much can be gained from this level of detail.
Figure 3: Relative reaction frequencies vs time. Blue (•) corresponds to primary nucleation, orange to monomer addition, red to secondary nucleation and green (x) to fragmentation. As the initial monomer concentration is increased, it is clear that secondary nucleation goes from hardly participating in the reaction to completely dominating. Relative frequencies are calculated by dividing the individual reaction rates by the total reaction rate, which includes monomer subtraction and coagulation (not shown in the figure).
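For example, the exponent can be estimated directly from simulated or measured half-times; a short sketch follows (the function name and the use of NumPy are assumptions):

```python
import numpy as np

def scaling_exponent(m0_values, t_half_values):
    """Estimate the scaling exponent gamma in t_1/2 ~ m0^gamma
    as the slope of a log-log fit."""
    slope, _intercept = np.polyfit(np.log(m0_values), np.log(t_half_values), 1)
    return slope
```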
Half-time Fluctuations
Tiwari and van der Schoot 19 showed that nucleation time increases linearly with 1/V, the inverse of the system volume. We investigate the effects of decreasing the volume, or the total number of monomers, N, at fixed concentration, on the half-time, t_1/2, of the polymer mass and on the fluctuations of the half-time, σ_1/2 (the standard deviation of t_1/2), for two different sets of parameters, which we call set 1 and set 2 (defined in Fig. 4). We define the half-time as the time it takes for the polymer mass to reach half its equilibrium value (Fig. 4). Perhaps unsurprisingly, the relative deviations increase as N decreases. More interestingly, the manner in which these fluctuations increase depends on the rate constants themselves.
For set 1, where t_1/2 increases linearly with 1/N, the standard deviation (σ_1/2) also increases roughly linearly, whereas for set 2, σ_1/2 appears to increase more like the square root of 1/N before abruptly increasing at the same threshold.
Moment Fluctuations
System volume has a significant effect on fluctuations of the moments of the distribution (polymer number and polymer mass), but also on the overall rate of mass production as well as the evolution of the average length of polymers. As mentioned earlier, for certain choices of rate constants the half-time of the reaction may increase or stay roughly the same as volume is decreased. In all cases, fluctuations increase with decreasing volume but the magnitude of fluctuations depends strongly on the rate constants themselves.
Average versus Individual-run Behavior
In addition to giving access to fluctuations, stochastic simulations allow for direct comparison of individual reaction pathways to the average behavior of a set of simulations. This is analogous to a comparison of bulk behavior to single-molecule behavior in the field of single-molecule experiments, the latter of which is much more difficult to access experimentally. For this section, we ran a sweep of the ratio of parameters.
Crowders Change Dynamics
A more physiologically relevant example of the present model is to see how the presence of crowder molecules can change the local behavior. Fig. 10 shows that as the volume fraction, φ, of crowders increases, we see a similar spread in local behavior relative to the average as we did when directly changing the rates. This is because the growth rates are directly affected by φ, as seen in the theory section. From Fig. 11, one can see that secondary nucleation completely dominates the growth process as φ is increased. However, before it can occur, an incubation period exists, as shown in the case of φ = 0.25 in Fig. 11. Hence, in the individual runs at large φ, there is a short period of no growth before the first primary nucleation event occurs, followed by a rapid, explosive period of growth once a polymer has formed and the auto-catalytic secondary-nucleation process can occur. These simulations were run with n_c = n_2 = 2, and the effect is generally more exaggerated when n_2 > n_c. This is a purely stochastic phenomenon, as fluctuations in the first-passage time of primary nucleation (as well as of monomer addition in the case of n_2 > n_c) lead to the spread of the individual runs, producing an average that does not represent the individual reaction pathways of the system. Moreover, it is clear that the presence of crowders can magnify these differences by increasing certain reaction propensities (e.g., secondary nucleation) more than others (such as merging or addition).
In other words, beyond increasing the overall rate of the reaction, the actual way in which the polymers grow is changed significantly. This can be shown in terms of the scaling law analysis as well. Fig. 12 shows how the scaling law governing the crowderless reaction differs significantly from that with even a fairly low φ. The scaling laws here agree with the reaction picture, in that for φ = 0 the slope is close to 1 (corresponding to nucleation-elongation, γ = -n_c/2) and for φ = 0.2 the slope is 1.5 (secondary nucleation dominating, γ = -(n_2 + 1)/2). One implication of this is that knowledge of the dynamics of an aggregating system in vitro does not necessarily translate to the same protein aggregating in vivo, where much of the system is occupied by molecules which do not participate in the reaction other than to exclude volume, resulting in entropic effects.
Figure 12: Scaling law comparison of the same system with different volume fractions (φ) of crowders. For the crowder-less case, the slope is close to 1, corresponding to nucleation-elongation. For φ = 0.2, it is close to 1.5, corresponding to secondary nucleation. In between, there is competition between nucleation-elongation and secondary nucleation as the dominant mechanisms shift.
Comparison With the PM-model
In order to show more clearly the effects of low particle number, we compared the stochastic approach to the moment-closure approximation of the reaction-rate-equation approach. The latter will be called the PM-model 5 in the present article. This model reduces a large set of rate equations for the concentration of each species to three closed differential equations.
One for the monomer concentration, c_1(t), and one each for the first two moments of the distribution of aggregates: the number of polymers, P(t), and the mass contained in polymers, M(t). Additionally, the average length of polymers, L(t), is computed as the ratio M(t)/P(t).
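For orientation, a minimal sketch of moment-closure equations of this type for a nucleation-elongation scheme is given below; the exact coefficients (for example, whether growth occurs from one or both fibril ends, and any fragmentation or secondary-nucleation terms) depend on the model variant and are assumptions here rather than the paper's exact PM-model equations.

```latex
\frac{dP}{dt} = k_n\,c_1^{\,n_c},\qquad
\frac{dM}{dt} = n_c\,k_n\,c_1^{\,n_c} + \left(k_+ c_1 - k_-\right)P,\qquad
\frac{dc_1}{dt} = -\frac{dM}{dt},
\qquad L(t) = \frac{M(t)}{P(t)} .
```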
For a more detailed description, we refer the reader to the references. 6,7,32,40 For small N, the individual stochastic runs actually reach their equilibrium mass values more rapidly, following initial nucleation, than do the individual runs for large N. So the reduction in the rate of relative mass production is purely due to the stochastic nature of the system: nucleation events are more spread out and, on average, take longer to occur. Additionally, at larger N, more than one nucleation event generally occurs, as evidenced by kinks in the individual runs. Considering that the number of molecules in the simulation is analogous to the volume of the system at fixed concentration, this implies that the dynamics may change significantly depending on system volume.
Discussion and Conclusions
In summary, we have introduced a powerful stochastic method based on the Gillespie SSA to study the time evolution of the relative frequencies of aggregation reaction mechanisms.
We tested this method by comparing with the scaling law treatment of Meisl et al. 47 to confirm that the dominant mechanisms predicted by the scaling exponent, γ, agreed with the relative reaction frequencies of those mechanisms. In particular, we showed that as the scaling exponent increased, competition between mechanisms was seen as the relative frequency of monomer addition was overcome by secondary nucleation as the initial monomer concentration increased. Additionally, detail as to which mechanisms become important, and at what time during the reaction, can give insight into why observable quantities, such as polymer mass, behave as they do. The method provides great detail into the behavior of reacting systems and, in principle, can be used for any stochastic system where knowledge of the importance of particular reactions or events as they change over time is desired.
We showed that the half-time of M(t) is not always proportional to 1/V (or 1/N), as reported by Tiwari and van der Schoot, 19 and in fact can reach a thermodynamic limit where increasing N has no effect on t_1/2. This effect, along with the behavior of σ_1/2, is strongly dependent on the values of the rate constants. Accordingly, for a choice of rate constants where t_1/2 does not increase with decreasing volume, the time scale of M(t) does not change as the volume is decreased, whereas it can increase quite significantly for another set of rate constants. By looking at the average polymer length over time, we showed that simulations with rate constants which favored smaller polymers did not show changing results for the range of N used in this study, while those using rate constants which favored very long polymers showed dramatic changes in dynamics. This is further confirmed by the reaction frequencies for the longer-polymer-forming rate constants fluctuating more significantly, and by more mechanisms being important in the reaction at smaller values of N. In other words, the dynamics can be very different for small volumes. Physically, this is consistent with smaller volumes being less conducive to the growth of very large polymers. This implies that, when fitting models to experimental data in order to predict behavior within living cells, one should also fit L(t) to data on the average length of polymers, or at the very least bias the fits to achieve a best guess of the actual length profiles, as was done in Schreck et al. 32 That atomic force microscope (AFM) measurements of polymer length can be used in addition to ThT measurements to obtain more robust estimates of rate constants was previously pointed out by Schreck and Yuan. 12
Furthermore, we compared the stochastic approach to the continuous PM-model. For large values of N the two models are in good agreement, but they diverge significantly as N decreases. Again, we saw that the average length L(t) not being in agreement between the two models was indicative of this divergence, further confirming the importance of having experimental data on the lengths of polymers. Additionally, analysis of the individual runs shows that, for small values of N, the local behavior is vastly different from the average behavior. This means that the local behavior effect mentioned previously can be caused not only by the presence of crowders, but also by decreasing the reaction volume. The specific model compared was the Oosawa model without secondary nucleation, so the effect is present even in the simplest of models, provided they have more than one mechanism of growth.
Lastly, to our stochastic kinetic simulator we can add other reaction mechanisms at different degrees of sophistication, depending on the system that we are investigating. An important next step is to apply our methods to protein aggregation systems that have been studied experimentally, especially in vivo. For such systems, we should first fit the observed mass and/or length curves by varying rate constants and other parameters. 12,32 With the set of constants determined, the present stochastic scheme can be used to provide valuable information on the fluctuations, the reaction dynamics, and the pathways involved in the system, and on how they vary with changes in the system volume and in the amount of crowders.
All stochastic simulations were performed using popsim, 48 a browser-based program developed by the authors for this study. It is available for use at www.popsim.xyz. The source code is available at https://github.com/jljorgenson18/popsim. | v2 |
2016-05-04T20:20:58.661Z | 2014-08-01T00:00:00.000Z | 2090409 | s2orc/train | Prevalence and risk factors for chronic co-infection in pulmonary Mycobacterium avium complex disease
Background Patients with pulmonary Mycobacterium avium complex (MAC) disease are often co-infected with various pathogenic microorganisms. This study aimed to determine the prevalence of co-infection with non-MAC pathogens and the risk factors associated with co-infection in patients with pulmonary MAC disease. Methods We retrospectively reviewed the patient characteristics, microbiological results and chest CT findings in 275 patients with pulmonary MAC who visited the Kyoto University Hospital from January 2001 to May 2013. We defined chronic pathogenic co-infection as the isolation of non-MAC pathogens from sputum samples taken on more than two visits that occurred at least 3 months apart. Results The participants were predominantly female (74.5%) and infected with M. avium (75.6%). Chronic co-infection with any pathogen was observed in 124 patients (45.1%). Methicillin-sensitive Staphylococcus aureus (MSSA; n=64), Pseudomonas aeruginosa (n=35) and Aspergillus spp (n=18) were the most prevalent pathogens. The adjusted factors were chronic obstructive pulmonary disease (COPD; OR=4.2, 95% CI 1.6 to 13.1) and pulmonary M. intracellulare disease (OR=2.2, 95% CI 1.1 to 4.4) in chronic co-infections; COPD (OR=4.2, 95% CI 2.1 to 31.4), long duration of MAC disease (OR=2.2, 95% CI 1.2 to 4.4) and nodules (OR=3.5, 95% CI 1.2 to 13.2) in chronic MSSA co-infection; COPD (OR=7.5, 95% CI 2.1 to 31.4) and lower lobe involvement (OR=9.9, 95% CI 2.0 to 90.6) in chronic P. aeruginosa co-infection; and use of systemic corticosteroids (OR=7.1, 95% CI 1.2 to 50.9) and pulmonary M. intracellulare disease (OR=4.0, 95% CI 1.1 to 14.5) in chronic Aspergillus spp co-infection. Conclusions Patients with pulmonary MAC disease frequently had chronic co-infections with pathogenic microorganisms such as MSSA, P. aeruginosa and Aspergillus. The risk factors for chronic co-infection were COPD and pulmonary M. intracellulare disease.
INTRODUCTION
As the prevalence of pulmonary nontuberculous mycobacterial (NTM) disease, especially pulmonary Mycobacterium avium complex (MAC) disease, has been increasing worldwide, 1-3 more patients have the opportunity to be followed in a medical institution. 4 5 Pulmonary MAC disease has a prolonged course and often manifests as bronchiectasis and cavitation in high-resolution CT (HRCT) images. 6 In patients susceptible to bronchiectasis, chronic inflammation causes damage primarily to the bronchi. Damaged airways are susceptible to infection, resulting in further destruction and dilation of the bronchi and leading to bronchiectasis. 7 8 NTM infection has been shown to stimulate the development of or worsen pre-existing bronchiectasis, although causality has not been definitively established. [9][10][11] Chronic infections with bacteria such as Pseudomonas aeruginosa and Haemophilus influenzae are associated with bronchiectasis and cystic fibrosis, causing recurrent exacerbations of these diseases and leading to lung function decline and premature death. [12][13][14]
KEY MESSAGES
▸ Patients with pulmonary Mycobacterium avium complex (MAC) disease are often co-infected with various other pathogenic microorganisms, but the factors associated with microorganism co-infection in patients with pulmonary MAC remain unclear. ▸ Patients with pulmonary MAC disease frequently had chronic co-infections with pathogenic microorganisms such as methicillin-sensitive Staphylococcus aureus, Pseudomonas aeruginosa and Aspergillus, and the adjusted risk factors for chronic co-infection were chronic obstructive pulmonary disease (COPD) and pulmonary M. intracellulare disease. ▸ Chronic co-infection is common in patients with pulmonary MAC disease, and COPD and pulmonary M. intracellulare disease increase the risk of co-infection.
Although these pathogenic microorganisms can be isolated intermittently, chronic infections are known to have a higher clinical impact. [15][16][17][18] During the course of pulmonary NTM disease, co-infections with various bacteria other than NTM, such as P. aeruginosa, H. influenzae and Aspergillus, are occasionally observed. 19 20 However, previous studies of these infections included a relatively small number of participants with MAC disease, and patients with single NTM isolates were most likely only temporarily colonised. 6 Furthermore, although some host traits, such as chronic lung disease and autoimmune disease, and the use of immunosuppressive agents are known risk factors for infection in patients with bronchiectasis and cystic fibrosis, 18 21 22 the factors associated with microorganism co-infections in patients with pulmonary MAC remain unclear.
The aim of this study was to determine the prevalence of co-infection with non-MAC pathogenic microorganisms and to identify risk factors for co-infection among clinical, microbiological and radiological findings in patients with pulmonary MAC disease.
METHODS
Study design and population
This was a retrospective cohort study of 645 patients with pulmonary MAC who fulfilled the American Thoracic Society diagnostic criteria and who visited the Kyoto University Hospital from January 2001 to May 2013. 6 We reviewed patient characteristics, microbiological results and chest HRCT findings from institutional medical records. We excluded 370 patients: 295 patients who could not provide sputum samples at least twice in a year, a medical history and/or CT scan data; 74 patients who were followed for less than 12 months from the first visit to the last visit; and 1 patient who had complications of disseminated MAC infection and HIV infection. Finally, we analysed 275 patients with pulmonary MAC in this study. Laboratory and HRCT data from patients with any co-infecting microorganism were collected around the time that the co-infecting microorganism was first isolated, and the data from patients without microorganism co-infection were collected at the time of the first visit.
Microbiological classification
We defined chronic pathogenic microorganism co-infection (chronic co-infection) as the isolation of non-MAC potential pathogens from two or more sputum samples taken on two separate visits at least 3 months apart. Cultures did not necessarily have to be consecutive. Patients were defined as having an intermittent pathogenic microorganism co-infection (intermittent co-infection) when the potential pathogen had been isolated only once in the past. Patients with no pathogenic microorganism co-infection (no co-infection) did not have any potential pathogens isolated from any of the sputum samples at any time. 15 Since Staphylococcus aureus often colonises the human oropharynx, the sputum quality was checked according to the Geckler classification to distinguish between infection and colonisation. 23 Only sputum with a Geckler classification of 4 or 5 was selected for analysis. In addition, making a clear distinction between Aspergillus infection and colonisation is not feasible. Therefore, we have chosen to use the term infection throughout this article. 18
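The classification rule above can be summarised programmatically; the sketch below is illustrative only, with hypothetical input structure and a simplified date comparison, and it does not implement the Geckler quality filter. Repeated isolations closer together than 3 months are treated here as intermittent, which is an assumption, since the study defines intermittent co-infection as a single isolation.

```python
from datetime import timedelta

def classify_coinfection(isolation_dates):
    """isolation_dates: list of datetime.date objects on which a given
    non-MAC pathogen was isolated from sputum for one patient.

    Returns 'chronic' if the pathogen was isolated on two or more visits at
    least 3 months apart (cultures need not be consecutive), 'intermittent'
    if it was isolated only once, and 'none' otherwise."""
    dates = sorted(isolation_dates)
    if not dates:
        return 'none'
    if len(dates) == 1:
        return 'intermittent'
    # Any pair of isolation dates at least ~3 months (90 days) apart counts,
    # so it suffices to compare the first and last isolation dates.
    if dates[-1] - dates[0] >= timedelta(days=90):
        return 'chronic'
    return 'intermittent'
```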
Radiological findings
We assessed four cardinal HRCT findings (nodule, bronchiectasis, cavity and consolidation). We counted the extent and location of lung involvement and thoracic abnormalities (scoliosis and pectus excavatum) in the HRCT. We classified the following four radiographic forms according to previous reports: nodular/bronchiectatic (NB), fibrocavitary (FC), NB+FC and unclassified. 4 One board-certified thoracic radiologist who had no prior knowledge of the patients' profiles or laboratory test results read the HRCT images.
Statistical analysis
JMP V.9.0.0 was used for all statistical analyses. Group comparisons were made using the χ2 test or Fisher's exact test for categorical values and the Wilcoxon test for continuous values. To adjust for confounders, variables with a p value less than 0.05 on univariate analysis were entered into a multivariate logistic regression analysis. ORs and their respective 95% CIs were computed as estimates of relative risk. For all analyses, p values less than 0.05 were considered statistically significant.
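For readers who wish to reproduce the reported effect estimates, the arithmetic behind an OR and its 95% CI from a fitted logistic regression coefficient is shown below as a sketch; the analysis itself was performed in JMP, and this Python illustration and its variable names are assumptions.

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient (beta) and its standard
    error (se) into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),          # point estimate
            math.exp(beta - z * se), # lower 95% bound
            math.exp(beta + z * se)) # upper 95% bound
```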
Characteristics of the study population
The participants were predominantly female (205 patients, 74.5%) and infected with M. avium (208 patients, 75.6%). The mean age at diagnosis was 61.9 ±11.6 years, and the mean duration of MAC disease from diagnosis was 7.2±7 years. Bronchiectasis was the most frequent host trait (234 patients, 85.1%), followed by severe pneumonia (81 patients, 29.6%), malignant disease (57 patients, 20.7%) and prior tuberculosis (34 patients, 12.4%). Since it is often difficult to distinguish which comes first, the bronchiectasis or the pulmonary MAC disease, we counted bronchiectasis as an underlying disease when it was detected in the first HRCT. Autoimmune disease was recorded in 36 patients (13.1%), with 19 (52.8%) having rheumatoid arthritis (table 1). In the HRCT scans, nodules and bronchiectasis were the most common findings (86.2% and 85.1%), and they were predominantly located in the right middle lobe or lingula.
Patients with pulmonary M. intracellulare were older in age and had significantly lower body mass indices. These patients more frequently had host traits of severe pneumonia, malignant disease and autoimmune disease and used more systemic corticosteroids than patients with pulmonary M. avium (table 2). In the HRCT analysis, patients with pulmonary M. intracellulare had significantly more cavity findings and the NB+FC form of lung involvement than patients with M. avium (table 2).
Characteristics of patients and factors associated with chronic and intermittent co-infection
Compared with patients who did not have a co-infection, chronic co-infection was significantly associated with a history of severe pneumonia, chronic obstructive pulmonary disease (COPD), rheumatoid arthritis, use of systemic corticosteroids and pulmonary M. intracellulare disease. Intermittent co-infection was associated with pulmonary M. intracellulare disease alone. There was no significant difference in the history of MAC treatment and a negative conversion rate of MAC sputum cultures during the study period between patients with chronic co-infections and those without co-infection (table 3). There were no significant differences in the HRCT findings, the location of areas of lung involvement and thoracic abnormalities between patients with chronic or intermittent co-infection and those without co-infection. (table 4).
Characteristics of patients and factors associated with chronic MSSA co-infection
COPD, the use of inhaled corticosteroids and a longer duration of MAC disease were significantly associated with chronic MSSA co-infection in patients (table 6).
Of 64 patients with chronic MSSA co-infection, 41 patients (64.1%) had a MAC-positive sputum culture. Thirty-seven patients (57.8%) had a history of MAC treatment, and only two of these patients (5.4%) had a positive MSSA sputum culture during MAC treatment. After 46 patients had converted sputum cultures of MAC, 32 patients (69.6%) had a positive MSSA sputum culture (tables 6 and 9). Thirty-two of 64 (50%) patients with chronic MSSA co-infection had received antibiotic treatment for their co-infection.
Characteristics of patients and factors associated with chronic P. aeruginosa co-infection
A history of severe pneumonia, COPD or autoimmune disease including rheumatoid arthritis; the use of systemic corticosteroids and immunosuppressive agents and pulmonary M. intracellulare disease were significantly associated with the development of chronic P. aeruginosa co-infections (table 6). The areas of lung involvement in patients with chronic P. aeruginosa co-infections were predominantly located in the lower lobe (table 7). In the multivariate analysis, COPD (OR 7.5; 95% CI 2.1 to 31.4; p=0.0017) and lung involvement in the lower lobe on HRCT (OR 9.9; 95% CI 2.0 to 90.6; p=0.0027) were significantly associated with chronic P. aeruginosa co-infection (table 8).
Of the 35 patients with chronic P. aeruginosa co-infection, 9 (25.7%) had P. aeruginosa detected in a MAC-positive sputum culture. Of the 24 patients with a history of MAC treatment, 18 (75%) had a positive P. aeruginosa sputum culture during MAC treatment. After 29 patients had a converted sputum culture of MAC, 27 (93.1%) also had a positive P. aeruginosa sputum culture (tables 6 and 9). Seventeen of 35 (48.6%) patients with chronic P. aeruginosa co-infection had received antibiotic treatment for their co-infection.
Characteristics of patients and factors associated with chronic Aspergillus co-infection
Of the 18 patients with chronic Aspergillus co-infection, 15 (83.3%) had a chronic necrotising pulmonary aspergillosis (CNPA), with 5 (33.3%) having pulmonary aspergilloma and 3 (16.7%) having an allergic bronchopulmonary aspergillosis (ABPA). Of the 6 patients using systemic corticosteroids, 5 had CNPA and 1 had ABPA. Male sex; a history of severe pneumonia, asthma, tuberculosis or autoimmune disease including rheumatoid arthritis; the use of systemic corticosteroids and pulmonary M. intracellulare disease were significantly associated with chronic Aspergillus co-infection in patients (table 6). In the multivariate analysis, the use of systemic corticosteroids (OR 7.1; 95% CI 1.2 to 50.9; p=0.034) and pulmonary M. intracellulare disease (OR 4.0; 95% CI 1.1 to 14.5; p=0.036) was significantly associated with chronic Aspergillus co-infection (table 8).
Of the 18 patients with chronic Aspergillus co-infection, 9 (50%) were positive for Aspergillus spp at the time of MAC-positive sputum culture. Of the 11 patients with a history of MAC treatment, 9 (81.8%) had a positive Aspergillus sputum culture during MAC treatment. After 11 patients converted a sputum culture of MAC, 9 (81.8%) had a positive Aspergillus sputum culture (tables 6 and 9). Ten of the 18 (55.6%) patients with chronic Aspergillus co-infection had received antibiotic treatment for their co-infection.
DISCUSSION
Previous studies in patients with bronchiectasis have shown that H. influenzae and P. aeruginosa were the more prevalent pathogens and that S. aureus was a less common pathogen. 12 19 24 25 In contrast, a previous study in patients with bronchiectasis and NTM infection reported that P. aeruginosa (51%) and S. aureus (28%) were often isolated, whereas H. influenzae (12%) was rarely isolated. 19 As compared with these previous studies, our study showed that chronic and intermittent microorganism co-infection was observed in 45.1% and 14.9%, respectively, of patients with pulmonary MAC disease. The majority of co-infecting microorganisms were MSSA, followed by P. aeruginosa and Aspergillus spp. We found that co-infection with Aspergillus spp is the third most prevalent infection in patients with pulmonary MAC disease.
CNPA was occasionally complicated during a long course of MAC disease. 26 Kunst et al reported that Aspergillus-related lung disease was more common in patients with bronchiectasis and NTM. Although they used serological markers but not sputum culture for the diagnosis of Aspergillus-related lung disease, they showed that NTM infection predisposed patients with bronchiectasis to Aspergillus-related lung disease. 20 In this study, most of our participants had bronchiectasis, and all 18 patients with chronic Aspergillus infection had cultureproven Aspergillus-related lung disease (15 patients with CNPA and 3 patients with ABPA).
In patients with cystic fibrosis, chronic Methicillin-resistant Staphylococcus aureus (MRSA) infection caused a rapid decline in lung function, and chronic Aspergillus infection was more frequently associated with both low lung function and increased risk of hospitalisation than intermittent Aspergillus infection or no infection. 17 18 In patients with bronchiectasis, the baseline lung function of patients with chronic P. aeruginosa infection was lower than that of patients either with intermittent P. aeruginosa infection or without an infection. 16 Others reported that chronic P. aeruginosa infection was associated with an accelerated decline in lung function. 13 27 Therefore, we divided our group of co-infected patients into those with chronic co-infections and those with intermittent co-infections. In this study, we found that these three microorganisms were predominantly isolated from chronically co-infected patients (71.9% with an MSSA infection, 77.8% with a P. aeruginosa infection and 62.1% with an Aspergillus infection).
Previous studies have demonstrated that the risk factors for microorganism infection in patients with bronchiectasis and cystic fibrosis include COPD, 21 rheumatoid arthritis, 22 a long duration of the disease 12 and the use of immunosuppressive agents. 18 22 Compared with these previous studies, our study found that patients with COPD were at an increased risk of chronic infection with any pathogenic microorganisms or with MSSA or P. aeruginosa individually. A long duration of MAC disease (≥8 years) was significantly associated with chronic MSSA co-infection. The use of systemic corticosteroids was significantly associated with chronic Aspergillus spp co-infection. These factors for microorganism co-infection in patients with pulmonary MAC disease are similar to those in patients with bronchiectasis and cystic fibrosis.
Since COPD and systemic corticosteroid use also increased the risk of pulmonary NTM disease, 28-30 close attention to pulmonary MAC disease and other co-infections is needed in these patients.
A recent study comparing the features of patients with pulmonary M. avium and M. intracellulare disease showed that patients with pulmonary M. intracellulare disease had more severe symptoms, including the FC form of the disease, and a worse prognosis. 5 In this study, we found that pulmonary M. intracellulare disease was significantly associated with intermittent co-infection and chronic co-infection, especially Aspergillus co-infection. Patients with pulmonary M. intracellulare disease also more frequently had cavitary findings and underlying host traits (table 2). Therefore, patients with pulmonary M. intracellulare disease may have more lung deterioration than patients with pulmonary M. avium disease and thus be predisposed to the development of microorganism co-infection. In our study participants, clarithromycin, rifampicin and ethambutol were the most commonly used drugs for MAC treatment. The historical use of these antibiotics in patients with MAC disease did not differ among patients with MSSA, P. aeruginosa and Aspergillus co-infections (table 6). However, since MSSA is susceptible to clarithromycin and rifampicin, MAC treatment markedly suppressed the sputum isolation of MSSA, but only during MAC treatment. In contrast, P. aeruginosa and Aspergillus were isolated during MAC treatment, owing to the lack of susceptibility of Pseudomonas and Aspergillus to these drugs.
Recently, Binder et al 31 reported that cystic fibrosis patients with MAC were less likely than those without MAC to be colonised with P. aeruginosa. Winthrop et al also showed that non-cystic fibrosis bronchiectasis patients with NTM were less likely than those without NTM to be colonised with Pseudomonas spp as indicated in the US Bronchiectasis Registry. 32 In this study, MSSA was similarly isolated in MAC-positive sputum cultures and after MAC sputum conversion (table 9). However, we found that P. aeruginosa was less frequently isolated from positive MAC sputum cultures and more often isolated after MAC sputum conversion (tables 6 and 9). Although we investigated only patients with pulmonary MAC disease and did not include patients without MAC disease in this study, we found that P. aeruginosa was increasingly isolated after negative sputum conversion of MAC in patients who were originally MAC-positive and that P. aeruginosa was less likely to be isolated concurrently with MAC. Therefore, our data support these previous studies. 31 32 The existence of lung nodules was associated with chronic MSSA co-infection in this study. Morikawa et al previously reported that centrilobular nodules (63.9%) were more common than consolidation (51.8%) and bronchiectasis (12.0%) in patients with MSSA pneumonia. Since MSSA was rarely isolated during the antibiotic treatment of MAC in this study, some of the nodules found in patients with chronic MSSA co-infection might have been associated with MSSA pneumonia. 33 Patients with chronic P. aeruginosa infection had greater areas of lung involvement in the lower lobes than patients without co-infection in this study. Previous studies showed that P. aeruginosa pneumonia was predominantly involved in the lower lung zone. 34 35 Even after negative sputum conversion of MAC, P. aeruginosa remained positive in sputum cultures (table 9), and these areas of lower lung involvement were observed in follow-up CTs (data not shown). Therefore, some of the areas of lower lobe involvement in patients with chronic P. aeruginosa infection were most likely due to P. aeruginosa infection.
This study had the limitation of being a retrospective observational study. We could not regularly follow sputum examinations or chest CT evaluations for every participant. More than half of the patients were excluded from our cohort due to missing sputum examinations and chest CT evaluations. These excluded patients might have had a different frequency of microorganism isolation from the participants in this study. Therefore, the recruitment of additional patients and the collection of additional sputum samples might allow more pathogenic microorganisms to be isolated and thus alter the prevalence of specific co-infections. However, since most of the excluded patients had few symptoms and less expectoration of sputum, the results of this study should reflect a symptomatic population. Also, since the university hospital is a tertiary referral hospital, more patients with severe conditions or with multiple complications are likely to be referred. Furthermore, this study was conducted at only a single centre. These factors may cause patient selection bias. In this study, multiple statistical tests were applied to the different co-infection subgroups, and this carries a risk of false-positive associations; hence, the findings of this subgroup analysis should be viewed as hypothesis-generating rather than definitive. Finally, since we did not analyse an association of co-infection with outcome or prognosis, we could not show the clinical significance of co-infection in this study. In conclusion, we showed a high prevalence of chronic co-infections with pathogenic microorganisms in patients with pulmonary MAC disease. MSSA, P. aeruginosa and Aspergillus were the most prevalent isolated microorganisms. COPD and pulmonary M. intracellulare disease were risk factors for chronic co-infection.
Contributors KF conducted the study design, collected and analysed the data and drafted the manuscript. YI was principally responsible for the study design, recruited patients, collected and interpreted the data and critically revised the manuscript. TH recruited patients, collected and interpreted the data and revised the manuscript. TK analysed the data and revised the manuscript. KT, SI and MM contributed to the interpretation of data.
Funding This study was supported by Grants-in-Aid for Scientific Research by the Japanese Society for the Promotion of Science grant 24591479.
Competing interests None.
Ethics approval This study was approved by the Kyoto University Medical Ethics Committee (Approved number: E-1863).
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work noncommercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http:// creativecommons.org/licenses/by-nc/4.0/ | v2 |
2016-05-04T20:20:58.661Z | 2013-02-08T00:00:00.000Z | 13977299 | s2orc/train | Alterations in the Colonic Microbiota in Response to Osmotic Diarrhea
Background & Aims Diseases of the human gastrointestinal (GI) tract are often accompanied by diarrhea with profound alterations in the GI microbiota termed dysbiosis. Whether dysbiosis is due to the disease itself or to the accompanying diarrhea remains elusive. With this study we characterized the net effects of osmotic diarrhea on the composition of the GI microbiota in the absence of disease. Methods We induced osmotic diarrhea in four healthy adults by oral administration of polyethylene glycol 4000 (PEG). Stool as well as mucosa specimens were collected before, during and after diarrhea and 16S rDNA-based microbial community profiling was used to assess the microbial community structure. Results Stool and mucosal microbiotas were strikingly different, with Firmicutes dominating the mucosa and Bacteroidetes the stools. Osmotic diarrhea decreased phylotype richness and showed a strong tendency to equalize the otherwise individualized microbiotas on the mucosa. Moreover, diarrhea led to significant relative shifts in the phyla Bacteroidetes and Firmicutes and to a relative increase in the abundance of Proteobacteria on the mucosa, a phenomenon also noted in several inflammatory and diarrheal GI diseases. Conclusions Changes in microbial community structure induced by osmotic diarrhea are profound and show similarities to changes observed in other GI diseases including IBD. These effects must therefore be considered when analyzing specimens from diarrheal diseases (i.e. samples should be stratified according to diarrheal status) or specimens obtained after bowel preparation with agents such as PEG (i.e. specimens collected during endoscopy).
Introduction
The human GI tract is populated by a complex community of microorganisms that play a pivotal role in the maintenance of health and the development of disease [1,2]. Current knowledge indicates a crucial role for the GI microbiota in extracting nutrients from the diet, thereby influencing host metabolism, body growth and weight [3]. Moreover, it is a barrier against colonization with pathogens and is essential for mucosal homeostasis and for the maturation and correct function of the GI immune system [4]. Because our GI tract and its microbiota are interdependent, disease will affect both. A variety of GI diseases including chronic inflammatory bowel disease (IBD), irritable bowel syndrome (IBS) and antibiotic-associated diarrhea (AAD) show specific alterations of the microbial community, called dysbiosis, and these diseases are thought to be driven at least in part by these alterations [5-12]. Nevertheless, it is questionable whether dysbiosis itself causes these diseases or is just an epiphenomenon due to a microbial habitat altered by other pathophysiological factors [11,12].
A hallmark of many GI diseases is diarrhea, which often correlates with the severity of disease. Diarrhea is characterized by increased stool frequency, decreased stool consistency and increased stool weight. Pathophysiologic mechanisms leading to diarrhea include increased amounts of fluid in the intestinal lumen due to osmotically active substances (osmotic diarrhea), impaired absorption or increased secretion of water and electrolytes (secretory diarrhea) and accelerated intestinal transit [13,14]. Diarrhea is often caused by a combination of these mechanisms, which furthermore leads to intestinal malabsorption of nutrients such as fat or bile acids, altering the milieu within the gut [15,16]. In addition, accelerated passage of the luminal content influences the composition of the microbial community: microbes that replicate slowly or live in a particle-associated or free-living state are subject to wash-out and are negatively selected relative to microbes that adhere to the mucosa or replicate quickly [17]. This illustrates that variation in just one parameter of GI physiology, such as increased transit or increased amounts of fluid in the lumen, might have a profound influence on the microbial composition of our gut. Thus, interpretation of microbial community alterations in the context of a specific disease must take these accompanying effects into account.
To understand the effects of diarrhea on the composition of the GI microbiota we performed a longitudinal study wherein we induced osmotic diarrhea in four healthy adults by oral administration of polyethylene glycol 4000 (PEG). PEG is a polymer that is neither absorbed by the intestine nor metabolized by intestinal bacteria. It is a pure osmotic agent that binds water in the intestinal lumen and so leads to diarrhea when administered in higher doses [18]. It is used to treat constipation and to cleanse the bowel prior to endoscopy. Stool as well as mucosa samples were collected before, during and after induction of diarrhea and subjected to culture-independent 16S rDNA-based microbiota profiling using barcoded pyrosequencing.
Study Protocol
Four healthy adult Caucasian males (subjects A, B, C, D) participated in this study (age range 36-47 years, BMI range 24-26.6). The subjects had had neither antibiotic therapy nor episodes of diarrhea for at least 1 year prior to the study. Stool frequency and consistency were recorded daily during the study and assessed according to the Bristol stool chart [19]. After 6 days on a free diet without interventions (pre-treatment period) the subjects were placed on a standard diet (85 g protein, 77 g fat, 250 g carbohydrates, 25 g fiber, total calorie count 2150 kcal/d) for five days. Oral water intake was not restricted. On the third day of the diet diarrhea was induced with the osmotic laxative polyethylene glycol 4000 (Forlax®, Merck, Vienna, Austria) in a dose of 50 g tid (150 g per day). PEG was administered in addition to the standard diet for three days (diarrhea period). Thereafter the subjects again noted their stool behavior without any interventions on a free diet for seven days (post-treatment period) (Fig. 1). The first day of PEG administration and the first day after PEG administration were considered equilibration days and were not included in the analysis of bowel habits. Stool samples were obtained at four different time points. Two baseline samples were taken before induction of diarrhea, sample 1 on a free diet at the beginning of the study (time-point 1, pre-treatment period, day -7) and sample 2 seven days later on the second day of the diet (time-point 2, diet period, day 0). Sample 3 was taken from the first stool on the third day of PEG intake while subjects were on the standard diet (time-point 3, diarrhea period, day 3). Sample 4 was taken 7 days after withdrawal of PEG and the standard diet (time-point 4, post-treatment period, day 10). Colonic biopsy samples were obtained from three of the four subjects (subjects B, C, D) at two different time points, sample 1 on the second day of the standard diet before diarrhea was induced (time-point 2, diet period, day 0) and sample 2 on the third day of PEG administration (time-point 3, diarrhea period, day 3). Biopsies were taken from the sigmoid colon 25 cm proximal to the anal canal by flexible sigmoidoscopy without bowel preparation. The mucosa of the area was flushed gently three times with 20 ml of physiological saline solution before two biopsies were taken. Stool samples (abbreviated in figures and tables as F) and mucosa samples (abbreviated in figures and tables as M) were immediately frozen and stored at -20 °C.
Ethics Statement
The study was approved by the institutional review board of the Medical University of Graz (protocol no. 20-090 ex 08/09) and written informed consent was obtained from all subjects.
DNA Isolation and PCR Amplification
DNA was extracted from stools with the QIAamp DNA Stool Mini kit and from biopsies with the QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) according to the recommended protocol. The stool homogenate was incubated in a boiling water bath for 5 min prior to DNA extraction to increase bacterial DNA yield as recommended. The variable V1-V2 region of the bacterial 16S rRNA gene was amplified with PCR using oligonucleotide primers BSF8 and BSR357 as described previously [20]. This 16S rDNA region was chosen since it gives robust taxonomic classification and has been shown to be suitable for community clustering [21]. We included a sample-specific six-nucleotide barcode sequence on primer BSF8 to allow for a simultaneous analysis of multiple samples per pyrosequencing run [22]. Oligonucleotide sequences are given in Table S1. PCR conditions were as follows: 100 ng DNA from stool samples or 10 ng from biopsy samples were subjected to PCR amplification in a total volume of 50 µl with 1× HotStar Master Mix (Qiagen) and 20 µM of each primer. For stool samples the following PCR protocol was used: initial denaturation at 95 °C for 12 min followed by 22 cycles of 95 °C for 30 sec, 56 °C for 30 sec, and 72 °C for 1 min and a final step of 72 °C for 7 min. For biopsy samples the following PCR protocol was used: initial denaturation at 95 °C for 12 min followed by 35 cycles of 95 °C for 30 sec, 56 °C for 30 sec, and 72 °C for 1 min and a final step of 72 °C for 7 min. PCR products were separated on a 1% 1× TAE agarose gel and specific bands (~300 bp) were excised and gel extracted using the Qiagen gel extraction kit (Qiagen). Each sample was amplified and extracted three times independently and subsequently pooled. Purified PCR products were assessed on BioAnalyzer 2100 DNA 1000 chips (Agilent Technologies, Vienna, Austria) for size and integrity. DNA concentration was determined fluorometrically using the QuantiDect reagent (Invitrogen, Carlsbad, CA). An amplicon library was prepared using equimolar amounts of PCR products derived from the individual samples and bound to the sequencing beads at a one molecule per bead ratio. Long Read Amplicon Sequencing using 70×75 PicoTiter Plates (Roche Diagnostics, Vienna, Austria) was done on a Genome Sequencer FLX system (Roche Diagnostics) according to the manufacturer's instructions.

Figure 1. Study design. Subjects were on a free diet from day -7 to day -2 and from day 4 to day 10. From day -1 to day 0 a standardized diet was ingested. Diarrhea was induced by PEG for 3 days (day 1 to day 3). One stool sample was obtained one week before induction of diarrhea. Before the first dose of PEG a second stool sample and a mucosa sample were collected. A third stool and a second mucosa sample were taken at day three of PEG administration when diarrhea was maximally pronounced. A fourth stool sample was taken one week after withdrawal of PEG. doi:10.1371/journal.pone.0055817.g001
Phylogenetic Analysis
As the initial step the data set was de-noised using the method described by Quince et al. [23,24] to avoid OTU inflation due to sequencing errors. All sequences shorter than 150 bp, containing any ambiguous characters, or not matching the forward primer (distance >2) were discarded [25]. Subsequently, chimeric sequences were identified with Uchime [26] and removed together with contaminant (human) sequences. The remaining sequences were assigned to their respective samples by using the sample-specific 6 bp barcode preceding the primer. In order to perform sample- and time-point-wide comparisons, operational taxonomic units (OTUs) were generated with an extended Ribosomal Database Project (RDP)-Pyrosequencing approach [27], which was integrated in the phylotyping pipeline SnoWMAn (http://SnoWMAn.genome.tugraz.at) [28]. Briefly, all sequences were pooled and aligned with Infernal (V1.0) using a 16S rRNA secondary-structure-based model for accurate positional alignment of sequences [29]. The aligned sequences were clustered by complete linkage to form OTUs at sequence distances ranging from 0% to 5%. For each OTU a representative sequence was extracted and a taxonomic classification was assigned to it using the RDP Bayesian classifier 2.0.1 [30]. Finally, the pooled sequences were again separated according to their sample affiliation. Taxonomic classification and biostatistical analyses reported in this paper were performed on the clustering results for 3% distance.
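The clustering step above was run inside the SnoWMAn pipeline; the short Python sketch below only illustrates the underlying idea of complete-linkage OTU formation at a 3% distance cut-off with one representative per OTU. The distance function, sequence names and toy reads are illustrative assumptions, not the authors' implementation.

```python
# Sketch of complete-linkage OTU clustering at a 3% distance threshold.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def pairwise_distance(a, b):
    """Fraction of mismatching positions between two aligned sequences."""
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' or y != '-']
    mismatches = sum(x != y for x, y in pairs)
    return mismatches / len(pairs) if pairs else 0.0

def cluster_otus(aligned_seqs, threshold=0.03):
    names = list(aligned_seqs)
    n = len(names)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = pairwise_distance(aligned_seqs[names[i]], aligned_seqs[names[j]])
            dist[i, j] = dist[j, i] = d
    # Complete linkage ensures every pair within an OTU is within the threshold.
    tree = linkage(squareform(dist), method='complete')
    labels = fcluster(tree, t=threshold, criterion='distance')
    otus = {}
    for name, label in zip(names, labels):
        otus.setdefault(label, []).append(name)
    # Pick the first member of each cluster as a stand-in representative.
    representatives = {label: members[0] for label, members in otus.items()}
    return otus, representatives

# Toy usage with three short aligned reads (hypothetical).
reads = {"read1": "ACGTACGT", "read2": "ACGTACGA", "read3": "TTTTACGT"}
print(cluster_otus(reads, threshold=0.03))
```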
Statistical Analysis and Visualization
The analyses were conducted using the statistical environment R (V2.12.1) [31]. Species richness was estimated with the Chao1 estimator [32]. The abundance-based coverage estimator (ACE), diversity and evenness were calculated using the R package "BiodiversityR" (V1.5) [33]. Sequence abundance in each sample was normalized to the sample with the maximum number of sequences. Normalization factors ranged between 1.06 and 2.69. Additionally, abundance data were log-2 transformed after adding a value uniformly distributed between 0.75 and 1.25 to down-weight OTUs with high abundance and to resemble the normal Gaussian distribution more closely. Principal component analysis (PCA) on the normalized, log-2 transformed data was performed with the prcomp function of R. OTUs significantly changing between time points were assessed either with Metastats using default settings [34] or the R package "edgeR" (V2.14.7) using a linear model accounting for the paired nature of the data [35]. To account for multiple comparisons, p-values were adjusted by the method proposed by Benjamini and Hochberg [36]. Adjusted p-values less than 0.05 were considered statistically significant. Changes between time points on the level of taxonomic ranks were investigated using a paired t-test or a ratio paired t-test. The latter tests the ratio of the relative abundances (time-point 3 : time-point 2) against 1.
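The sketch below re-expresses the core of this workflow (normalization to the deepest sample, the jittered log-2 transform, PCA and Benjamini-Hochberg adjustment) in Python/NumPy rather than the R packages named above; the toy count matrix, random seed and p-values are assumptions for illustration only.

```python
# Pre-processing, ordination and multiple-testing adjustment, re-sketched in NumPy.
import numpy as np

rng = np.random.default_rng(42)
counts = rng.poisson(lam=5.0, size=(10, 200)).astype(float)  # toy samples x OTUs

# 1) Normalize each sample to the sample with the maximum total read count.
totals = counts.sum(axis=1)
norm = counts * (totals.max() / totals)[:, None]

# 2) Add uniform noise in [0.75, 1.25) and log2-transform to down-weight
#    highly abundant OTUs.
logged = np.log2(norm + rng.uniform(0.75, 1.25, size=norm.shape))

# 3) PCA via SVD of the mean-centred matrix (what R's prcomp does by default).
centred = logged - logged.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = u * s                       # sample coordinates on the components
explained = s**2 / (s**2).sum()      # fraction of variance per component

# 4) Benjamini-Hochberg adjustment for a vector of raw p-values.
def bh_adjust(pvals):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adjusted = np.empty_like(p)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

print(scores.shape, explained[:3], bh_adjust([0.01, 0.04, 0.03, 0.20]))
```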
Scoring Approach and Visualization of OTUs According to their Change in Abundance
To visualize the change in OTUs' abundance in relation to diarrhea we used a scoring system in which we assigned each OTU to a respective increasing/decreasing pattern. First, we calculated the mean relative abundance of each OTU across the two pre-diarrhea states (time-point 1 and time-point 2). Together with the corresponding relative abundance values for diarrhea (time-point 3) and post-diarrhea (time-point 4), a three-point profile (pre-diarrhea, diarrhea, post-diarrhea) of each OTU could be drawn. Only OTUs experiencing an abundance change of at least 0.05% in relation to the respective sample were included. Subsequently, a scoring system was introduced that assigned values of -1 (decreasing abundance between two states), +1 (increasing abundance) or 0 (relative abundance change < ±0.05%) to the (two) slopes of this profile. The score for the first slope was multiplied by 3 and added to the score of the second slope, yielding a specific overall score for each OTU that related to one of the nine possible profile patterns. For mucosa samples, which were only represented by pre-diarrhea (time-point 2) and diarrhea (time-point 3) states, three 2-point profiles were generated in a similar fashion. Finally, OTUs were assigned to their respective reaction pattern and these associations were visualized with Cytoscape [37].
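Below is a minimal sketch of this scoring scheme, assuming relative abundances expressed in percent and the 0.05% change threshold stated above; the pattern names and example profiles are illustrative, not taken from the study.

```python
# Reduce each OTU's three-point abundance profile to one of nine patterns.
THRESHOLD = 0.05  # minimum relative-abundance change (in %) counted as a slope

def slope_score(before, after):
    change = after - before
    if abs(change) < THRESHOLD:
        return 0
    return 1 if change > 0 else -1

def otu_pattern(pre, dia, post):
    """Overall score in {-4, ..., 4}: 3 * first slope + second slope."""
    return 3 * slope_score(pre, dia) + slope_score(dia, post)

PATTERN_NAMES = {
     4: "increase-increase",  3: "increase-stable",   2: "increase-decrease",
     1: "stable-increase",    0: "stable-stable",    -1: "stable-decrease",
    -2: "decrease-increase", -3: "decrease-stable",  -4: "decrease-decrease",
}

# Hypothetical profiles: relative abundance (%) pre-diarrhea, diarrhea, post-diarrhea.
profiles = {"OTU_A": (0.4, 1.2, 0.5), "OTU_B": (2.00, 2.01, 2.02)}
for otu, (pre, dia, post) in profiles.items():
    print(otu, PATTERN_NAMES[otu_pattern(pre, dia, post)])
```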
Data Availability
Sequence data generated for this work can be accessed via the EBI short read archive (EBI SRA) under the accession number ERP002098.
A Highly Individualized Colonic Microbiota with Different Community Structures in Stools and on the Mucosa
After denoising and filtering the data set for chimeras and contaminant (human) sequences, 452,363 high-quality 16S rDNA sequences with an average length of 246 bp (range 230-277 bp) remained, yielding an average of 20,562 sequences per sample (Table S2). The RDP classifier (80% bootstrap cutoff) assigned 10 phyla, but only 7 phyla were represented by more than 20 sequences. Most sequences were related to the phyla Bacteroidetes (52.6%), Firmicutes (43.1%), Proteobacteria (4%) and Actinobacteria (0.2%) [38].
We noted a strikingly different phylum distribution between stool and mucosa samples. In stools Bacteroidetes dominated (69.5±5.8%) followed by Firmicutes (22.1±4.7%), whereas on the mucosa Firmicutes (75.2±13.7%) were more abundant than Bacteroidetes (17.8±12.7%) (Fig. 2A). Proteobacteria were also more abundant on the mucosa than in the stools (5.5±11.1% vs. 2.1±1.2%). When the representation of phyla was compared between matched stool and mucosa samples (i.e. from the same individual at the same time point) and p-values were corrected for multiple comparisons, Firmicutes (adjusted P = 0.001), Proteobacteria (adjusted P = 0.027), Actinobacteria (adjusted P<0.001) and Cyanobacteria (adjusted P<0.001) were more abundant on the mucosa and Bacteroidetes more abundant in stools (adjusted P = 0.016). Although we noted a trend towards increased microbial richness on the mucosa compared to stools as indicated by the rarefaction analysis (Fig. 2B), richness was not significantly different between the two habitats (P = 0.1913 and P = 0.989 at time-points 2 and 3, respectively). Microbial diversity and evenness, both measures of the uniformity of the phylotype assembly, also showed no statistical difference between matched stool and mucosa samples (Table S3).
Stool microbiotas were highly individualized; interpersonal variation significantly exceeded intrapersonal variation irrespective of diarrhea (P≤0.0077, Student's t-test) as shown by the principal component analysis (PCA; Fig. 3A). Stool and mucosa samples represented significantly different microbial communities (P = 0.0002, Student's t-test) when matched stool and mucosa samples were analyzed by PCA, which clearly separated the two habitats irrespective of their origin from different individuals (Fig. 3B). Mucosa samples also showed more shared phylotypes between individuals than stool samples and this proportion increased during diarrhea (13.8% vs. 8.7% at time-point 2 and 25.7% vs. 10.4% at time-point 3; Fig. 4).
The most abundant phylotypes across all stool specimens were dominated by Bacteroidetes. In three individuals these were represented by Bacteroides (individuals B, C and D), resembling the recently published enterotype 1, and in one (individual A) by Prevotella, resembling enterotype 2 [39,40]. Often the most abundant stool phylotypes were individual-specific and were rarely detected or absent in stool specimens from other individuals (Table S4). The most abundant phylotypes in mucosa specimens were dominated by lactic acid bacteria (Weissella, Leuconostoc, Lactococcus), which were rarely detected or completely absent in stool specimens from the same person, underscoring the difference in microbial habitat composition (Table S5). Interestingly, the two most abundant mucosal phylotypes matched the exopolysaccharide producers Weissella confusa and Weissella cibaria (OTU_61 and OTU_24; BLAST: 100% homology each). Both were also considered stable phylotypes (i.e. no significant relative abundance change with respect to diarrhea; see below).
Consequences of Osmotic Diarrhea: Reduction of Microbial Richness and Convergence of Individualized Microbiotas on the Mucosa
The administration of PEG increased stool frequency (6.0±1.5 vs. 1.2±0.6 bowel movements/day) and decreased stool consistency (stool type: 6.7±0.6 vs. 3.0±0.9) in all 4 individuals (Table S6). The effect of diarrhea on the individual microbiotas was readily identifiable in the PCA, wherein community variation at time-point 3 exceeded intrapersonal variation between time-points 1 and 2 (Fig. 3). Diarrhea also led to a significant decrease in phylotype richness in stools (P = 0.0295, paired t-test), further evidenced by decreased Chao1 and abundance-based coverage (ACE) richness estimators comparing time-point 2 with time-point 3 (P = 0.017 and P = 0.0218, respectively; Table S3). Although overall decreased richness due to diarrhea was evident in the rarefaction analysis of mucosa specimens (Fig. 5), this difference did not reach statistical significance (P = 0.0801). Phylotype diversity and evenness showed no significant difference between pre-diarrhea and diarrhea samples, either in stools or on the mucosa (Table S3). PCA clearly separated mucosa from stool samples, reflecting the different niches, and also separated pre-diarrhea mucosa samples by individual. It was noteworthy that diarrhea led to a prominent shift of the mucosal communities, which significantly differed from pre-diarrheal mucosal communities in the PCA (P = 0.0044, Student's t-test). Diarrhea-state mucosal communities clustered together in the PCA, indicating an equalization of the otherwise individualized microbiotas (Fig. 3B). Diarrhea also led to an increase in the number of shared phylotypes between individuals that was most pronounced in the mucosa samples at time-point 3 (Fig. 4).
The capacity of stool microbiotas to reconstitute was assessed by comparing samples from diarrhea (time-point 3) and post-diarrhea (time-point 4). Although species richness increased significantly towards time-point 4 in stools (P = 0.042), an overall reduced species richness persisted during the one-week interval after PEG administration (Fig. S1; Table S3).
Unaltered Community Members in Response to Osmotic Diarrhea
To understand the community changes induced by PEG administration in more detail we assessed the relative abundance change of phylotypes during the course of the study. Depending on the stressor acting on the microbial community (i.e. wash-out due to osmotic diarrhea) and the life-style of the respective microbes (adherent vs. living in suspension), certain phylotypes should experience a more pronounced abundance change than others. Thus we assessed the coefficient of variation (CV) of the relative abundances of phylotypes between time-point 2 and time-point 3 samples. A CV of ≤10% was chosen as the threshold and only phylotypes represented by at least 10 reads per individual were considered. This analysis revealed that only a small fraction of phylotypes exhibited stable behavior and the proportion of these so-called "stable" phylotypes differed greatly between subjects (Table S7). The majority of stable phylotypes were specific to the individuals, meaning that a phylotype showing stable behavior in one individual showed non-stable behavior in the other individuals according to our definition. In stools only one stable phylotype was found in two individuals simultaneously (OTU_1199; Lachnospiraceae), while there was none in the mucosa samples. In stool samples the stable phylotype with the highest abundance was represented by Bacteroides vulgatus (OTU_33; BLAST homology 100%), but only in one individual (Table S4, Table S7). In the mucosa samples the stable phylotypes with the highest abundance were represented by Weissella confusa and Weissella cibaria (OTU_61 and OTU_24, respectively; BLAST homology 100%), which also represented top-abundant phylotypes on the mucosa as mentioned above (Table S5). Several low-abundant phylotypes were also considered stable (Table S7). In general, Firmicutes were overrepresented among stable phylotypes in both mucosa and stool samples (Table S7). The finding that the number of stable phylotypes differed greatly between individuals highlights the high degree of individualization of the GI microbiota. Moreover, stable behavior seems to be related to the individual and/or the microbial community itself and not to the phylotype per se.
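The stability criterion can be illustrated with a short sketch. It assumes that the CV is computed per individual from the two relative abundances at time-points 2 and 3 and that the 10-read minimum applies to both samples; the example counts are hypothetical.

```python
# "Stable phylotype" check: CV of relative abundance between two time points.
import numpy as np

def is_stable(reads_tp2, reads_tp3, total_tp2, total_tp3,
              min_reads=10, max_cv=0.10):
    if reads_tp2 < min_reads or reads_tp3 < min_reads:
        return False
    rel = np.array([reads_tp2 / total_tp2, reads_tp3 / total_tp3])
    cv = rel.std(ddof=1) / rel.mean()  # sample CV of the two relative abundances
    return cv <= max_cv

# e.g. an OTU with 210/20000 reads before and 200/21000 reads during diarrhea
print(is_stable(210, 200, 20000, 21000))
```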
Altered Community Members in Response to Osmotic Diarrhea
We next looked for phylotypes showing a significant relative abundance change in response to diarrhea by comparing time-point 2 with time-point 3 samples. In stools we also assessed significantly changing phylotypes involved in reconstitution by comparing time-point 3 with time-point 4 samples. We initially performed this analysis at the levels of phylogenetic ranks from phylum down to genus. After testing for multiple comparisons only Rikenellaceae (family level; adjusted P = 0.000), Alistipes and Holdemania (genus level; adjusted P = 0.000 and P = 0.032, respectively) showed a significant relative decrease in response to diarrhea in stools (Table S8). No significantly changing taxon during the reconstitution phase (comparing time-point 3 with time-point 4) could be identified in stools (Table S9). In the mucosa samples Rikenellaceae (family level; adjusted P = 0.000) and Alistipes (genus level; adjusted P = 0.000) also showed a significant relative decrease in response to diarrhea (Table S10). Interestingly, we noted a relative increase of the proteobacterial taxon Acinetobacter (genus level; P = 0.038) on the mucosa during diarrhea.

Figure 7 (caption; beginning truncated in the source): ...showing an increasing-decreasing pattern. Only OTUs are displayed that were assigned to a respective reaction pattern in at least two individuals (corresponding to thin lines connecting OTUs with their pattern). The width of lines correlates with the number of individuals in whom an OTU was assigned to a specific pattern. Size of nodes correlates with the sum of changes during the study period (mean relative abundance change comparing pre-diarrhea to diarrhea and diarrhea to post-diarrhea samples). OTUs are colored according to their phylum membership and named according to the taxonomic rank conferred by the RDP classifier (80% identity threshold). M denotes significantly changed according to Metastats analysis (P<0.05); E denotes significantly changed according to edgeR analysis (P<0.05). OTUs identified by both biostatistical methods are highlighted with a bold outline. Note the increase of Faecalibacterium due to diarrhea (upper left) and the skew of edgeR-identified phylotypes towards decreasing patterns (bottom). doi:10.1371/journal.pone.0055817.g007
This approach revealed only a few significantly changing taxa. It is now evident that the human GI microbiota is highly individualized [2]. Levels of inter-individual variation might therefore exceed community variation induced by diarrhea. Moreover, our pilot study encompassed a relatively small sample size (n = 22), which hampers stringent statistical assessment. Consequently, both factors may have obscured patterns in the microbial community driven by osmotic diarrhea. We thus employed an alternative strategy and assessed abundance changes at the level of individual OTUs with three different measures. Two biostatistical tools well established for the assessment of abundance data were employed, Metastats and edgeR. A relatively lenient significance threshold (P<0.05) was used in these analyses to account for the relatively small sample size. The third approach involved a scoring system with graphical data visualization (denoted Viz), wherein the abundance change of phylotypes (increasing and decreasing in response to diarrhea) was scored and presented within association networks created with Cytoscape.
In stool samples Metastats identified 72 significantly changing OTUs and edgeR 20 OTUs (Tables S11, S12). Viz identified 299 OTUs correlated with a respective reaction pattern (abundance change threshold ≥ ±0.05%), representing 9.78% of OTUs found in stool specimens. If Viz analysis was narrowed down to phylotypes showing a respective association pattern in at least 2 individuals, 61 phylotypes were evident (Table S13). In total, all three methods together identified 100 OTUs showing significant relative abundance variation or a respective abundance pattern (in at least 2 individuals) in relation to diarrhea (Table 1, Fig. 6A). Of these, 39 OTUs were identified by at least two methods simultaneously (Table 2). Community variation was readily presented by Viz; 37 out of 61 Viz-identified phylotypes (60.7%) were reconfirmed by Metastats and/or edgeR (Fig. 7, Fig. S2). In general, Bacteroidetes were associated with both increase and decrease patterns in response to diarrhea but often approached baseline values within the one-week post-treatment interval. Firmicutes either showed an increase pattern and thereafter approached baseline, or decreased due to diarrhea and remained decreased. Interestingly, several OTUs matching the genus Faecalibacterium including F. prausnitzii (e.g. OTU_206; BLAST identity 97%) experienced a relative increase in abundance due to diarrhea, which was mirrored by a simultaneous decrease of these taxa in the mucosa specimens.
In the mucosa sample data set, Metastats identified 87 significantly changing OTUs and edgeR 79 OTUs (Tables S14, S15). Viz identified 232 OTUs correlated with a respective reaction pattern (abundance change threshold > ±0.05%), representing 7.59% of OTUs found in mucosa specimens. If Viz analysis was narrowed down to phylotypes showing a respective association pattern in at least 2 individuals, 64 phylotypes were represented (Table S16). Given these definitions, all three methods together identified 183 significantly changing OTUs (Table 1, Fig. 6B). Only one OTU, a Pseudomonas sp. (OTU_1341; Pseudomonas putida, BLAST identity 100%), was detected by all three methods simultaneously; 46 OTUs were identified by at least two methods simultaneously (Table 3). Community variation was readily captured by Viz; 36 out of 64 Viz-identified phylotypes (56.3%) were reconfirmed by Metastats and/or edgeR (Fig. 8, Fig. S3). Interestingly, several Proteobacteria experienced a relative increase in response to diarrhea, revealed by Viz and confirmed mainly by Metastats, as did several lactic acid bacteria. Of the 46 OTUs identified by at least 2 methods simultaneously, 13 OTUs (28.3%) represented Proteobacteria (Table 3), among them several opportunistic pathogens including pseudomonads (e.g. OTU_1341, Pseudomonas putida, BLAST identity 100%) and the ε-proteobacterial taxon Arcobacter (e.g. OTU_596). There was a significant association of Proteobacteria with the increasing abundance pattern in Viz (P = 0.000371, Fisher's exact test) and a significant association of Bacteroidetes with the decreasing pattern (P = 0.000216, Fisher's exact test). As mentioned above, several OTUs matching Faecalibacterium including F. prausnitzii (e.g. OTU_206) experienced a relative abundance decrease in mucosal specimens (Fig. 8, Table 3).
Discussion
We used 16S rDNA-based community profiling to assess the influence of osmotic diarrhea on the composition of the human colonic microbiota. Our longitudinal study with simultaneously sampled stool and mucosa specimens enabled us to compare microbiota changes within and between individuals. We noted strikingly different community structures between stool and mucosa samples, wherein Bacteroidetes dominated stools and Firmicutes the mucosa. The dominance of Firmicutes on the mucosa is in accordance with several earlier reports [41,42]. Bacteria display different life-styles: they are either particle-associated or free-living ("planktonic") [17,43,44]. Both life-styles can be found in stools as well as on the mucosa, although in the latter the polysaccharide-rich mucus overlying the gut epithelium constitutes a biofilm-like community, which might favor a particle-associated life-style [45]. Niche colonization is determined by both partners of the mutualistic human/microbe relationship and is dependent on factors like the availability of nutrients or the capability to adhere [17]. Recent investigations comparing liquid-phase and particle-associated communities have also revealed that Firmicutes are dominant in the latter [46]. Interestingly, the two top-abundant phylotypes on the mucosa, which were also found to be unaltered ("stable") in response to diarrhea, matched Weissella confusa and Weissella cibaria (OTU_61 and OTU_24). Both taxa are exopolysaccharide (dextran) producers and show a strong adhesion capacity, e.g. to Caco-2 cells, which might explain their preferential colonization of the mucosal habitat and why they have been investigated as potential probiotics [47,48]. We also recorded a trend toward higher richness on the mucosa compared to stools, which is in accordance with earlier reports [42]. Since the mucosal surface represents the interface of host/microbe interactions, a higher phylotype richness ("biodiversity"), which enhances the robustness and stability of an ecosystem, might be an intrinsic safeguard against perturbations like invasion of pathogens [49,50]. Understanding the spatial organization of host-associated microbial communities thus poses an important challenge for future microbiota studies of the GI tract [21,51].
The human GI microbiota shows a high degree of interindividual variation at higher phylogenetic levels despite a uniform community structure at lower levels where the phyla Firmicutes and Bacteroidetes dominate [2,38]. This phenomenon was most prominent in stools, wherein inter-individual differences exceeded any intra-individual variation. In the mucosa samples the degree of inter-individual variation was generally lower, despite a trend towards higher richness. For instance, in mucosa specimens more phylotypes were shared between individuals than in stools. Importantly, diarrhea led to an equalization of the mucosal microbiotas, which clustered together in the PCA and showed an increased phylotype overlap at time-point 3. We induced diarrhea with PEG, a mixture of non-absorbable, non-metabolizable polymers acting as a pure osmotic agent "binding" water in the gut lumen [52]. This led to "wash-out" and decreased phylotype richness in both habitats as described by others [53,54]. In various inflammatory and diarrheal GI diseases, reduced phylotype richness has been reported, including AAD, C. difficile colitis, viral enterocolitis, IBD and IBS [5,7-10,20,55]. Reduced richness can be exploited by (opportunistic) pathogens that colonize niches otherwise occupied by the endogenous microbiota [50]. In that regard antibiotic treatment represents a paradigm condition wherein certain groups of bacteria are specifically depleted [55].
Our study indicates that reduced richness per se does not necessarily reflect or lead to pathology but is in turn a consequence of the diarrhea prevalent in many GI diseases. Microbial communities are complex adaptive systems, in which patterns at higher levels emerge from localized interactions and selection processes acting at lower levels [56]. To understand the basic reaction patterns induced by osmotic diarrhea, we assessed the relative abundance change of individual phylotypes. To account for the high level of inter-individual variation of the GI microbiota given our relatively small sample size, we rigorously tested our data set with different approaches. These measures included two established biostatistical tools (Metastats and edgeR) and a scoring system with graphical representation of the results (Viz). These analyses revealed several significantly changing phylotypes but showed reduced congruence between methods. Interestingly, the majority of phylotypes detected with Viz (in at least two individuals simultaneously) were confirmed by at least one biostatistical method, showing the usefulness of the scoring method. It is important to note that all three methods identified several low-abundant, significantly changing taxa (i.e. OTUs with about 10 reads, representing just about 0.05% of the whole community, given that about 20,000 reads were generated per sample). However, reliable detection of these low-abundant taxa is highly dependent on the sampling effort (sequencing depth), which can hardly reach completeness given the large number of microbes (about 10^13-10^14) colonizing our gut [38]. Thus some of the identified low-abundant OTUs might represent artifacts because of sampling bias. Removal of these low-abundant OTUs (e.g. those with ≤10 reads) prior to statistical assessment would be a reasonable strategy that might increase the accuracy of the analysis but could also lead to loss of relevant information [57-61].
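As an illustration of the pre-filtering strategy mentioned here, a minimal sketch follows; the 10-read threshold comes from the text, but applying it per sample (rather than across the data set) and the toy count matrix are assumptions.

```python
# Drop OTU columns that never exceed a small read-count threshold.
import numpy as np

def filter_low_abundance(counts, min_reads=10):
    """Keep OTUs with more than `min_reads` reads in at least one sample."""
    keep = (counts > min_reads).any(axis=0)
    return counts[:, keep]

counts = np.array([[0, 3, 120],
                   [1, 7,  98]])
print(filter_low_abundance(counts).shape)  # only the third OTU survives
```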
To overcome the incongruence of the applied methods, we narrowed the findings down to phylotypes that were detected by at least two different methods simultaneously. In this way, we identified several Bacteroidetes and Firmicutes experiencing a relative increase or decrease in stools in response to diarrhea. On the mucosa Bacteroidetes showed a significant association with decreasing relative abundance. It is noteworthy that we observed a significantly increased fraction of Proteobacteria experiencing a rise in relative abundance in the mucosa specimens due to diarrhea. Among them were several opportunistic pathogens including Pseudomonas and Acinetobacter (e.g. OTU_1341, OTU_101) as well as the ε-proteobacterial taxon Arcobacter (e.g. OTU_596). Several lactic acid bacteria (e.g. Lactococcus) also increased on the mucosa during diarrhea, and may therefore represent interesting candidates for probiotics in the setting of diarrheal disease [62]. Interestingly, we also observed a relative increase in taxa matching Faecalibacterium including F. prausnitzii (e.g. OTU_206) in stools, which was mirrored by a simultaneous decrease in the mucosa specimens. This observation warrants further investigation since this anti-inflammatory GI bacterium is reported to be decreased in IBD [63,64]. The finding that Proteobacteria increase in response to diarrhea has been reported in several diarrheal and inflammatory GI diseases including IBD [8,11,12,65-68]. Proteobacteria are usually considered to be generalists able to colonize various habitats with diverse resources. For example, we found that OTU_1341 matching Pseudomonas putida significantly increased due to diarrhea; this pathogen shows genomic adaptation to diverse environments but can also cause severe diseases in humans [69-72]. Since diarrhea decreases richness, as was reflected by a significant drop in several Bacteroidetes and Firmicutes in our study, it is reasonable to speculate that Proteobacteria can occupy and repopulate these depleted niches more efficiently. It thus seems that diarrhea per se, irrespective of its etiology, can select for this special community type with increased Proteobacteria.

Figure 8 (caption; beginning truncated in the source): ...Only OTUs are displayed that were assigned to a respective reaction pattern in at least two individuals (corresponding to thin lines). The width of lines correlates with the number of individuals in whom an OTU was assigned to a specific pattern. Size of nodes correlates with the mean relative abundance change comparing pre-diarrhea to diarrhea samples. OTUs are colored according to their phylum membership and named according to the taxonomic rank conferred by the RDP classifier (80% identity threshold). M denotes significantly changed according to Metastats analysis (P<0.05); E denotes significantly changed according to edgeR analysis (P<0.05). OTUs identified by both biostatistical methods are highlighted with a bold outline. Note the increase of various Proteobacteria, including opportunistic pathogens (e.g. Pseudomonas, Acinetobacter, Arcobacter), and also an increase of Firmicutes due to diarrhea (right); Bacteroidetes generally occurred together with Faecalibacterium, which was mirrored by an increase in stools. Note the skew of edgeR-identified OTUs towards the decreasing pattern (left) and of Metastats-identified OTUs towards the increasing pattern (right). doi:10.1371/journal.pone.0055817.g008
It is therefore important to note that these changes may not be specific for diseases like IBD but may represent an epiphenomenon of the wash-out effect due to diarrhea. Moreover, the efficient colonization capacity of Proteobacteria might explain the effectiveness of strains like E. coli Nissle 1917 used for the therapy of IBD [73]. Note, however, that we assessed the relative abundances of taxa within samples and their relative changes between samples; these do not necessarily translate into absolute changes of taxa, which would require further assessment of the specimens (e.g. by means of qPCR).
Capturing the true microbial representation within a sample by cultivation-independent techniques is hampered by various technical challenges. Specimen handling, DNA extraction, PCR amplification and sequencing are all causes of bias [57,59,74-77]. For instance, we compared stool and biopsy samples, which display considerable differences in their composition, requiring individual protocols for efficient cell lysis and DNA release from samples. To account for the "rich" matrix composition of stools, we utilized a recommended boiling step prior to DNA extraction from feces, which was not used for biopsies. Several reports emphasized the influence of DNA extraction methods on the outcome of PCR-based microbial community surveys [74-77]. Thus we cannot exclude that the different extraction protocols used in our study influenced our findings. In addition to specimen work-up, template concentration, primer sequences and PCR conditions including PCR cycle numbers also influence the assessed community structure [57,59,75,78]. The different sample types (i.e. stools and biopsies) in our study displayed different loads of 16S targets, requiring sample-type-specific adjustment of PCR cycle numbers (22 and 35 cycles for stool and mucosa samples, respectively) to prevent PCR substrate exhaustion and to approach a similar end-point of PCR within the linear range of amplification. Increased PCR cycle numbers are reported to skew diversity measures, leading to an underestimation of the diversity present in the sample [79]. Since we noted a trend towards increased richness in the mucosa samples compared to stools, albeit not statistically significant, we speculate that the PCR cycle trade-off in our study might have led to an underestimation of richness in the mucosa samples. Optimizing the technological accuracy of human microbiome studies remains a major challenge. Inconsistencies may remain even when up-to-date, highly accurate technology is combined with stringent data analysis, as in our study [57,80].
Our longitudinal study has revealed several important findings regarding the human GI microbiota and its response to diarrhea. (I) We found that stools and the mucosa represent strikingly different habitats with a different community structure and a different response to stressors like diarrhea. For this reason, studies investigating changes in the GI microbiota in association with specific diseases need to consider that the fecal microbiota does not readily reflect the mucosal community. (II) The finding that Proteobacteria relatively increase in response to diarrhea on the mucosa is suggestive of a basic principle of the community in this niche regardless of the cause of diarrhea. When the mucosa is severely affected as in IBD, nutrients like iron derived from blood are available in excess for these efficient colonizers [81]. In turn these bacteria can utilize these resources, i.e. they have developed siderophore uptake systems for iron capture, and so can experience a growth advantage [12,67,82]. This phenomenon might then lead to the persistent community change (dysbiosis) noted in IBD, which in turn perpetuates chronic inflammation due to the pro-inflammatory behavior of these bacteria. (III) Our findings show definite changes of the GI microbiota in response to PEG treatment, which is used for bowel cleansing prior to endoscopy. Studies using colonoscopy samples for microbiota analysis need to bear this in mind. (IV) We have shown the usefulness of small-scale longitudinal clinical studies to find relevant microbial community patterns of variation, if data are assessed stringently. In this regard our newly described scoring approach with visualization (Viz) is a valuable tool; since it readily illustrates the reaction of the microbiota as a whole, patterns can be caught visually by the investigator.
In summary, our study provides proof of principle that manipulation of basic functions of the human GI tract enables the detection of relevant microbial community changes, and it highlights the importance of such studies investigating basic (patho-)physiological effects on the GI microbiota. | v2